[
{
"msg_contents": "INTENTION\n\nInspired by the effort to integrate JIT for executor acceleration I thought selected simple algorithms working with array-oriented data should be drastically accelerated by using SIMD instructions on modern hardware.\n\nI want to introduce this style of programming with the example of hex_encode:\n- operates on arrays (bytea)\n- simple algorithm\n- in some situations partially limiting the performance (e.g pg_dump)\n\nIMPLEMENTATION GUIDELINES\n\nThe main goal ist to accelerate common cases on the most common hardware by exploiting all the resources the hardware delivers.\nThe following guidelines took me to a first implementation:\n\n- restrict on 64 -bit architectures\n\tThese are the dominant server architectures, have the necessary data formats and corresponding registers and operating instructions\n- start with Intel x86-64 SIMD instructions:\n\tThis is the vastly most used platform, available for development and in practical use\n- don’t restrict the concept to only Intel x86-64, so that later people with more experience on other architectures can jump in and implement comparable algorithms\n- fallback to the established implementation in postgres in non appropriate cases or on user request (GUC)\n- implementation of leaf function/procedures in assembly language\n\tThese consist mostly of a central loop without calling subroutines or doing additionally branching\n\n- coding for maximum hardware usage instead of elegant programming\n\tOnce tested, the simple algorithm works as advertised and is used to replace most execution parts of the standard implementaion in C\n\n- isolated footprint by integrating it only in the specific subroutine (here hex-encode)\n\tThis ensures that the requirements for fast execution are met (e.g. 
buffer sizes) and no repeated checks are needed as in a library use case.\n\n- trying to keep both vector execution ports always doing useful work by avoiding waits for latencies\n\n- trying to access memory in linear fashion (reading from the input buffer, writing to the output buffer) to avoid internal cache problems\n- focus optimization on the most advanced SIMD instruction set: AVX512\n\tThis provides the most advanced instructions and quite a lot of large registers to aid in latency hiding\n\n- if possible provide fallback implementations for older SIMD standards (e.g. AVX2 or SSE2)\n\tThis is useful on many older server and client processors, but due to their smaller number of registers, latency hiding and full execution queues cannot be fully achieved.\n\n\nIMPLEMENTATION DETAILS\n\n- The loops implementing the algorithm are written in NASM assembler:\n\tNASM is actively maintained, has many output formats, follows the Intel style, has all current instructions implemented and is fast.\n\n- The loops are mostly independent of the operating system, so all OSes for which NASM has an object output format are supported:\n\tThis includes Linux and Windows as the most important ones\n\n- The algorithms use advanced techniques (constant and temporary registers) to avoid most unnecessary memory accesses:\n\tThe assembly implementation gives you full control over the registers (unlike intrinsics) \n\n- Multiple dependency chains work interleaved to minimize latencies:\n\tCoding is often interspersed and uses almost all available registers.\n\n- Some instructions (moves, zeroing) are executed outside the processor execution ports:\n\tThese don’t consume execution cycles on a port, but their latency has to be considered.\n\n- Some vector instructions (multiply-add) have latencies of 5, for example:\n\tThis means that after the instruction is issued, the processor has to wait 5 cycles until the result can be used in the same dependency chain. 
To avoid this and keep all vector execution ports (p0 and p5) busy, you have to have 9 other instructions in between doing work on other streams of the algorithm to maximize hardware usage and overall performance.\n\n- All loops are implemented as separate C-callable functions (according to the OS calling convention):\n\tThey are all leaf functions, calling no other subroutines.\n\n- The decision which implementation is chosen is made at the caller side by a special dispatcher routine:\n\tThe caller handles the architectural capabilities (instruction sets available) and knows the required work: there is often a suitable minimum amount of work required for efficiently calling a provided implementation.\n\n- Loops should run at least 2-4 times to compensate for initialization overhead:\n\tThis implies a certain minimum work count based on the specific SIMD implementation\n\n- The loops terminate after detecting an error (e.g. wrong input data) and return the successfully completed amount of work:\n\tThe standard linear implementation takes over with the already established error handling.\n\n- The loops work optimally with some extra output buffer space at the end to be able to overshoot in the last round:\n\tNonetheless the correct amount of work is returned to the caller and a vector size of output buffer following the real result is zeroed out (currently disabled!)\n\n- The loop may preload some data after the input buffer but ensures that the following page boundary is never crossed to avoid any access violation:\n\tThis does no harm to the memory system because the output buffer has a supplemental buffer at the end, but this could be changed to leaving the tail handling to the standard implementation if deemed unsupportable (as for now).\n\n\n(to be continued...)\n\n",
"msg_date": "Fri, 31 Dec 2021 15:31:35 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "Introducing PgVA aka PostgresVectorAcceleration using SIMD vector\n instructions starting with hex_encode"
},
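The byte-at-a-time fallback that the accelerated loops hand over to can be sketched in C roughly as follows; this is a minimal illustration, and the function name and lookup table are assumptions, not the exact encode.c code:

```c
#include <stddef.h>

/* lookup table mapping a 4-bit nibble to its lowercase hex digit */
static const char hextbl[] = "0123456789abcdef";

/* Encode len bytes from src into 2*len ASCII hex characters at dst;
 * returns the number of output bytes written.  One input byte becomes
 * two output characters: high nibble first, then low nibble. */
size_t
hex_encode_scalar(const unsigned char *src, size_t len, char *dst)
{
	for (size_t i = 0; i < len; i++)
	{
		dst[2 * i] = hextbl[src[i] >> 4];		/* high nibble */
		dst[2 * i + 1] = hextbl[src[i] & 0x0f];	/* low nibble  */
	}
	return len * 2;
}
```

This scalar loop is also the shape of the "standard linear implementation" that handles error cases and leftover tails in the scheme above.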
{
"msg_contents": "(continued)\n\nPERFORMANCE\n\nFirst Level: execution units (focused on AVX512)\n\nEvery modern processor has at least 2 vector execution units (p1 and p5 on Intel) which execute a different set of instructions in a pipelined fashion. Some simple classes of instructions (logical, arithmetic) can be executed on both ports. The results of a short operation are available in the next cycle for another instruction, which in its total form a dependancy chain.\nLonger executions provide their results only after some more clock cycles, so the latency increases from at least 1 to higher numbers.\n\nThis constellation implies that a single dependancy chain never can exhaust the full processor capabilities. To fight these latencies multiple interleaving dependancy chains should be used.\nInstructions with long latencies (e.g. memory accesses) should be issued long in advance before using their results.\n\nIn most cases only the two vector execution ports are the ultimate bottleneck, since the processor can execute memory reads, memory writes, scalar instructions and branches on other specialized units or avoid them totaly (register zeroing).\n\nThe hex_encode algorithm executes 5 instructions (uops to be correct) on p5, 3 on p1 (or arbitrary) and 1 load and 2 store uops.\n\nAssuming a processor with 2.5 GHz (for simplicity) we have 0.5 billion vectors processed per second, which gives 64bytes*0.5 billion=32 GB maximum processed per second.\nIn normal database units this is really a HUGE number (and it is using only ONE core!).\nBut in this case the doubled amount of results inc comparison to the source has to be written to memory (64GB/sec), which exceeds the possibilities of normal desktop processors.\n\nAs another example I may present the checksum algorithm, which is only read-intensive and uses 3 uops on p5 as the bottleneck path/vector.\n\nOn a 3GHz processor checksum can process 64 GB per sec and core.\n\nIt is interesting to check the performance levels 
on the upcoming new generation of XEON (Sapphire Rapids in 2022), which will have much increased memory bandwidth (8 channels DDR5, up to 12 on AMD Genoa), which will have some special models with HBM2 memory stacks and which has 2 execution units for stores to match the read capacity of also 2 instructions/cycle.\n\n\nSecond Level: alignment, caches and memory\n\nOlder processor generations had a VERY big penalty for page-split accesses to memory, which will occur on vector data when they are not naturally aligned.\n\nGiven future developments I would consider even 128-byte or 256-byte alignment, since it may be possible to get 1024- or 2048-bit vectors (already specified in the ARM architecture).\n\nOn the level of caches one must consider „cache thrashing“ when the accesses to the caches exceed the associativity limit of a cache. In some algorithms (highly parallel checksum calculations with a copy function) you could overload a single cacheline address in the case of too many parallel accesses. In these cases you can start the algorithm (on the fixed-size blocks) a little bit delayed, so that some algorithm chains access vector n and some others vector n+1 interleaved in the execution loop.\n\nMemory should be accessed in natural order to maximize the use of processor cache prefetching.\n\nAll accesses should be optimized to use the registers where possible: long latencies of memory accesses and some initial instructions can be combined in early-issued instructions used only much later in time.\n\nThe memory latencies lead to data preloading, where the data for the next round of the loop are loaded at the first possible moment when target registers are available. 
This is crucial for latency fighting in many algorithms.\n\n\nThird level: Internal data structures\n\nVector operations work best with array-oriented structures (here a bytea datatype or a shared buffer block for checksum calculation).\nPacking individual scalar data (32/64-bit scalars) into vectors is much slower and really stresses the memory subsystem.\n\nThis implies a focus more on „struct of arrays“ than „array of structures“, which seems difficult in postgres due to its established structure and long heritage.\n\nBy exploring the code more deeply (than my knowledge so far) it should be easy to identify many more places for simple algorithms working on array structures.\n\n\nFourth Level: Vertical integration\n\nThe base of most algorithms is loading the data into registers, doing some algorithmic calculations and writing it out.\nSubsequent steps are coded in another layer (e.g. copying to storage, trimming the data for output etc.). This often requires reading the data again and doing some other transformations.\n\nVertical integration combines some simple steps for better memory utilization.\n\nAs an example I think of pg_dump dumping a huge amount of bytea data (not uncommon in real applications). Most of these data are in toast tables, often uncompressed due to their inherent structure. The dump must read the toast pages into memory, decompose the page, hexdump the content, put the result in an output buffer and trigger the I/O. By integrating all these steps into one, big performance improvements can be achieved (but naturally not here in my first implementation!).\n\n\nFifth Level: Pooling\n\nSome algorithms are so fast that they need to work on multiple data streams at once to fully utilize a processor core. One example is checksum calculation.\nTo saturate the processor capabilities with large vectors you have to do the checksum on multiple pages in parallel (e.g. 
2, 4 or 8).\nThis occurs often in real life (loading shared buffers into memory, flushing shared buffers to disk, precaching the shared buffers etc.).\n\nSome pooling (collecting up to 16 shared buffer blocks in a pool) allows fast checksumming for blocks which are now processed in a serial fashion.\nThis requires adaptation of some isolated parts of the postgres code base and turns a serial procedure into parallel processing for objects treated in the same fashion at the (nearly) same time.\n\n\nBENCHMARKS:\n\nI have included a little benchmark program. It is not very sophisticated or fancy, but allows estimating the performance of commonly used processors.\n\nIt requires nasm to be installed/downloaded (on linux or Windows).\n\nIt executes the hexdump algorithm one million times on the binary of nasm (2.15.05 current version).\n\nThe benchmark simply runs (for about 1000 sec); the user has to time it himself.\nThe binary of nasm (used as the benchmark source data) is included as the source data in \n\nHEX_BENCH_DATA_1300KB.asm\n\n(please adjust the location where you downloaded nasm.exe on windows).\n\nThe binary (of each architecture) has a size of 1356 KB on windows and 1718 KB on linux.\n\nThe commands to build the binary are (also found in hex_bench.asm)\n\non Windows:\n\n:: commands to build on Windows (nasm and golink in the path)\nnasm -f WIN64 -g hex_bench.asm -l hex_bench.lis\nnasm -f WIN64 -g hex_x86_64.asm -l hex_x86_64.lis\nnasm -f WIN64 -g HEX_BENCH_DATA_1300KB.asm\ngolink /console hex_bench.obj hex_x86_64.obj HEX_BENCH_DATA_1300KB.obj\n\nGolink is a small utility linker on Windows found at:\n\nhttp://www.godevtool.com/\n\non Linux:\n\n# commands to build on LINUX\nnasm -f elf64 -g hex_bench.asm -l hex_bench.lis\nnasm -f elf64 -g hex_x86_64.asm -l hex_x86_64.lis\nnasm -f elf64 -g HEX_BENCH_DATA_1300KB.asm\nld -o hex_bench hex_bench.o hex_x86_64.o HEX_BENCH_DATA_1300KB.o\n\nThe selected hex_encode_routine is hardcoded to 
hex_encode_avx512bw (please choose another implementation on processors not supporting AVX512 by changing the comments in hex_bench.asm)\n\nThe best result I could achieve was roughly 95 seconds for 1 million dumps of 1718 KB on a Tigerlake laptop using AVX512. This gives a source-hexdumping rate of about 18 GB/s on a single core!\n\nIn another run with postgres, the time to hexdump about half a million tuples with a bytea column yielding about 6 GB of output was reduced from about 68 seconds to 60 seconds, which clearly shows the postgres overhead for executing the copy command on such a data set.\n\nSQL> Copy col_bytearray from my_table to 'N:/ZZ_SAV/my_hexdump.sql';\n\n(This was on a customer's dataset not reproduced here).\n\n\n\n\nPOSTGRES INTEGRATION (HELP NEEDED)\n\nThe architecture-dependent introduction of vector routines requires some integration effort into Postgres.\n\nI have designed a concept for easy integration and extensibility, but some concrete steps need support from others due to my restricted knowledge of the whole system.\n\n(For now this global configuration sits at the top of encode.c, but it certainly must be moved to a more adequate place for initialization).\n\nThe main concept tries to match the CPU capabilities with the requirements of a certain implementation. 
This is not only for hex_encode but for an arbitrary number of algorithms implemented in an accelerated version (here SIMD vectors, but others may be possible too).\n\nWe have a global array called valid_impl_id_arr indicating all the implementations capable of running on the current CPU.\n\nAn implementor defines an algorithm and gets an invariant ID (here ALGORITHM_ID_HEX_ENCODE, which should be kept in a global header).\n\nThese IDs are valid for all architectures, even if no accelerated version exists yet.\n\nIn internal arrays (see hex_x86_64.asm) all possible implementations are stored along with their requirements (CPU features, minimum length etc.).\n\nIn the initialization phase of the running executable (backend, or frontend in the future) the current cpu_capabilities are checked once and the maximum valid implementation index is stored in the globally visible valid_impl_id_arr.\n\nThe highest requirements have the highest index, so the capabilities are checked in decreasing index order.\n\nFor example (hex_encode): We have 4 implementations, but on an AVX2-only machine valid_impl_id_arr[ALGORITHM_ID_HEX_ENCODE] is only set to 3, because the requirements of AVX512BW are not met. Index zero always indicates that the algorithm has no valid implementation or the CPU has no sufficient capabilities.\n\nTo disable an algorithm from being accelerated at all, masking by an algorithm_disable_mask is provided, which is normally all zero but could be set to disable certain algorithms by ORing (1<<ALGORITHM_ID_constants). This emergency disablement should be kept in a GUC and applied only at image initialization time.\n\nThe CPU capabilities are determined by cpuid instructions (on x86-64) and defined in cpu_capabilties_x86_64.asm.\n\nBut this scheme is not restricted to the Intel ISA only. 
Other hardware architectures (most probably ARM, POWER or RISCV) are identified by different CPU_IS_ARCH_xxx constants (numbers from 1-7) and implementers get the specific CPU capabilities in their own fashion, which may be totally different from the Intel ISA.\n\nSo every CPU has its cpu_capabilities_unmasked value as a unique int64 value.\nThis value is normally copied 1:1 to the global cpu_capabilities, but for testing or in an emergency it is masked by a configuration mask simulating a certain CPU. This allows a developer to test the implementations for lower-class cpus without the need for the specific hardware.\nThis cpu_capabilities_mask defaults to -1 (all bits 1) and should also be derived from a GUC.\n\nFor up to 63 algorithms we need 2 int64 GUC values to selectively disable certain parts of the accelerated implementation.\n\nHelp is greatly appreciated to code these concepts with GUCs and put the globals and their initialization in the right place.\n\n\nTOOL CHAIN (HELP NEEDED)\n\nOn x86-64 I use nasm (Netwide assembler) because it's well maintained, fast, instruction-complete and covers multiple object formats.\n\nThe assembler routines should work on most x86-64 operating systems, but for the moment only elf64 and WIN64 output formats are supported.\n\nThe standard calling convention is followed, mostly in the LINUX style; on Windows the parameters are moved around accordingly. 
The same assembler source code can be used on both platforms.\n\nWebsite for downloading the win binary / rpm repo:\n\nhttps://nasm.us/\n\nI have updated the makefile to include the nasm command and the nasm flags, but I need help to make these based on configure.\n\nI also have no knowledge of other operating systems (MAC-OS etc.)\n\nThe calling conventions can be easily adapted if they differ, but somebody else should jump in for testing.\n\nIf absolutely needed, nasm allows cross-assembling for a different platform, so the objects could be provided in a library for these cases.\n\nFor Windows the nasm support must be integrated into the generation of the *.vcxproj for Visual Studio.\n\nI found the VSNASM project on github which explains how to integrate Nasm into VisualStudio.\n\nhttps://github.com/ShiftMediaProject/VSNASM\n\nBut I really need help from an expert to integrate it into the perl building process.\n\nMy internal development on windows uses manual assembly/linking so far.\n\n\nI would much appreciate it if someone else could jump in for a patch for configure integration and another patch for .vcxproj integration.\n\n\nOUTLOOK\n\nOnce the toolchain and global postgres integration are done (these are totally new for me) this kind of vector (or matrix perhaps) acceleration is quite easy.\n\nBy identifying simple algorithms and using some architecture knowledge of the chosen platform, a new implementation is easily coded and debugged because the complexity is often limited (performance optimization may be a challenge).\n\nThe integration into postgres remains quite local and is not very invasive.\n\nThe acceleration for the specific algorithm is really huge, though it shifts the focus to other bottlenecks in the current code base. 
This makes the base algorithms almost disappear in CPU usage and extends the scale to the dimensions of terabytes.\n\nThe whole architecture is thereby not limited to the Intel ISA (even if this is certainly the most common real-world use case) and can be easily adapted to other hardware architectures.\n\nI have some other algorithms already in the pipeline, foremost hex_decode (which must be debugged and checked for error handling), but during implementation I stumbled over base64_encode/decode, which also has its implementation coded.\n\nI only want to start with a first project (hex_encode/hex_decode) targeting PG15 if possible and approved by the community. Then I’ll try to polish/debug/document the whole project to finish it to a committable state.\n\nThere is much room for other implementations (checksum verification/setting, aggregation, numeric datatype, merging, generate_series, integer and floating point output …) which could be addressed later on.\n\nDue to my different background (not really a C hacker) I need some help from experts in specific areas. For coding Intel vector assembly for the project I can provide some help with tips and revisions.\n\nI have CC-included some people of the project who offered help or were already involved in this coding area.\n\nThank you all very much for your patience with this new project\n\nHans Buschmann",
"msg_date": "Fri, 31 Dec 2021 15:34:55 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "AW: Introducing PgVA aka PostgresVectorAcceleration using SIMD vector\n instructions starting with hex_encode"
},
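The dispatch concept described above (per-algorithm IDs, a requirements table checked in decreasing index order, and a globally visible valid_impl_id_arr filled once at startup) can be sketched in C roughly as follows; all names, feature bits and minimum lengths here are illustrative assumptions, not the patch's actual definitions:

```c
#include <stddef.h>

/* invariant algorithm IDs, kept in a global header */
enum { ALGORITHM_ID_HEX_ENCODE = 0, NUM_ALGORITHM_IDS };

/* hypothetical CPU feature bits */
#define CAP_SSE2      (1 << 0)
#define CAP_SSSE3     (1 << 1)
#define CAP_AVX2      (1 << 2)
#define CAP_AVX512BW  (1 << 3)

typedef struct
{
	int    required_caps;	/* CPU features this implementation needs */
	size_t min_len;			/* minimum work to amortize the call overhead */
} impl_requirements;

/* index 0 means "no accelerated implementation"; 4 implementations follow,
 * with the highest requirements at the highest index */
static const impl_requirements hex_encode_impls[] = {
	{0, 0},
	{CAP_SSE2, 64},
	{CAP_SSE2 | CAP_SSSE3, 96},
	{CAP_AVX2, 128},
	{CAP_AVX512BW, 256},
};

static int valid_impl_id_arr[NUM_ALGORITHM_IDS];

/* run once at startup: walk from the most demanding implementation down
 * and remember the first one whose required features the CPU provides */
void
init_impl_ids(int cpu_capabilities)
{
	int			i;

	for (i = (int) (sizeof(hex_encode_impls) / sizeof(hex_encode_impls[0])) - 1;
		 i > 0; i--)
		if ((hex_encode_impls[i].required_caps & cpu_capabilities)
			== hex_encode_impls[i].required_caps)
			break;
	valid_impl_id_arr[ALGORITHM_ID_HEX_ENCODE] = i;
}

/* per-call decision at the caller side: accelerate only when the CPU
 * supports it and the amount of work clears the minimum threshold */
int
choose_hex_encode_impl(size_t len)
{
	int			id = valid_impl_id_arr[ALGORITHM_ID_HEX_ENCODE];

	if (id > 0 && len >= hex_encode_impls[id].min_len)
		return id;
	return 0;				/* fall back to the standard C loop */
}
```

On an AVX2-only machine this sketch reproduces the example from the message: the AVX512BW entry fails the capability check, so the stored implementation index is 3, and short inputs still take the fallback path.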
{
"msg_contents": "On Fri, Dec 31, 2021 at 9:32 AM Hans Buschmann <buschmann@nidsa.net> wrote:\n\n> Inspired by the effort to integrate JIT for executor acceleration I thought selected simple algorithms working with array-oriented data should be drastically accelerated by using SIMD instructions on modern hardware.\n\nHi Hans,\n\nI have experimented with SIMD within Postgres last year, so I have\nsome idea of the benefits and difficulties. I do think we can profit\nfrom SIMD more, but we must be very careful to manage complexity and\nmaximize usefulness. Hopefully I can offer some advice.\n\n> - restrict on 64 -bit architectures\n> These are the dominant server architectures, have the necessary data formats and corresponding registers and operating instructions\n> - start with Intel x86-64 SIMD instructions:\n> This is the vastly most used platform, available for development and in practical use\n> - don’t restrict the concept to only Intel x86-64, so that later people with more experience on other architectures can jump in and implement comparable algorithms\n> - fallback to the established implementation in postgres in non appropriate cases or on user request (GUC)\n\nThese are all reasonable goals, except GUCs are the wrong place to\nchoose hardware implementations -- it should Just Work.\n\n> - coding for maximum hardware usage instead of elegant programming\n> Once tested, the simple algorithm works as advertised and is used to replace most execution parts of the standard implementaion in C\n\n-1\n\nMaintaining good programming style is a key goal of the project. There\nare certainly non-elegant parts in the code, but that has a cost and\nwe must consider tradeoffs carefully. I have read some of the\noptimized code in glibc and it is not fun. 
They at least know they are\ntargeting one OS and one compiler -- we don't have that luxury.\n\n> - focus optimization for the most advanced SIMD instruction set: AVX512\n> This provides the most advanced instructions and quite a lot of large registers to aid in latency avoiding\n\n-1\n\nAVX512 is a hodge-podge of different instruction subsets and is\nentirely lacking on some recent Intel server hardware. Also only\navailable from a single chipmaker thus far.\n\n> - The loops implementing the algorithm are written in NASM assembler:\n> NASM is actively maintained, has many output formats, follows the Intel style, has all current instrucions implemented and is fast.\n\n> - The loops are mostly independent of operating systems, so all OS’s basing on a NASM obj output format are supported:\n> This includes Linux and Windows as the most important ones\n\n> - The algorithms use advanced techniques (constant and temporary registers) to avoid most unnessary memory accesses:\n> The assembly implementation gives you the full control over the registers (unlike intrinsics)\n\nOn the other hand, intrinsics are easy to integrate into a C codebase\nand relieve us from thinking about object formats. A performance\nfeature that happens to work only on common OS's is probably fine from\nthe user point of view, but if we have to add a lot of extra stuff to\nmake it work at all, that's not a good trade off. \"Mostly independent\"\nof the OS is not acceptable -- we shouldn't have to think about the OS\nat all when the coding does not involve OS facilities (I/O, processes,\netc).\n\n> As an example I think of pg_dump to dump a huge amount of bytea data (not uncommon in real applications). Most of these data are in toast tables, often uncompressed due to their inherant structure. The dump must read the toast pages into memory, decompose the page, hexdump the content, put the result in an output buffer and trigger the I/O. 
By integrating all these steps into one big performance improvements can be achieved (but naturally not here in my first implementation!).\n\nSeems like a reasonable area to work on, but I've never measured.\n\n> The best result I could achieve was roughly 95 seconds for 1 Million dumps of 1718 KB on a Tigerlake laptop using AVX512. This gives about 18 GB/s source-hexdumping rate on a single core!\n>\n> In another run with postgres the time to hexdump about half a million tuples with a bytea column yeilding about 6 GB of output reduced the time from about 68 seconds to 60 seconds, which clearly shows the postgres overhead for executing the copy command on such a data set.\n\nI don't quite follow -- is this patched vs. unpatched Postgres? I'm\nnot sure what's been demonstrated.\n\n> The assembler routines should work on most x86-64 operating systems, but for the moment only elf64 and WIN64 output formats are supported.\n>\n> The standard calling convention is followed mostly in the LINUX style, on Windows the parameters are moved around accordingly. The same assembler-source-code can be used on both platforms.\n\n> I have updated the makefile to include the nasm command and the nasm flags, but I need help to make these based on configure.\n>\n> I also have no knowledge on other operating systems (MAC-OS etc.)\n>\n> The calling conventions can be easily adopted if they differ but somebody else should jump in for testing.\n\nAs I implied earlier, this is way too low-level. 
If we have to worry\nabout obj formats and calling conventions, we'd better be getting\nsomething *really* amazing in return.\n\n> But I really need help by an expert to integrate it in the perl building process.\n\n> I would much appreciate if someone else could jump in for a patch to configure-integration and another patch for .vcxobj integration.\n\nIt's a bit presumptuous to enlist others for specific help without\ngeneral agreement on the design, especially on the most tedious parts.\nAlso, here's a general engineering tip: If the non-fun part is too\ncomplex for you to figure out, that might indicate the fun part is too\nambitious. I suggest starting with a simple patch with SSE2 (always\npresent on x86-64) intrinsics, one that anyone can apply and test\nwithout any additional work. Then we can evaluate if the speed-up in\nthe hex encoding case is worth some additional complexity. As part of\nthat work, it might be good to see if some portable improved algorithm\nis already available somewhere.\n\n> There is much room for other implementations (checksum verification/setting, aggregation, numeric datatype, merging, generate_series, integer and floating point output …) which could be addressed later on.\n\nFloat output has already been pretty well optimized. CRC checksums\nalready have a hardware implementation on x86 and Arm. I don't know of\nany practical workload where generate_series() is too slow.\nAggregation is an interesting case, but I'm not sure what the current\nbottlenecks are.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Jan 2022 12:34:03 -0600",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Introducing PgVA aka PostgresVectorAcceleration using SIMD vector\n instructions starting with hex_encode"
}
]
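The "simple patch with SSE2 intrinsics" suggested in the last message could be sketched along these lines: hex-encoding 16 input bytes per iteration with SSE2 only, using a compare-and-add in place of the SSSE3 pshufb nibble lookup; the function name and tail handling are illustrative assumptions, not proposed patch code:

```c
#include <emmintrin.h>	/* SSE2 intrinsics, always available on x86-64 */
#include <stddef.h>

/* Hex-encode len bytes from src into 2*len characters at dst,
 * 16 input bytes per vector iteration, scalar loop for the tail. */
size_t
hex_encode_sse2(const unsigned char *src, size_t len, char *dst)
{
	const __m128i lo_mask = _mm_set1_epi8(0x0f);
	const __m128i nine = _mm_set1_epi8(9);
	const __m128i zero_c = _mm_set1_epi8('0');
	const __m128i gap = _mm_set1_epi8('a' - '0' - 10);	/* 39 */
	size_t		i = 0;

	for (; i + 16 <= len; i += 16)
	{
		__m128i		v = _mm_loadu_si128((const __m128i *) (src + i));
		__m128i		hi = _mm_and_si128(_mm_srli_epi16(v, 4), lo_mask);
		__m128i		lo = _mm_and_si128(v, lo_mask);

		/* nibble -> ASCII: n + '0', plus 39 more when n > 9 ('a'..'f') */
		hi = _mm_add_epi8(_mm_add_epi8(hi, zero_c),
						  _mm_and_si128(_mm_cmpgt_epi8(hi, nine), gap));
		lo = _mm_add_epi8(_mm_add_epi8(lo, zero_c),
						  _mm_and_si128(_mm_cmpgt_epi8(lo, nine), gap));

		/* interleave: high-nibble char, then low-nibble char, per byte */
		_mm_storeu_si128((__m128i *) (dst + 2 * i),
						 _mm_unpacklo_epi8(hi, lo));
		_mm_storeu_si128((__m128i *) (dst + 2 * i + 16),
						 _mm_unpackhi_epi8(hi, lo));
	}

	/* scalar tail, as the thread proposes leaving to the standard code */
	for (; i < len; i++)
	{
		static const char t[] = "0123456789abcdef";

		dst[2 * i] = t[src[i] >> 4];
		dst[2 * i + 1] = t[src[i] & 0x0f];
	}
	return len * 2;
}
```

Because SSE2 is guaranteed on every x86-64 CPU, such a loop needs no runtime dispatch, no assembler toolchain and no object-format handling, which keeps the integration footprint small.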
[
{
"msg_contents": "Hi,\n\ncfbot now runs most tests on windows, the windows task is by far the slowest,\nand the task limitted most in concurrency [2]. Running tap tests is the\nbiggest part of that. This is a bigger issue on windows because we don't have\ninfrastructure (yet) to run tests in parallel.\n\nThere's a few tests which stand out in their slowness, which seem worth\naddressing even if we tackle test parallelism on windows at some point. I\noften find them to be the slowest tests on linux too.\n\nPicking a random successful cfbot run [1] I see the following tap tests taking\nmore than 20 seconds:\n\n67188 ms pg_basebackup t/010_pg_basebackup.pl\n59710 ms recovery t/001_stream_rep.pl\n57542 ms pg_rewind t/001_basic.pl\n56179 ms subscription t/001_rep_changes.pl\n42146 ms pgbench t/001_pgbench_with_server.pl\n38264 ms recovery t/018_wal_optimize.pl\n33642 ms subscription t/013_partition.pl\n29129 ms pg_dump t/002_pg_dump.pl\n25751 ms pg_verifybackup t/002_algorithm.pl\n20628 ms subscription t/011_generated.pl\n\nIt would be good if we could make those tests faster, or if not easily\npossible, at least split those tests into smaller tap tests.\n\nSplitting a longer test into smaller ones is preferrable even if they take the\nsame time, because we can use prove level concurrency on windows to gain some\ntest parallelism. As a nice side-effect it makes it also quicker to run a\nsplit test isolated during development.\n\nGreetings,\n\nAndres Freund\n\n[1] https://cirrus-ci.com/task/5207126145499136\n[2] https://cirrus-ci.org/faq/#are-there-any-limits\n\n\n",
"msg_date": "Fri, 31 Dec 2021 11:25:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-31 11:25:28 -0800, Andres Freund wrote:\n> cfbot now runs most tests on windows, the windows task is by far the slowest,\n> and the task limitted most in concurrency [2]. Running tap tests is the\n> biggest part of that. This is a bigger issue on windows because we don't have\n> infrastructure (yet) to run tests in parallel.\n> \n> There's a few tests which stand out in their slowness, which seem worth\n> addressing even if we tackle test parallelism on windows at some point. I\n> often find them to be the slowest tests on linux too.\n> \n> Picking a random successful cfbot run [1] I see the following tap tests taking\n> more than 20 seconds:\n> \n> 67188 ms pg_basebackup t/010_pg_basebackup.pl\n> 25751 ms pg_verifybackup t/002_algorithm.pl\n\nThe reason these in particular are slow is that they do a lot of\npg_basebackups without either / one-of -cfast / --no-sync. The lack of -cfast\nin particularly is responsible for a significant proportion of the test\ntime. The only reason this didn't cause the tests to take many minutes is that\nspread checkpoints only throttle when writing out a buffer and there aren't\nthat many dirty buffers...\n\nAttached is a patch changing the parameters in all the instances I\nfound. Testing on a local instance it about halves the runtime of\nt/010_pg_basebackup.pl on linux and windows (but there's still a 2x time\ndifference between the two), it's less when running the tests concurrently CI.\n\nIt might be worth having one explicit use of -cspread. Perhaps combined with\nan explicit checkpoint beforehand?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 17 Jan 2022 10:41:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
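The speedup described above comes from two pg_basebackup flags: -cfast requests an immediate checkpoint instead of a spread one, and --no-sync skips fsyncing the written backup, both harmless for throwaway test clusters. A minimal sketch of a command builder applying those flags (a hypothetical illustration, not the actual patch, which edits the Perl TAP tests directly):

```python
def basebackup_cmd(backup_dir, extra=(), fast=True, no_sync=True):
    """Build a pg_basebackup invocation suitable for tests.

    -cfast asks the server for an immediate checkpoint instead of a spread
    one, and --no-sync skips fsync of the written backup; both are safe in
    throwaway test clusters and cut most of the runtime.
    """
    cmd = ["pg_basebackup", "-D", str(backup_dir), *extra]
    if fast:
        cmd.append("-cfast")
    if no_sync:
        cmd.append("--no-sync")
    return cmd

# Example: the kind of invocation 002_algorithm.pl ends up running
print(basebackup_cmd("/tmp/backup", extra=["--manifest-checksums", "sha224"]))
```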
{
"msg_contents": "On Mon, Jan 17, 2022 at 1:41 PM Andres Freund <andres@anarazel.de> wrote:\n> The reason these in particular are slow is that they do a lot of\n> pg_basebackups without either / one-of -cfast / --no-sync. The lack of -cfast\n> in particularly is responsible for a significant proportion of the test\n> time. The only reason this didn't cause the tests to take many minutes is that\n> spread checkpoints only throttle when writing out a buffer and there aren't\n> that many dirty buffers...\n\nAdding -cfast to 002_algorithm.pl seems totally reasonable. I'm not\nsure what else can realistically be done to speed it up without losing\nthe point of the test. And it's basically just a single loop, so\nsplitting it up doesn't seem to make a lot of sense either.\n\npg_basebackup's 010_pg_basebackup.pl looks like it could be split up,\nthough. That one, at least to me, looks like people have just kept\nadding semi-related things into the same test file.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Jan 2022 14:05:17 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-17 14:05:17 -0500, Robert Haas wrote:\n> On Mon, Jan 17, 2022 at 1:41 PM Andres Freund <andres@anarazel.de> wrote:\n> > The reason these in particular are slow is that they do a lot of\n> > pg_basebackups without one or both of -cfast / --no-sync. The lack of -cfast\n> > in particular is responsible for a significant proportion of the test\n> > time. The only reason this didn't cause the tests to take many minutes is that\n> > spread checkpoints only throttle when writing out a buffer and there aren't\n> > that many dirty buffers...\n>\n> Adding -cfast to 002_algorithm.pl seems totally reasonable. I'm not\n> sure what else can realistically be done to speed it up without losing\n> the point of the test. And it's basically just a single loop, so\n> splitting it up doesn't seem to make a lot of sense either.\n\nIt's also not that slow compared to other tests after the -cfast addition.\n\nHowever, I'm a bit surprised at how long the individual pg_verifybackup calls\ntake on windows - about as long as the pg_basebackup call itself.\n\n# Running: pg_basebackup -D C:/dev/postgres/.\\src\\bin\\pg_verifybackup\\/tmp_check/t_002_algorithm_primary_data/backup/sha224 --manifest-checksums sha224 --no-sync -cfast\n# timing: [4.798 + 0.704s]: completed\n# Running: pg_verifybackup -e C:/dev/postgres/.\\src\\bin\\pg_verifybackup\\/tmp_check/t_002_algorithm_primary_data/backup/sha224\nbackup successfully verified\n# timing: [5.507 + 0.697s]: completed\n\n\nInterestingly, with crc32c, this is not so:\n\n# Running: pg_basebackup -D C:/dev/postgres/.\\src\\bin\\pg_verifybackup\\/tmp_check/t_002_algorithm_primary_data/backup/crc32c --manifest-checksums crc32c --no-sync -cfast\n# timing: [3.500 + 0.688s]: completed\nok 5 - backup ok with algorithm \"crc32c\"\nok 6 - crc32c is mentioned many times in the manifest\n# Running: pg_verifybackup -e C:/dev/postgres/.\\src\\bin\\pg_verifybackup\\/tmp_check/t_002_algorithm_primary_data/backup/crc32c\nbackup successfully verified\n# timing: [4.194 + 0.197s]: completed\n\n\nI wonder if there's something explaining why pg_verifybackup is greatly slowed\ndown by sha224 but not crc32c, but the server's runtime only differs by ~20ms?\nIt seems incongruous that pg_basebackup, with all the complexity of needing to\ncommunicate with the server, transferring the backup over network, and *also*\ncomputing checksums, takes as long as the pg_verifybackup invocation?\n\n\n> pg_basebackup's 010_pg_basebackup.pl looks like it could be split up,\n> though. That one, at least to me, looks like people have just kept\n> adding semi-related things into the same test file.\n\n\nYea.\n\n\nIt's generally interesting how much time initdb takes in these tests. It's\nabout 1.1s on my linux workstation, and 2.1s on windows.\n\nI've occasionally pondered caching initdb results and reusing them across\ntests - just the locking around it seems a bit nasty, but perhaps that could\nbe done as part of the tmp_install step. Of course, it'd need to deal with\ndifferent options etc...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jan 2022 11:57:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
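The "# timing: [start + duration s]" lines quoted above lend themselves to quick scripting when comparing runs. A small sketch (the log format is inferred only from the lines shown in this mail):

```python
import re

def step_duration(line):
    """Extract the per-step duration from a '# timing: [start + dur s]' line."""
    m = re.search(r"timing: \[([0-9.]+) \+ ([0-9.]+)s\]", line)
    return float(m.group(2)) if m else None

# The pg_verifybackup step durations quoted above, sha224 vs crc32c:
sha224_verify = step_duration("# timing: [5.507 + 0.697s]: completed")
crc32c_verify = step_duration("# timing: [4.194 + 0.197s]: completed")
print(round(sha224_verify - crc32c_verify, 3))  # prints 0.5
```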
{
"msg_contents": "On Mon, Jan 17, 2022 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if there's something explaining why pg_verifybackup is greatly slowed\n> down by sha224 but not crc32c, but the server's runtime only differs by ~20ms?\n> It seems incongruous that pg_basebackup, with all the complexity of needing to\n> communicate with the server, transferring the backup over network, and *also*\n> computing checksums, takes as long as the pg_verifybackup invocation?\n\nI guess there must be something explaining it, but I don't know what\nit could be. The client and the server are each running the checksum\nalgorithm over the same data. If that's not the same speed then .... I\ndon't get it. Unless, somehow, they're using different implementations\nof it?\n\n> I've occasionally pondered caching initdb results and reusing them across\n> tests - just the locking around it seems a bit nasty, but perhaps that could\n> be done as part of the tmp_install step. Of course, it'd need to deal with\n> different options etc...\n\nIt's a thought, but it does seem like a bit of a pain to implement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Jan 2022 15:13:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've occasionally pondered caching initdb results and reusing them across\n> tests - just the locking around it seems a bit nasty, but perhaps that could\n> be done as part of the tmp_install step. Of course, it'd need to deal with\n> different options etc...\n\nI'd actually built a prototype to do that, based on making a reference\ncluster and then \"cp -a\"'ing it instead of re-running initdb. I gave\nup when I found that on slower, disk-bound machines it was hardly\nany faster. Thinking about it now, I wonder why not just re-use one\ncluster for many tests, only dropping and re-creating the database\nin which the testing happens.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jan 2022 15:48:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-17 15:13:57 -0500, Robert Haas wrote:\n> I guess there must be something explaining it, but I don't know what\n> it could be. The client and the server are each running the checksum\n> algorithm over the same data. If that's not the same speed then .... I\n> don't get it. Unless, somehow, they're using different implementations\n> of it?\n\nI think that actually might be the issue. On linux a run of pg_verifybackup\nwas much faster than on windows (as in 10x). But if I disable openssl, it's\nonly 2x.\n\nOn the windows instance I *do* have openssl enabled. But I suspect something\nis off and the windows buildsystem ends up with our hand-rolled implementation\non the client side, but not the server side. Which'd explain the times I'm\nseeing: we have a fast CRC implementation, but the rest is pretty darn\nunoptimized.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jan 2022 13:03:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> > pg_basebackup's 010_pg_basebackup.pl looks like it could be split up,\n> > though. That one, at least to me, looks like people have just kept\n> > adding semi-related things into the same test file.\n>\n> Yea.\n\nHere's a patch that splits up that file. Essentially the first half of\nthe file is concerned with testing that a backup ends up in the state\nit expects, while the second half is concerned with checking that\nvarious options to pg_basebackup work. So I split it that way, plus I\nmoved some of the really basic stuff to a completely separate file\nwith a very brief runtime. The test results are interesting.\n\nUnpatched:\n\n[12:33:33] t/010_pg_basebackup.pl ... ok 16161 ms ( 0.02 usr 0.00\nsys + 2.07 cusr 7.80 csys = 9.89 CPU)\n[12:33:49] t/020_pg_receivewal.pl ... ok 4115 ms ( 0.00 usr 0.00\nsys + 0.89 cusr 1.73 csys = 2.62 CPU)\n[12:33:53] t/030_pg_recvlogical.pl .. ok 1857 ms ( 0.01 usr 0.01\nsys + 0.63 cusr 0.73 csys = 1.38 CPU)\n[12:33:55]\nAll tests successful.\nFiles=3, Tests=177, 22 wallclock secs ( 0.04 usr 0.02 sys + 3.59\ncusr 10.26 csys = 13.91 CPU)\n\nPatched:\n\n[12:32:03] t/010_pg_basebackup_basic.pl ...... ok 192 ms ( 0.01\nusr 0.00 sys + 0.10 cusr 0.05 csys = 0.16 CPU)\n[12:32:03] t/011_pg_basebackup_integrity.pl .. ok 5530 ms ( 0.00\nusr 0.00 sys + 0.87 cusr 2.51 csys = 3.38 CPU)\n[12:32:09] t/012_pg_basebackup_options.pl .... ok 13117 ms ( 0.00\nusr 0.00 sys + 1.87 cusr 6.31 csys = 8.18 CPU)\n[12:32:22] t/020_pg_receivewal.pl ............ ok 4314 ms ( 0.01\nusr 0.00 sys + 0.97 cusr 1.77 csys = 2.75 CPU)\n[12:32:26] t/030_pg_recvlogical.pl ........... ok 1908 ms ( 0.00\nusr 0.00 sys + 0.64 cusr 0.77 csys = 1.41 CPU)\n[12:32:28]\nAll tests successful.\nFiles=5, Tests=177, 25 wallclock secs ( 0.04 usr 0.02 sys + 4.45\ncusr 11.41 csys = 15.92 CPU)\n\nSadly, we've gained about 2.5 seconds of runtime as the price of\nsplitting the test. Arguably the options part could be split up a lot\nmore finely than this, but that would drive up the runtime even more,\nbasically because we'd need more initdbs. So I don't know whether it's\nbetter to leave things as they are, split them this much, or split\nthem more. I think this amount of splitting might be justified simply\nin the interests of clarity, but I'm reluctant to go further unless we\nget some nifty initdb-caching system.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 18 Jan 2022 12:49:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 12:49:16 -0500, Robert Haas wrote:\n> Here's a patch that splits up that file.\n\nAh, nice! The split seems sensible to me.\n\n\n> Sadly, we've gained about 2.5 seconds of runtime as the price of\n> splitting the test. Arguably the options part could be split up a lot\n> more finely than this, but that would drive up the runtime even more,\n> basically because we'd need more initdbs. So I don't know whether it's\n> better to leave things as they are, split them this much, or split\n> them more. I think this amount of splitting might be justified simply\n> in the interests of clarity, but I'm reluctant to go further unless we\n> get some nifty initdb-caching system.\n\nHm. From the buildfarm / CF perspective it might still be a win, because the\ndifferent pieces can run concurrently. But it's not great :(.\n\nMaybe we really should do at least the most simplistic caching for initdbs, by\ndoing one initdb as part of the creation of temp_install. Then Cluster::init\nwould need logic to only use that if $params{extra} is empty.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:40:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 13:40:40 -0800, Andres Freund wrote:\n> Maybe we really should do at least the most simplistic caching for initdbs, by\n> doing one initdb as part of the creation of temp_install. Then Cluster::init\n> would need logic to only use that if $params{extra} is empty.\n\nI hacked this together. And the wins are bigger than I thought. On my\nworkstation, with plenty of CPU and storage bandwidth, according to\n /usr/bin/time check-world NO_TEMP_INSTALL=1\nthings go from\n\n 321.56user 74.00system 2:26.22elapsed 270%CPU (0avgtext+0avgdata 93768maxresident)k\n 24inputs+32781336outputs (2254major+8717121minor)pagefaults 0swaps\n\nto\n\n 86.62user 57.10system 1:57.83elapsed 121%CPU (0avgtext+0avgdata 93752maxresident)k\n 8inputs+32683408outputs (1360major+6672618minor)pagefaults 0swaps\n\nThe difference in elapsed and system time is pretty good, but the user time\ndifference is quite staggering.\n\n\nThis doesn't yet actually address the case of the basebackup tests, because\nthat specifies a \"non-default\" option, preventing the use of the template\ninitdb. But the effects are already big enough that I thought it's worth\nsharing.\n\nOn CI for windows this reduces the time for the subscription tests from 3:24\nto 2:39. There's some run-to-run variation, but it's a pretty clear signal...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 18 Jan 2022 17:00:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
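The caching scheme described above can be sketched in a few lines of Python (a standalone illustration of the approach only, not the actual Perl/C implementation in the patch): run the expensive initialization once into a template directory, give each test a cheap copy, and fall back to a real initdb whenever non-default options are requested — the same condition as `$params{extra}` being non-empty.

```python
import shutil
from pathlib import Path

def init_cluster(datadir, template, extra_opts=(), run_initdb=None):
    """Create a test cluster data directory, reusing a pre-built template.

    Mirrors the patch's logic: the template is only usable when no extra
    initdb options are given; otherwise fall back to running initdb for
    real (run_initdb stands in for the slow path here).
    """
    datadir = Path(datadir)
    if not extra_opts and Path(template).is_dir():
        shutil.copytree(template, datadir)  # cheap copy instead of initdb
    else:
        run_initdb(datadir, extra_opts)     # slow path: real initdb
    return datadir
```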
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-18 13:40:40 -0800, Andres Freund wrote:\n>> Maybe we really should do at least the most simplistic caching for initdbs, by\n>> doing one initdb as part of the creation of temp_install. Then Cluster::init\n>> would need logic to only use that if $params{extra} is empty.\n\n> I hacked this together. And the wins are bigger than I thought.\n\nMe too ;-). As I remarked earlier, I'd tried this once before and\ngave up because it didn't seem to be winning much. But that was\nbefore we had so many initdb's triggered by TAP tests, I think.\n\nI tried this patch on florican's host, which seems mostly disk-bound\nwhen doing check-world. It barely gets any win from parallelism:\n\n$ time make -s check-world -j1 >/dev/null\n 3809.60 real 584.44 user 282.23 sys\n$ time make -s check-world -j2 >/dev/null\n 3789.90 real 610.60 user 289.60 sys\n\nAdding v2-0001-hack-use-template-initdb-in-tap-tests.patch:\n\n$ time make -s check-world -j1 >/dev/null\n 3193.46 real 221.32 user 226.11 sys\n$ time make -s check-world -j2 >/dev/null\n 3211.19 real 224.31 user 230.07 sys\n\n(Note that all four runs have the \"fsync = on\" removed from\n008_fsm_truncation.pl.)\n\nSo this looks like it'll be a nice win for low-end hardware, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jan 2022 11:54:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-19 11:54:01 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-01-18 13:40:40 -0800, Andres Freund wrote:\n> >> Maybe we really should do at least the most simplistic caching for initdbs, by\n> >> doing one initdb as part of the creation of temp_install. Then Cluster::init\n> >> would need logic to only use that if $params{extra} is empty.\n>\n> > I hacked this together. And the wins are bigger than I thought.\n>\n> Me too ;-). As I remarked earlier, I'd tried this once before and\n> gave up because it didn't seem to be winning much. But that was\n> before we had so many initdb's triggered by TAP tests, I think.\n\nWhat approach did you use? Do you have a better idea than generating\ntmp_install/initdb_template?\n\nFor a bit I wondered whether initdb should do this internally instead. But it\nseemed more work than I wanted to tackle.\n\nThe bit in the patch about generating initdb_template in Install.pm definitely\nneeds to be made conditional, but I don't precisely know on what. The\nbuildfarm just calls it as\n perl install.pl \"$installdir\n\n\n> So this looks like it'll be a nice win for low-end hardware, too.\n\nNice!\n\n\n> (Note that all four runs have the \"fsync = on\" removed from\n> 008_fsm_truncation.pl.)\n\nI assume you're planning on committing that?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Jan 2022 09:03:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-19 11:54:01 -0500, Tom Lane wrote:\n>> Me too ;-). As I remarked earlier, I'd tried this once before and\n>> gave up because it didn't seem to be winning much. But that was\n>> before we had so many initdb's triggered by TAP tests, I think.\n\n> What approach did you use? Do you have a better idea than generating\n> tmp_install/initdb_template?\n\nNo, it was largely the same as what you have here, I think. I dug\nup my WIP patch and attach it below, just in case there's any ideas\nworth borrowing.\n\n>> (Note that all four runs have the \"fsync = on\" removed from\n>> 008_fsm_truncation.pl.)\n\n> I assume you're planning on comitting that?\n\nYeah, will do that shortly.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 19 Jan 2022 12:14:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-19 12:14:21 -0500, Tom Lane wrote:\n> No, it was largely the same as what you have here, I think. I dug\n> up my WIP patch and attach it below, just in case there's any ideas\n> worth borrowing.\n\nHeh, it does look quite similar.\n\n\n> +\t\t\t\t\t \"cp -a \\\"%s\\\" \\\"%s/data\\\" > \\\"%s/log/initdb.log\\\" 2>&1\",\n> +\t\t\t\t\t pg_proto_datadir,\n> +\t\t\t\t\t temp_instance,\n> +\t\t\t\t\t outputdir);\n> +\t\t\tif (system(buf))\n> +\t\t\t{\n> +\t\t\t\tfprintf(stderr, _(\"\\n%s: cp failed\\nExamine %s/log/initdb.log for the reason.\\nCommand was: %s\\n\"), progname, outputdir, buf);\n> +\t\t\t\texit(2);\n> +\t\t\t}\n\nBoth ours have this. Unfortunately on windows cp doesn't natively\nexist. Although git does provide it. I tried a few things that appear to be\nnatively available (time is best of three executions):\n\n gnu cp from git, cp -a tmp_install\\initdb_template t\\\n 670ms\n\n xcopy.exe /E /Q tmp_install\\initdb_template t\\\n 638ms\n\n robocopy /e /NFL /NDL tmp_install\\initdb_template t\\\n 575ms\n\nSo I guess we could use robocopy? That's shipped as part of windows starting in\nwindows 10... xcopy has been there for longer, so I might just default to that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Jan 2022 09:42:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-19 09:42:31 -0800, Andres Freund wrote:\n> Both ours have this. Unfortunately on windows cp doesn't natively\n> exist. Although git does provide it. I tried a few things that appear to be\n> natively available (time is best of three executions):\n> \n> gnu cp from git, cp -a tmp_install\\initdb_template t\\\n> 670ms\n> \n> xcopy.exe /E /Q tmp_install\\initdb_template t\\\n> 638ms\n\nThis errors out if there are any forward slashes in paths, thinking they are\nflags. Seems out.\n\n\n> robocopy /e /NFL /NDL tmp_install\\initdb_template t\\\n> 575ms\n> \n> So I guess we could use robocopy? That's shipped as part of windows starting in\n> windows 10... xcopy has been there for longer, so I might just default to that.\n\nIt's part of the OS back to at least windows 2016. I've found some random\nlinks on the web [1] saying that it's included \"This command is available in\nVista and Windows 7 by default. For Windows XP and Server 2003 this tool can\nbe downloaded as part of Server 2003 Windows Resource Kit tools. \".\n\nGiven that our oldest supported msvc version only runs on Windows 7 upwards\n[2], I think we should be good?\n\n\nAlternatively we could lift copydir() to src/common? But that seems like a bit\nmore work than I want to put in.\n\n\nFor a second I was thinking that using something like copy --reflink=auto\ncould make a lot of sense for machines like florican, removing most of the IO\nfrom a \"templated initdb\". But it looks like freebsd doesn't have that, and\nit'd be a pain to figure out whether cp has --reflink.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.windows-commandline.com/download-robocopy/\n[2] https://docs.microsoft.com/en-us/visualstudio/releases/2013/vs2013-sysrequirements-vs\n\n\n",
"msg_date": "Wed, 19 Jan 2022 18:18:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "\nOn 1/19/22 21:18, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-19 09:42:31 -0800, Andres Freund wrote:\n>> Both ours have this. Unfortunately on windows cp doesn't natively\n>> exist. Although git does provide it. I tried a few things that appear to be\n>> natively available (time is best of three executions):\n>>\n>> gnu cp from git, cp -a tmp_install\\initdb_template t\\\n>> 670ms\n>>\n>> xcopy.exe /E /Q tmp_install\\initdb_template t\\\n>> 638ms\n> This errors out if there are any forward slashes in paths, thinking they are\n> flags. Seems out.\n>\n>\n>> robocopy /e /NFL /NDL tmp_install\\initdb_template t\\\n>> 575ms\n>>\n>> So I guess we could use robocopy? That's shipped as part of windows starting in\n>> windows 10... xcopy has been there for longer, so I might just default to that.\n> It's part of the OS back to at least windows 2016. I've found some random\n> links on the web saying that it's included \"This command is available in\n> Vista and Windows 7 by default. For Windows XP and Server 2003 this tool can\n> be downloaded as part of Server 2003 Windows Resource Kit tools. \".\n>\n> Given that our oldest supported msvc version only runs on Windows 7 upwards\n> [2], I think we should be good?\n>\n>\n> Alternatively we could lift copydir() to src/common? But that seems like a bit\n> more work than I want to put in.\n>\n>\n> For a second I was thinking that using something like copy --reflink=auto\n> could make a lot of sense for machines like florican, removing most of the IO\n> from a \"templated initdb\". But it looks like freebsd doesn't have that, and\n> it'd be a pain to figure out whether cp has --reflink.\n\n\n\nFYI, the buildfarm code has this. It doesn't need backslashed paths, you\njust need to quote the paths, which you should probably do anyway:\n\n\n sub copydir\n {\n my ($from, $to, $logfile) = @_;\n my ($cp, $rd);\n if ($PGBuild::conf{using_msvc})\n {\n $cp = \"robocopy /nfl /ndl /np /e /sec \";\n $rd = qq{/LOG+:\"$logfile\" >nul};\n }\n else\n {\n $cp = \"cp -r\";\n $rd = qq{> \"$logfile\"};\n }\n system(qq{$cp \"$from\" \"$to\" $rd 2>&1});\n ## no critic (RequireLocalizedPunctuationVars)\n $? = 0 if ($cp =~ /robocopy/ && $? >> 8 == 1);\n return;\n }\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 20 Jan 2022 16:54:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
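The quirk the buildfarm's copydir() works around is that robocopy does not follow the usual exit-code convention: return values below 8 are success bit flags, with 1 simply meaning "files were copied". The Perl above remaps only exit code 1; a small sketch of the same normalization in Python (treating everything below 8 as success follows robocopy's documented convention, which is slightly broader than what the Perl does):

```python
def normalize_copy_status(tool, exit_code):
    """Map a copy tool's exit status to 0 on success.

    robocopy returns a bitmask where values below 8 all indicate success
    (1 means files were copied); cp and xcopy follow the conventional
    zero-means-success rule, so their status is passed through unchanged.
    """
    if tool == "robocopy" and 0 <= exit_code < 8:
        return 0
    return exit_code
```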
{
"msg_contents": "Hi,\n\nOn 2022-01-19 18:18:59 -0800, Andres Freund wrote:\n> > robocopy /e /NFL /NDL tmp_install\\initdb_template t\\\n> > 575ms\n> > \n> > So I guess we could use robocopy? That's shipped as part of windows starting in\n> > windows 10... xcopy has been there for longer, so I might just default to that.\n> \n> It's part of of the OS back to at least windows 2016. I've found some random\n> links on the webs saying that it's included \"This command is available in\n> Vista and Windows 7 by default. For Windows XP and Server 2003 this tool can\n> be downloaded as part of Server 2003 Windows Resource Kit tools. \".\n> \n> Given that our oldest supported msvc version only runs on Windows 7 upwards\n> [2], I think we should be good?\n\nOne thing I'm not sure about is where to perform the creation of the\n\"template\" for the msvc scripts. The prototype upthread created it\nunconditionally in Install.pm, but that's clearly not right.\n\nThe buildfarm currently creates the temporary installation using a generic\nperl install.pl \"$installdir\" and then uses NO_TEMP_INSTALL.\n\nI don't really have a better idea than to introduce a dedicated vcregress.pl\ncommand to create the temporary installation? :(\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 Jan 2022 09:54:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: slowest tap tests - split or accelerate?"
},
{
"msg_contents": "Hi,\n\nWe have some issues with CI on macos and windows being too expensive (more on\nthat soon in a separate email), which reminded me of this thread (with\noriginal title: [1])\n\nI've attached a somewhat cleaned up version of the patch to cache initdb\nacross runs. The results are still fairly impressive in my opinion.\n\n\nOne thing I do not like, but don't have a good idea for how to improve, is\nthat there's a bunch of duplicated logic in pg_regress.c and Cluster.pm. I've\ntried to move that into initdb.c itself, but that ends up pretty ugly, because\nwe need to be a lot more careful about checking whether options are compatible\netc. I've also thought about just putting this into a separate perl script,\nbut right now we still allow basic regression tests without perl being\navailable. So I concluded that for now just having the copies is the best\nanswer.\n\n\nTimes for running all tests under meson, on my workstation (20 cores / 40\nthreads):\n\ncassert build -O2:\n\nBefore:\nreal\t0m44.638s\nuser\t7m58.780s\nsys\t2m48.773s\n\nAfter:\nreal\t0m38.938s\nuser\t2m37.615s\nsys\t2m0.570s\n\n\ncassert build -O0:\n\nBefore:\nreal\t1m11.290s\nuser\t13m9.817s\nsys\t2m54.946s\n\nAfter:\nreal\t1m2.959s\nuser\t3m5.835s\nsys\t1m59.887s\n\n\nnon-cassert build:\n\nBefore:\nreal\t0m34.579s\nuser\t5m30.418s\nsys\t2m40.507s\n\nAfter:\nreal\t0m27.710s\nuser\t2m20.644s\nsys\t1m55.770s\n\n\nOn CI this reduces the test times substantially:\nFreebsd 8:51 -> 5:35\nDebian w/ asan, autoconf 6:43 -> 4:55\nDebian w/ alignmentsan, ubsan 4:02 -> 2:33\nmacos 5:07 -> 4:29\nwindows 10:21 -> 9:49\n\nThis is ignoring a bit of run-to-run variance, but the trend is obvious enough\nthat it's not worth worrying about that.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20220120021859.3zpsfqn4z7ob7afz%40alap3.anarazel.de",
"msg_date": "Sat, 5 Aug 2023 12:56:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "initdb caching during tests"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Times for running all tests under meson, on my workstation (20 cores / 40\n> threads):\n\n> cassert build -O2:\n\n> Before:\n> real\t0m44.638s\n> user\t7m58.780s\n> sys\t2m48.773s\n\n> After:\n> real\t0m38.938s\n> user\t2m37.615s\n> sys\t2m0.570s\n\nImpressive results. Even though your bottom-line time doesn't change that\nmuch, the big reduction in CPU time should translate to a nice speedup\non slower buildfarm animals.\n\n(Disclaimer: I've not read the patch.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Aug 2023 16:58:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-05 16:58:38 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Times for running all tests under meson, on my workstation (20 cores / 40\n> > threads):\n> \n> > cassert build -O2:\n> \n> > Before:\n> > real\t0m44.638s\n> > user\t7m58.780s\n> > sys\t2m48.773s\n> \n> > After:\n> > real\t0m38.938s\n> > user\t2m37.615s\n> > sys\t2m0.570s\n> \n> Impressive results. Even though your bottom-line time doesn't change that\n> much\n\nUnfortunately we have a few tests that take quite a while - for those the\ninitdb removal doesn't make that much of a difference. Particularly because\nthis machine has enough CPUs to not be fully busy except for the first few\nseconds...\n\nE.g. for a run with the patch applied:\n\n258/265 postgresql:pg_basebackup / pg_basebackup/010_pg_basebackup OK 16.58s 187 subtests passed\n259/265 postgresql:subscription / subscription/100_bugs OK 6.69s 12 subtests passed\n260/265 postgresql:regress / regress/regress OK 24.95s 215 subtests passed\n261/265 postgresql:ssl / ssl/001_ssltests OK 7.97s 205 subtests passed\n262/265 postgresql:pg_dump / pg_dump/002_pg_dump OK 19.65s 11262 subtests passed\n263/265 postgresql:recovery / recovery/027_stream_regress OK 29.34s 6 subtests passed\n264/265 postgresql:isolation / isolation/isolation OK 33.94s 112 subtests passed\n265/265 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade OK 38.22s 18 subtests passed\n\nThe pg_upgrade test is faster in isolation (29s), but not that much. The\noverall runtime is reduced due to the reduced \"competing\" cpu usage, but other\nthan that...\n\n\nLooking at where the time is spent when running the pg_upgrade test on its own:\n\ngrep -E '^\\[' testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade |sed -E -e 's/.*\\(([0-9.]+)s\\)(.*)/\\1 \\2/g'|sort -n -r\n\ncassert:\n13.094 ok 5 - regression tests pass\n6.147 ok 14 - run of pg_upgrade for new instance\n2.340 ok 6 - dump before running pg_upgrade\n1.638 ok 17 - dump after running pg_upgrade\n1.375 ok 12 - run of pg_upgrade --check for new instance\n0.798 ok 1 - check locales in original cluster\n0.371 ok 9 - invalid database causes failure status (got 1 vs expected 1)\n0.149 ok 7 - run of pg_upgrade --check for new instance with incorrect binary path\n0.131 ok 16 - check that locales in new cluster match original cluster\n\noptimized:\n8.372 ok 5 - regression tests pass\n3.641 ok 14 - run of pg_upgrade for new instance\n1.371 ok 12 - run of pg_upgrade --check for new instance\n1.104 ok 6 - dump before running pg_upgrade\n0.636 ok 17 - dump after running pg_upgrade\n0.594 ok 1 - check locales in original cluster\n0.359 ok 9 - invalid database causes failure status (got 1 vs expected 1)\n0.148 ok 7 - run of pg_upgrade --check for new instance with incorrect binary path\n0.127 ok 16 - check that locales in new cluster match original cluster\n\n\nThe time for \"dump before running pg_upgrade\" is misleadingly high - there's\nno output between starting initdb and the dump, so the timing includes initdb\nand a bunch of other work. But it's still not fast (1.637s) after.\n\nA small factor is that the initdb times are not insignificant, because the\ntemplate initdb can't be used due to a bunch of parameters passed to initdb :)\n\n\n> the big reduction in CPU time should translate to a nice speedup on slower\n> buildfarm animals.\n\nYea. It's a particularly large win when using valgrind. Under valgrind, a very\nlarge portion of the time for many tests is just spent doing initdb... So I am\nhoping to see some nice gains for skink.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Aug 2023 15:26:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: initdb caching during tests"
},
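The grep/sed/sort pipeline used above to rank test steps can equally be written as a short Python filter over the TAP log. A sketch, assuming (as the sed expression does) that timed log lines carry an elapsed-time annotation like "(13.094s)" followed by the step description:

```python
import re

def slowest_steps(log_lines):
    """Return (seconds, description) pairs, slowest first, from TAP log
    lines that contain an elapsed-time annotation like '(13.094s)'."""
    steps = []
    for line in log_lines:
        m = re.search(r"\((\d+(?:\.\d+)?)s\)\s*(.*)", line)
        if m:
            steps.append((float(m.group(1)), m.group(2)))
    return sorted(steps, reverse=True)

# Synthetic example lines in the shape the sed expression expects
demo = ["[t](2.5s) ok 1 - a", "[t](10.0s) ok 2 - b"]
print(slowest_steps(demo)[0])  # slowest step first
```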
{
"msg_contents": "> On 5 Aug 2023, at 21:56, Andres Freund <andres@anarazel.de> wrote:\n\n> We have some issues with CI on macos and windows being too expensive (more on\n> that soon in a separate email), which reminded me of this thread (with\n> original title: [1])\n> \n> I've attached a somewhat cleaned up version of the patch to cache initdb\n> across runs. The results are still fairly impressive in my opinion.\n> \n> One thing I do not like, but don't have a good idea for how to improve, is\n> that there's a bunch of duplicated logic in pg_regress.c and Cluster.pm. I've\n> tried to move that into initdb.c itself, but that ends up pretty ugly, because\n> we need to be a lot more careful about checking whether options are compatible\n> etc. I've also thought about just putting this into a separate perl script,\n> but right now we still allow basic regression tests without perl being\n> available. So I concluded that for now just having the copies is the best\n> answer.\n\nI had a look at this today and have been running a lot of tests with it without\nfinding anything that breaks. The duplicated code is unfortunate, but after\nplaying around with some options I agree that it's likely the best option.\n\nWhile looking I did venture down the rabbithole of making it support extra\nparams as well, but I don't think moving the goalposts there is doing us any\nfavors, it's clearly chasing diminishing returns.\n\nMy only small gripe is that I keep thinking about template databases for CREATE\nDATABASE when reading the error messages in this patch, which is clearly not\nrelated to what this does.\n\n+ note(\"initializing database system by copying initdb template\");\n\nI personally would've used cache instead of template in the user facing parts\nto keep concepts separated, but that's personal taste.\n\nAll in all, I think this is committable as is.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 23:47:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-22 23:47:24 +0200, Daniel Gustafsson wrote:\n> I had a look at this today and have been running a lot of tests with it without\n> finding anything that breaks.\n\nThanks!\n\n\n> The duplicated code is unfortunate, but after playing around with some\n> options I agree that it's likely the best option.\n\nGood and bad to hear :)\n\n\n> While looking I did venture down the rabbithole of making it support extra\n> params as well, but I don't think moving the goalposts there is doing us any\n> favors, it's clearly chasing diminishing returns.\n\nAgreed. I also went down that rabbithole, but it quickly gets a lot more code\nand complexity - and there just aren't that many tests using non-default\noptions.\n\n\n> My only small gripe is that I keep thinking about template databases for CREATE\n> DATABASE when reading the error messages in this patch, which is clearly not\n> related to what this does.\n> \n> + note(\"initializing database system by copying initdb template\");\n> \n> I personally would've used cache instead of template in the user facing parts\n> to keep concepts separated, but thats personal taste.\n\nI am going back and forth on that one (as one can notice with $subject). It\ndoesn't quite seem like a cache, as it's not \"created\" on demand and only\nusable when the exactly same parameters are used repeatedly. But template is\noverloaded as you say...\n\n\n> All in all, I think this is committable as is.\n\nCool. Planning to do that tomorrow. We can easily extend / adjust this later,\nit just affects testing infrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Aug 2023 18:17:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "> On 23 Aug 2023, at 03:17, Andres Freund <andres@anarazel.de> wrote:\n> On 2023-08-22 23:47:24 +0200, Daniel Gustafsson wrote:\n\n>> My only small gripe is that I keep thinking about template databases for CREATE\n>> DATABASE when reading the error messages in this patch, which is clearly not\n>> related to what this does.\n>> \n>> + note(\"initializing database system by copying initdb template\");\n>> \n>> I personally would've used cache instead of template in the user facing parts\n>> to keep concepts separated, but thats personal taste.\n> \n> I am going back and forth on that one (as one can notice with $subject). It\n> doesn't quite seem like a cache, as it's not \"created\" on demand and only\n> usable when the exactly same parameters are used repeatedly. But template is\n> overloaded as you say...\n\nThat's a fair point, cache is not a good word to describe a stored copy of\nsomething prefabricated. Let's go with template, we can always refine in-tree\nif a better wording comes along.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 23 Aug 2023 10:10:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-23 10:10:31 +0200, Daniel Gustafsson wrote:\n> > On 23 Aug 2023, at 03:17, Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-08-22 23:47:24 +0200, Daniel Gustafsson wrote:\n> \n> >> My only small gripe is that I keep thinking about template databases for CREATE\n> >> DATABASE when reading the error messages in this patch, which is clearly not\n> >> related to what this does.\n> >> \n> >> + note(\"initializing database system by copying initdb template\");\n> >> \n> >> I personally would've used cache instead of template in the user facing parts\n> >> to keep concepts separated, but thats personal taste.\n> > \n> > I am going back and forth on that one (as one can notice with $subject). It\n> > doesn't quite seem like a cache, as it's not \"created\" on demand and only\n> > usable when the exactly same parameters are used repeatedly. But template is\n> > overloaded as you say...\n> \n> That's a fair point, cache is not a good word to describe a stored copy of\n> something prefabricated. Let's go with template, we can always refine in-tree\n> if a better wording comes along.\n\nCool. Pushed that way. Only change I made is to redirect the output of cp\n(and/or robocopy) in pg_regress, similar to how that was done for initdb\nproper.\n\nLet's see what the buildfarm says - it's not inconceivable that it'll show\nsome issues.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 24 Aug 2023 15:10:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 10:10 AM Andres Freund <andres@anarazel.de> wrote:\n> Let's see what the buildfarm says - it's not inconceivable that it'll show\n> some issues.\n\nApparently Solaris doesn't like \"cp -a\", per animal \"margay\". I think\n\"cp -RPp\" should be enough everywhere?\n\nhttps://docs.oracle.com/cd/E88353_01/html/E37839/cp-1.html\nhttps://pubs.opengroup.org/onlinepubs/9699919799.2013edition/utilities/cp.html",
"msg_date": "Fri, 25 Aug 2023 17:50:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "> On 25 Aug 2023, at 07:50, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Fri, Aug 25, 2023 at 10:10 AM Andres Freund <andres@anarazel.de> wrote:\n>> Let's see what the buildfarm says - it's not inconceivable that it'll show\n>> some issues.\n> \n> Apparently Solaris doesn't like \"cp -a\", per animal \"margay\". I think\n> \"cp -RPp\" should be enough everywhere?\n\nAgreed, AFAICT that should work equally well on all supported platforms.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:00:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-25 09:00:24 +0200, Daniel Gustafsson wrote:\n> > On 25 Aug 2023, at 07:50, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > \n> > On Fri, Aug 25, 2023 at 10:10 AM Andres Freund <andres@anarazel.de> wrote:\n> >> Let's see what the buildfarm says - it's not inconceivable that it'll show\n> >> some issues.\n> > \n> > Apparently Solaris doesn't like \"cp -a\", per animal \"margay\". I think\n> > \"cp -RPp\" should be enough everywhere?\n\nThanks for noticing the issue and submitting the patch.\n\n\n> Agreed, AFAICT that should work equally well on all supported platforms.\n\nAlso agreed. Unsurprisingly, CI didn't find anything on the tested platforms.\n\nPushed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Aug 2023 06:57:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "On Thu, Aug 24, 2023 at 03:10:00PM -0700, Andres Freund wrote:\n> Cool. Pushed that way.\n\nI just noticed the tests running about 30% faster on my machine due to\nthis. Thanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 25 Aug 2023 09:29:59 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "On Fri, 25 Aug 2023 at 00:16, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-08-23 10:10:31 +0200, Daniel Gustafsson wrote:\n> > > On 23 Aug 2023, at 03:17, Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-08-22 23:47:24 +0200, Daniel Gustafsson wrote:\n> >\n> > >> My only small gripe is that I keep thinking about template databases for CREATE\n> > >> DATABASE when reading the error messages in this patch, which is clearly not\n> > >> related to what this does.\n> > >>\n> > >> + note(\"initializing database system by copying initdb template\");\n> > >>\n> > >> I personally would've used cache instead of template in the user facing parts\n> > >> to keep concepts separated, but thats personal taste.\n> > >\n> > > I am going back and forth on that one (as one can notice with $subject). It\n> > > doesn't quite seem like a cache, as it's not \"created\" on demand and only\n> > > usable when the exactly same parameters are used repeatedly. But template is\n> > > overloaded as you say...\n> >\n> > That's a fair point, cache is not a good word to describe a stored copy of\n> > something prefabricated. Let's go with template, we can always refine in-tree\n> > if a better wording comes along.\n>\n> Cool. Pushed that way. Only change I made is to redirect the output of cp\n> (and/or robocopy) in pg_regress, similar to how that was done for initdb\n> proper.\n\nWhile working on some things that are prone to breaking initdb, I\nnoticed that this template isn't generated with --no-clean, while\npg_regress does do that. This meant `make check` didn't have any\nmeaningful debuggable output when I broke the processes in initdb,\nwhich is undesirable.\n\nAttached a patch that fixes this for both make and meson, by adding\n--no-clean to the initdb template.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Thu, 7 Dec 2023 14:50:46 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "> On 7 Dec 2023, at 14:50, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n\n> Attached a patch that fixes this for both make and meson, by adding\n> --no-clean to the initdb template.\n\nMakes sense. While in there I think we should rename -N to the long optoin\n--no-sync to make it easier to grep for and make the buildfiles more\nself-documenting.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 7 Dec 2023 15:06:41 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "On Thu, 7 Dec 2023 at 15:06, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 7 Dec 2023, at 14:50, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>\n> > Attached a patch that fixes this for both make and meson, by adding\n> > --no-clean to the initdb template.\n>\n> Makes sense. While in there I think we should rename -N to the long optoin\n> --no-sync to make it easier to grep for and make the buildfiles more\n> self-documenting.\n\nThen that'd be the attached patch, which also includes --auth instead\nof -A, for the same reason as -N vs --no-sync\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Thu, 7 Dec 2023 15:27:10 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
},
{
"msg_contents": "> On 7 Dec 2023, at 15:27, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n\n> Then that'd be the attached patch, which also includes --auth instead\n> of -A, for the same reason as -N vs --no-sync\n\nApplied to master, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 8 Dec 2023 13:59:22 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: initdb caching during tests"
}
] |
[
{
"msg_contents": "Hi,\n\nOne of the complaints I sometimes hear from users and customers using \nPostgres to store JSON documents (as JSONB type, of course) is that the \nselectivity estimates are often pretty poor.\n\nCurrently we only really have MCV and histograms with whole documents, \nand we can deduce some stats from that. But that is somewhat bogus \nbecause there's only ~100 documents in either MCV/histogram (with the \ndefault statistics target). And moreover we discard all \"oversized\" \nvalues (over 1kB) before even calculating those stats, which makes it \neven less representative.\n\nA couple weeks ago I started playing with this, and I experimented with \nimproving extended statistics in this direction. After a while I noticed \na forgotten development branch from 2016 which tried to do this by \nadding a custom typanalyze function, which seemed like a more natural \nidea (because it's really a statistics for a single column).\n\nBut then I went to pgconf NYC in early December, and I spoke to Oleg \nabout various JSON-related things, and he mentioned they've been working \non this topic some time ago too, but did not have time to pursue it. So \nhe pointed me to a branch [1] developed by Nikita Glukhov.\n\nI like Nikita's branch because it solved a couple architectural issues \nthat I've been aware of, but only solved them in a rather hackish way.\n\nI had a discussion with Nikita about his approach what can we do to move \nit forward. He's focusing on other JSON stuff, but he's OK with me \ntaking over and moving it forward. So here we go ...\n\nNikita rebased his branch recently, I've kept improving it in various \n(mostly a lot of comments and docs, and some minor fixes and tweaks). \nI've pushed my version with a couple extra commits in [2], but you can \nignore that except if you want to see what I added/changed.\n\nAttached is a couple patches adding adding the main part of the feature. 
\nThere's a couple more commits in the github repositories, adding more \nadvanced features - I'll briefly explain those later, but I'm not \nincluding them here because those are optional features and it'd be \ndistracting to include them here.\n\nThere are 6 patches in the series, but the magic mostly happens in parts \n0001 and 0006. The other parts are mostly just adding infrastructure, \nwhich may be a sizeable amount of code, but the changes are fairly \nsimple and obvious. So let's focus on 0001 and 0006.\n\nTo add JSON statistics we need to do two basic things - we need to build \nthe statistics and then we need to allow using them while estimating \nconditions.\n\n\n1) building stats\n\nLet's talk about building the stats first. The patch does one of the \nthings I experimented with - 0006 adds a jsonb_typanalyze function, and \nit associates it with the data type. The function extracts paths and \nvalues from the JSONB document, builds the statistics, and then stores \nthe result in pg_statistic as a new stakind.\n\nI've been planning to store the stats in pg_statistic too, but I've been \nconsidering to use a custom data type. The patch does something far more \nelegant - it simply uses stavalues to store an array of JSONB documents, \neach describing stats for one path extracted from the sampled documents.\n\nOne (very simple) element of the array might look like this:\n\n {\"freq\": 1,\n \"json\": {\n \"mcv\": {\n \"values\": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],\n \"numbers\": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]},\n \"width\": 19,\n \"distinct\": 10,\n \"nullfrac\": 0,\n \"correlation\": 0.10449},\n \"path\": \"$.\\\"a\\\"\",\n \"freq_null\": 0, \"freq_array\": 0, \"freq_object\": 0,\n \"freq_string\": 0, \"freq_boolean\": 0, \"freq_numeric\": 0}\n\nIn this case there's only a MCV list (represented by two arrays, just \nlike in pg_statistic), but there might be another part with a histogram. 
\nThere's also the other columns we'd expect to see in pg_statistic.\n\nIn principle, we need pg_statistic for each path we extract from the \nJSON documents and decide it's interesting enough for estimation. There \nare probably other ways to serialize/represent this, but I find using \nJSONB for this pretty convenient because we need to deal with a mix of \ndata types (for the same path), and other JSON specific stuff. Storing \nthat in Postgres arrays would be problematic.\n\nI'm sure there are plenty of open questions - for example I think we'll need \nsome logic to decide which paths to keep, otherwise the statistics can \nget quite big, if we're dealing with large / variable documents. We're \nalready doing something similar for MCV lists.\n\nOne of Nikita's patches not included in this thread allows \"selective\" \nstatistics, where you can define in advance a \"filter\" restricting which \nparts are considered interesting by ANALYZE. That's interesting, but I \nthink we need some simple MCV-like heuristics first anyway.\n\nAnother open question is how deep the stats should be. Imagine documents \nlike this:\n\n {\"a\" : {\"b\" : {\"c\" : {\"d\" : ...}}}}\n\nThe current patch builds stats for all possible paths:\n\n \"a\"\n \"a.b\"\n \"a.b.c\"\n \"a.b.c.d\"\n\nand of course many of the paths will often have JSONB documents as \nvalues, not simple scalar values. I wonder if we should limit the depth \nsomehow, and maybe build stats only for scalar values.\n\n\n2) applying the statistics\n\nOne of the problems is how to actually use the statistics. For @> \noperator it's simple enough, because it's (jsonb @> jsonb) so we have \ndirect access to the stats. But often the conditions look like this:\n\n jsonb_column ->> 'key' = 'value'\n\nso the condition is actually on an expression, not on the JSONB column \ndirectly. 
My solutions were pretty ugly hacks, but Nikita had a neat \nidea - we can define a custom procedure for each operator, which is \nresponsible for \"calculating\" the statistics for the expression.\n\nSo in this case \"->>\" will have such an \"oprstat\" procedure, which fetches \nstats for the JSONB column, extracts stats for the \"key\" path. And then \nwe can use that for estimation of the (text = text) condition.\n\nThis is what 0001 does, pretty much. We simply look for expression stats \nprovided by an index, extended statistics, and then - if oprstat is \ndefined for the operator - we try to derive the stats.\n\nThis opens other interesting opportunities for the future - one of the \nparts adds oprstat for basic arithmetic operators, which allows deducing \nstatistics for expressions like (a+10) from statistics on column (a).\n\nWhich seems like a neat feature on its own, but it also interacts with \nthe extended statistics in somewhat non-obvious ways (especially when \nestimating GROUP BY cardinalities).\n\nOf course, there's a limit of what we can reasonably estimate - for \nexample, there may be statistical dependencies between paths, and this \npatch does not even attempt to deal with that. In a way, this is similar \nto correlation between columns, except that here we have a dynamic set \nof columns, which makes it much much harder. We'd need something like \nextended stats on steroids, pretty much.\n\n\nI'm sure I've forgotten various important bits - many of them are \nmentioned or explained in comments, but I'm sure others are not. And I'd \nbet there are things I forgot about entirely or got wrong. 
So feel free \nto ask.\n\n\nIn any case, I think this seems like a good first step to improve our \nestimates for JSONB columns.\n\nregards\n\n\n[1] https://github.com/postgrespro/postgres/tree/jsonb_stats\n\n[2] https://github.com/tvondra/postgres/tree/jsonb_stats_rework\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 31 Dec 2021 23:06:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Fri, Dec 31, 2021 at 2:07 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> One of the complaints I sometimes hear from users and customers using\n> Postgres to store JSON documents (as JSONB type, of course) is that the\n> selectivity estimates are often pretty poor.\n>\n> Currently we only really have MCV and histograms with whole documents,\n> and we can deduce some stats from that. But that is somewhat bogus\n> because there's only ~100 documents in either MCV/histogram (with the\n> default statistics target). And moreover we discard all \"oversized\"\n> values (over 1kB) before even calculating those stats, which makes it\n> even less representative.\n>\n> A couple weeks ago I started playing with this, and I experimented with\n> improving extended statistics in this direction. After a while I noticed\n> a forgotten development branch from 2016 which tried to do this by\n> adding a custom typanalyze function, which seemed like a more natural\n> idea (because it's really a statistics for a single column).\n>\n> But then I went to pgconf NYC in early December, and I spoke to Oleg\n> about various JSON-related things, and he mentioned they've been working\n> on this topic some time ago too, but did not have time to pursue it. So\n> he pointed me to a branch [1] developed by Nikita Glukhov.\n>\n> I like Nikita's branch because it solved a couple architectural issues\n> that I've been aware of, but only solved them in a rather hackish way.\n>\n> I had a discussion with Nikita about his approach what can we do to move\n> it forward. He's focusing on other JSON stuff, but he's OK with me\n> taking over and moving it forward. 
So here we go ...\n>\n> Nikita rebased his branch recently, I've kept improving it in various\n> (mostly a lot of comments and docs, and some minor fixes and tweaks).\n> I've pushed my version with a couple extra commits in [2], but you can\n> ignore that except if you want to see what I added/changed.\n>\n> Attached is a couple patches adding adding the main part of the feature.\n> There's a couple more commits in the github repositories, adding more\n> advanced features - I'll briefly explain those later, but I'm not\n> including them here because those are optional features and it'd be\n> distracting to include them here.\n>\n> There are 6 patches in the series, but the magic mostly happens in parts\n> 0001 and 0006. The other parts are mostly just adding infrastructure,\n> which may be a sizeable amount of code, but the changes are fairly\n> simple and obvious. So let's focus on 0001 and 0006.\n>\n> To add JSON statistics we need to do two basic things - we need to build\n> the statistics and then we need to allow using them while estimating\n> conditions.\n>\n>\n> 1) building stats\n>\n> Let's talk about building the stats first. The patch does one of the\n> things I experimented with - 0006 adds a jsonb_typanalyze function, and\n> it associates it with the data type. The function extracts paths and\n> values from the JSONB document, builds the statistics, and then stores\n> the result in pg_statistic as a new stakind.\n>\n> I've been planning to store the stats in pg_statistic too, but I've been\n> considering to use a custom data type. 
The patch does something far more\n> elegant - it simply uses stavalues to store an array of JSONB documents,\n> each describing stats for one path extracted from the sampled documents.\n>\n> One (very simple) element of the array might look like this:\n>\n> {\"freq\": 1,\n> \"json\": {\n> \"mcv\": {\n> \"values\": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],\n> \"numbers\": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]},\n> \"width\": 19,\n> \"distinct\": 10,\n> \"nullfrac\": 0,\n> \"correlation\": 0.10449},\n> \"path\": \"$.\\\"a\\\"\",\n> \"freq_null\": 0, \"freq_array\": 0, \"freq_object\": 0,\n> \"freq_string\": 0, \"freq_boolean\": 0, \"freq_numeric\": 0}\n>\n> In this case there's only a MCV list (represented by two arrays, just\n> like in pg_statistic), but there might be another part with a histogram.\n> There's also the other columns we'd expect to see in pg_statistic.\n>\n> In principle, we need pg_statistic for each path we extract from the\n> JSON documents and decide it's interesting enough for estimation. There\n> are probably other ways to serialize/represent this, but I find using\n> JSONB for this pretty convenient because we need to deal with a mix of\n> data types (for the same path), and other JSON specific stuff. Storing\n> that in Postgres arrays would be problematic.\n>\n> I'm sure there's plenty open questions - for example I think we'll need\n> some logic to decide which paths to keep, otherwise the statistics can\n> get quite big, if we're dealing with large / variable documents. We're\n> already doing something similar for MCV lists.\n>\n> One of Nikita's patches not included in this thread allow \"selective\"\n> statistics, where you can define in advance a \"filter\" restricting which\n> parts are considered interesting by ANALYZE. That's interesting, but I\n> think we need some simple MCV-like heuristics first anyway.\n>\n> Another open question is how deep the stats should be. 
Imagine documents\n> like this:\n>\n> {\"a\" : {\"b\" : {\"c\" : {\"d\" : ...}}}}\n>\n> The current patch build stats for all possible paths:\n>\n> \"a\"\n> \"a.b\"\n> \"a.b.c\"\n> \"a.b.c.d\"\n>\n> and of course many of the paths will often have JSONB documents as\n> values, not simple scalar values. I wonder if we should limit the depth\n> somehow, and maybe build stats only for scalar values.\n>\n>\n> 2) applying the statistics\n>\n> One of the problems is how to actually use the statistics. For @>\n> operator it's simple enough, because it's (jsonb @> jsonb) so we have\n> direct access to the stats. But often the conditions look like this:\n>\n> jsonb_column ->> 'key' = 'value'\n>\n> so the condition is actually on an expression, not on the JSONB column\n> directly. My solutions were pretty ugly hacks, but Nikita had a neat\n> idea - we can define a custom procedure for each operator, which is\n> responsible for \"calculating\" the statistics for the expression.\n>\n> So in this case \"->>\" will have such \"oprstat\" procedure, which fetches\n> stats for the JSONB column, extracts stats for the \"key\" path. And then\n> we can use that for estimation of the (text = text) condition.\n>\n> This is what 0001 does, pretty much. 
We simply look for expression stats\n> provided by an index, extended statistics, and then - if oprstat is\n> defined for the operator - we try to derive the stats.\n>\n> This opens other interesting opportunities for the future - one of the\n> parts adds oprstat for basic arithmetic operators, which allows deducing\n> statistics for expressions like (a+10) from statistics on column (a).\n>\n> Which seems like a neat feature on it's own, but it also interacts with\n> the extended statistics in somewhat non-obvious ways (especially when\n> estimating GROUP BY cardinalities).\n>\n> Of course, there's a limit of what we can reasonably estimate - for\n> example, there may be statistical dependencies between paths, and this\n> patch does not even attempt to deal with that. In a way, this is similar\n> to correlation between columns, except that here we have a dynamic set\n> of columns, which makes it much much harder. We'd need something like\n> extended stats on steroids, pretty much.\n>\n>\n> I'm sure I've forgotten various important bits - many of them are\n> mentioned or explained in comments, but I'm sure others are not. And I'd\n> bet there are things I forgot about entirely or got wrong. 
So feel free\n> to ask.\n>\n>\n> In any case, I think this seems like a good first step to improve our\n> estimates for JSOB columns.\n>\n> regards\n>\n>\n> [1] https://github.com/postgrespro/postgres/tree/jsonb_stats\n>\n> [2] https://github.com/tvondra/postgres/tree/jsonb_stats_rework\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\nHi,\nFor patch 1:\n\n+ List *statisticsName = NIL; /* optional stats estimat.\nprocedure */\n\nI think if the variable is named estimatorName (or something similar), it\nwould be easier for people to grasp its purpose.\n\n+ /* XXX perhaps full \"statistics\" wording would be better */\n+ else if (strcmp(defel->defname, \"stats\") == 0)\n\nI would recommend (stats sounds too general):\n\n+ else if (strcmp(defel->defname, \"statsestimator\") == 0)\n\n+ statisticsOid = ValidateStatisticsEstimator(statisticsName);\n\nstatisticsOid -> statsEstimatorOid\n\nFor get_oprstat():\n\n+ }\n+ else\n+ return (RegProcedure) InvalidOid;\n\nkeyword else is not needed (considering the return statement in if block).\n\nFor patch 06:\n\n+ /* FIXME Could be before the memset, I guess? Checking\nvardata->statsTuple. 
*/\n+ if (!data->statsTuple)\n+ return false;\n\nI would agree the check can be lifted above the memset call.\n\n+ * XXX This does not really extract any stats, it merely allocates the\nstruct?\n+ */\n+static JsonPathStats\n+jsonPathStatsGetSpecialStats(JsonPathStats pstats, JsonPathStatsType type)\n\nAs comments says, I think allocJsonPathStats() would be better name for the\nfunc.\n\n+ * XXX Why doesn't this do jsonPathStatsGetTypeFreq check similar to what\n+ * jsonPathStatsGetLengthStats does?\n\nI think `jsonPathStatsGetTypeFreq(pstats, jbvArray, 0.0) <= 0.0` check\nshould be added for jsonPathStatsGetArrayLengthStats().\n\nTo be continued ...",
"msg_date": "Sat, 1 Jan 2022 07:33:10 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Sat, Jan 1, 2022 at 7:33 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Dec 31, 2021 at 2:07 PM Tomas Vondra <\n> tomas.vondra@enterprisedb.com> wrote:\n>\n>> Hi,\n>>\n>> One of the complaints I sometimes hear from users and customers using\n>> Postgres to store JSON documents (as JSONB type, of course) is that the\n>> selectivity estimates are often pretty poor.\n>>\n>> Currently we only really have MCV and histograms with whole documents,\n>> and we can deduce some stats from that. But that is somewhat bogus\n>> because there's only ~100 documents in either MCV/histogram (with the\n>> default statistics target). And moreover we discard all \"oversized\"\n>> values (over 1kB) before even calculating those stats, which makes it\n>> even less representative.\n>>\n>> A couple weeks ago I started playing with this, and I experimented with\n>> improving extended statistics in this direction. After a while I noticed\n>> a forgotten development branch from 2016 which tried to do this by\n>> adding a custom typanalyze function, which seemed like a more natural\n>> idea (because it's really a statistics for a single column).\n>>\n>> But then I went to pgconf NYC in early December, and I spoke to Oleg\n>> about various JSON-related things, and he mentioned they've been working\n>> on this topic some time ago too, but did not have time to pursue it. So\n>> he pointed me to a branch [1] developed by Nikita Glukhov.\n>>\n>> I like Nikita's branch because it solved a couple architectural issues\n>> that I've been aware of, but only solved them in a rather hackish way.\n>>\n>> I had a discussion with Nikita about his approach what can we do to move\n>> it forward. He's focusing on other JSON stuff, but he's OK with me\n>> taking over and moving it forward. 
So here we go ...\n>>\n>> Nikita rebased his branch recently, I've kept improving it in various\n>> (mostly a lot of comments and docs, and some minor fixes and tweaks).\n>> I've pushed my version with a couple extra commits in [2], but you can\n>> ignore that except if you want to see what I added/changed.\n>>\n>> Attached is a couple patches adding adding the main part of the feature.\n>> There's a couple more commits in the github repositories, adding more\n>> advanced features - I'll briefly explain those later, but I'm not\n>> including them here because those are optional features and it'd be\n>> distracting to include them here.\n>>\n>> There are 6 patches in the series, but the magic mostly happens in parts\n>> 0001 and 0006. The other parts are mostly just adding infrastructure,\n>> which may be a sizeable amount of code, but the changes are fairly\n>> simple and obvious. So let's focus on 0001 and 0006.\n>>\n>> To add JSON statistics we need to do two basic things - we need to build\n>> the statistics and then we need to allow using them while estimating\n>> conditions.\n>>\n>>\n>> 1) building stats\n>>\n>> Let's talk about building the stats first. The patch does one of the\n>> things I experimented with - 0006 adds a jsonb_typanalyze function, and\n>> it associates it with the data type. The function extracts paths and\n>> values from the JSONB document, builds the statistics, and then stores\n>> the result in pg_statistic as a new stakind.\n>>\n>> I've been planning to store the stats in pg_statistic too, but I've been\n>> considering to use a custom data type. 
The patch does something far more\n>> elegant - it simply uses stavalues to store an array of JSONB documents,\n>> each describing stats for one path extracted from the sampled documents.\n>>\n>> One (very simple) element of the array might look like this:\n>>\n>> {\"freq\": 1,\n>> \"json\": {\n>> \"mcv\": {\n>> \"values\": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],\n>> \"numbers\": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]},\n>> \"width\": 19,\n>> \"distinct\": 10,\n>> \"nullfrac\": 0,\n>> \"correlation\": 0.10449},\n>> \"path\": \"$.\\\"a\\\"\",\n>> \"freq_null\": 0, \"freq_array\": 0, \"freq_object\": 0,\n>> \"freq_string\": 0, \"freq_boolean\": 0, \"freq_numeric\": 0}\n>>\n>> In this case there's only a MCV list (represented by two arrays, just\n>> like in pg_statistic), but there might be another part with a histogram.\n>> There's also the other columns we'd expect to see in pg_statistic.\n>>\n>> In principle, we need pg_statistic for each path we extract from the\n>> JSON documents and decide it's interesting enough for estimation. There\n>> are probably other ways to serialize/represent this, but I find using\n>> JSONB for this pretty convenient because we need to deal with a mix of\n>> data types (for the same path), and other JSON specific stuff. Storing\n>> that in Postgres arrays would be problematic.\n>>\n>> I'm sure there's plenty open questions - for example I think we'll need\n>> some logic to decide which paths to keep, otherwise the statistics can\n>> get quite big, if we're dealing with large / variable documents. We're\n>> already doing something similar for MCV lists.\n>>\n>> One of Nikita's patches not included in this thread allow \"selective\"\n>> statistics, where you can define in advance a \"filter\" restricting which\n>> parts are considered interesting by ANALYZE. That's interesting, but I\n>> think we need some simple MCV-like heuristics first anyway.\n>>\n>> Another open question is how deep the stats should be. 
Imagine documents\n>> like this:\n>>\n>> {\"a\" : {\"b\" : {\"c\" : {\"d\" : ...}}}}\n>>\n>> The current patch build stats for all possible paths:\n>>\n>> \"a\"\n>> \"a.b\"\n>> \"a.b.c\"\n>> \"a.b.c.d\"\n>>\n>> and of course many of the paths will often have JSONB documents as\n>> values, not simple scalar values. I wonder if we should limit the depth\n>> somehow, and maybe build stats only for scalar values.\n>>\n>>\n>> 2) applying the statistics\n>>\n>> One of the problems is how to actually use the statistics. For @>\n>> operator it's simple enough, because it's (jsonb @> jsonb) so we have\n>> direct access to the stats. But often the conditions look like this:\n>>\n>> jsonb_column ->> 'key' = 'value'\n>>\n>> so the condition is actually on an expression, not on the JSONB column\n>> directly. My solutions were pretty ugly hacks, but Nikita had a neat\n>> idea - we can define a custom procedure for each operator, which is\n>> responsible for \"calculating\" the statistics for the expression.\n>>\n>> So in this case \"->>\" will have such \"oprstat\" procedure, which fetches\n>> stats for the JSONB column, extracts stats for the \"key\" path. And then\n>> we can use that for estimation of the (text = text) condition.\n>>\n>> This is what 0001 does, pretty much. 
We simply look for expression stats\n>> provided by an index, extended statistics, and then - if oprstat is\n>> defined for the operator - we try to derive the stats.\n>>\n>> This opens other interesting opportunities for the future - one of the\n>> parts adds oprstat for basic arithmetic operators, which allows deducing\n>> statistics for expressions like (a+10) from statistics on column (a).\n>>\n>> Which seems like a neat feature on it's own, but it also interacts with\n>> the extended statistics in somewhat non-obvious ways (especially when\n>> estimating GROUP BY cardinalities).\n>>\n>> Of course, there's a limit of what we can reasonably estimate - for\n>> example, there may be statistical dependencies between paths, and this\n>> patch does not even attempt to deal with that. In a way, this is similar\n>> to correlation between columns, except that here we have a dynamic set\n>> of columns, which makes it much much harder. We'd need something like\n>> extended stats on steroids, pretty much.\n>>\n>>\n>> I'm sure I've forgotten various important bits - many of them are\n>> mentioned or explained in comments, but I'm sure others are not. And I'd\n>> bet there are things I forgot about entirely or got wrong. 
So feel free\n>> to ask.\n>>\n>>\n>> In any case, I think this seems like a good first step to improve our\n>> estimates for JSOB columns.\n>>\n>> regards\n>>\n>>\n>> [1] https://github.com/postgrespro/postgres/tree/jsonb_stats\n>>\n>> [2] https://github.com/tvondra/postgres/tree/jsonb_stats_rework\n>>\n>> --\n>> Tomas Vondra\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>\n>\n> Hi,\n> For patch 1:\n>\n> + List *statisticsName = NIL; /* optional stats estimat.\n> procedure */\n>\n> I think if the variable is named estimatorName (or something similar), it\n> would be easier for people to grasp its purpose.\n>\n> + /* XXX perhaps full \"statistics\" wording would be better */\n> + else if (strcmp(defel->defname, \"stats\") == 0)\n>\n> I would recommend (stats sounds too general):\n>\n> + else if (strcmp(defel->defname, \"statsestimator\") == 0)\n>\n> + statisticsOid = ValidateStatisticsEstimator(statisticsName);\n>\n> statisticsOid -> statsEstimatorOid\n>\n> For get_oprstat():\n>\n> + }\n> + else\n> + return (RegProcedure) InvalidOid;\n>\n> keyword else is not needed (considering the return statement in if block).\n>\n> For patch 06:\n>\n> + /* FIXME Could be before the memset, I guess? Checking\n> vardata->statsTuple. 
*/\n> + if (!data->statsTuple)\n> + return false;\n>\n> I would agree the check can be lifted above the memset call.\n>\n> + * XXX This does not really extract any stats, it merely allocates the\n> struct?\n> + */\n> +static JsonPathStats\n> +jsonPathStatsGetSpecialStats(JsonPathStats pstats, JsonPathStatsType type)\n>\n> As comments says, I think allocJsonPathStats() would be better name for\n> the func.\n>\n> + * XXX Why doesn't this do jsonPathStatsGetTypeFreq check similar to what\n> + * jsonPathStatsGetLengthStats does?\n>\n> I think `jsonPathStatsGetTypeFreq(pstats, jbvArray, 0.0) <= 0.0` check\n> should be added for jsonPathStatsGetArrayLengthStats().\n>\n> To be continued ...\n>\nHi,\n\n+static JsonPathStats\n+jsonStatsFindPathStats(JsonStats jsdata, char *path, int pathlen)\n\nStats appears twice in the method name. I think findJsonPathStats() should\nsuffice.\nIt should check `if (jsdata->nullfrac >= 1.0)` as jsonStatsGetPathStatsStr\ndoes.\n\n+JsonPathStats\n+jsonStatsGetPathStatsStr(JsonStats jsdata, const char *subpath, int\nsubpathlen)\n\nThis func can be static, right ?\nI think findJsonPathStatsWithPrefix() would be a better name for the func.\n\n+ * XXX Doesn't this need ecape_json too?\n+ */\n+static void\n+jsonPathAppendEntryWithLen(StringInfo path, const char *entry, int len)\n+{\n+ char *tmpentry = pnstrdup(entry, len);\n+ jsonPathAppendEntry(path, tmpentry);\n\necape_json() is called within jsonPathAppendEntry(). 
The XXX comment can be\ndropped.\n\n+jsonPathStatsGetArrayIndexSelectivity(JsonPathStats pstats, int index)\n\nIt seems getJsonSelectivityWithArrayIndex() would be a better name.\n\n+ sel = scalarineqsel(NULL, operator,\n+ operator == JsonbGtOperator ||\n+ operator == JsonbGeOperator,\n+ operator == JsonbLeOperator ||\n+ operator == JsonbGeOperator,\n\nLooking at the comment for scalarineqsel():\n\n * scalarineqsel - Selectivity of \"<\", \"<=\", \">\", \">=\" for scalars.\n *\n * This is the guts of scalarltsel/scalarlesel/scalargtsel/scalargesel.\n * The isgt and iseq flags distinguish which of the four cases apply.\n\nIt seems JsonbLtOperator doesn't appear in the call, can I ask why ?\n\nCheers",
"msg_date": "Sat, 1 Jan 2022 13:16:47 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Sat, Jan 1, 2022 at 11:07 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> 0006-Add-jsonb-statistics-20211230.patch\n\nHi Tomas,\n\n-CREATE OR REPLACE FUNCTION explain_jsonb(sql_query text)\n+CREATE OR REPLACE FUNCTION explain_jsonb(sql_query text)\n\nhttps://cirrus-ci.com/task/6405547984420864\n\nIt looks like there is a Unicode BOM sequence in there, which is\nupsetting our Windows testing but not the 3 Unixes (not sure why).\nProbably added by an editor.\n\n\n",
"msg_date": "Thu, 6 Jan 2022 09:13:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Fri, 31 Dec 2021 at 22:07, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n> The patch does something far more\n> elegant - it simply uses stavalues to store an array of JSONB documents,\n> each describing stats for one path extracted from the sampled documents.\n\nSounds good.\n\n> I'm sure there's plenty open questions - for example I think we'll need\n> some logic to decide which paths to keep, otherwise the statistics can\n> get quite big, if we're dealing with large / variable documents. We're\n> already doing something similar for MCV lists.\n>\n> One of Nikita's patches not included in this thread allow \"selective\"\n> statistics, where you can define in advance a \"filter\" restricting which\n> parts are considered interesting by ANALYZE. That's interesting, but I\n> think we need some simple MCV-like heuristics first anyway.\n>\n> Another open question is how deep the stats should be. Imagine documents\n> like this:\n>\n> {\"a\" : {\"b\" : {\"c\" : {\"d\" : ...}}}}\n>\n> The current patch build stats for all possible paths:\n>\n> \"a\"\n> \"a.b\"\n> \"a.b.c\"\n> \"a.b.c.d\"\n>\n> and of course many of the paths will often have JSONB documents as\n> values, not simple scalar values. I wonder if we should limit the depth\n> somehow, and maybe build stats only for scalar values.\n\nThe user interface for designing filters sounds hard, so I'd hope we\ncan ignore that for now.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 5 Jan 2022 20:22:31 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On 1/1/22 16:33, Zhihong Yu wrote:\n> Hi,\n> For patch 1:\n> \n> + List *statisticsName = NIL; /* optional stats estimat. \n> procedure */\n> \n> I think if the variable is named estimatorName (or something similar), \n> it would be easier for people to grasp its purpose.\n> \n\nI agree \"statisticsName\" might be too generic or confusing, but I'm not \nsure \"estimator\" is an improvement. Because this is not an \"estimator\" \n(in the sense of estimating selectivity). It \"transforms\" statistics to \nmatch the expression.\n\n> + /* XXX perhaps full \"statistics\" wording would be better */\n> + else if (strcmp(defel->defname, \"stats\") == 0)\n> \n> I would recommend (stats sounds too general):\n> \n> + else if (strcmp(defel->defname, \"statsestimator\") == 0)\n> \n> + statisticsOid = ValidateStatisticsEstimator(statisticsName);\n> \n> statisticsOid -> statsEstimatorOid\n> \n\nSame issue with the \"estimator\" bit :-(\n\n> For get_oprstat():\n> \n> + }\n> + else\n> + return (RegProcedure) InvalidOid;\n> \n> keyword else is not needed (considering the return statement in if block).\n> \n\nTrue, but this is how the other get_ functions in lsyscache.c do it. \nLike get_oprjoin().\n\n> For patch 06:\n> \n> + /* FIXME Could be before the memset, I guess? Checking \n> vardata->statsTuple. 
*/\n> + if (!data->statsTuple)\n> + return false;\n> \n> I would agree the check can be lifted above the memset call.\n> \n\nOK.\n\n> + * XXX This does not really extract any stats, it merely allocates the \n> struct?\n> + */\n> +static JsonPathStats\n> +jsonPathStatsGetSpecialStats(JsonPathStats pstats, JsonPathStatsType type)\n> \n> As comments says, I think allocJsonPathStats() would be better name for \n> the func.\n> \n> + * XXX Why doesn't this do jsonPathStatsGetTypeFreq check similar to what\n> + * jsonPathStatsGetLengthStats does?\n> \n> I think `jsonPathStatsGetTypeFreq(pstats, jbvArray, 0.0) <= 0.0` check \n> should be added for jsonPathStatsGetArrayLengthStats().\n> \n> To be continued ...\n\nOK. I'll see if Nikita has some ideas about the naming changes.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Jan 2022 20:12:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On 1/1/22 22:16, Zhihong Yu wrote:\n> Hi,\n> \n> +static JsonPathStats\n> +jsonStatsFindPathStats(JsonStats jsdata, char *path, int pathlen)\n> \n> Stats appears twice in the method name. I think findJsonPathStats() \n> should suffice.\n> It should check `if (jsdata->nullfrac >= 1.0)` \n> as jsonStatsGetPathStatsStr does.\n> \n> +JsonPathStats\n> +jsonStatsGetPathStatsStr(JsonStats jsdata, const char *subpath, int \n> subpathlen)\n> \n> This func can be static, right ?\n> I think findJsonPathStatsWithPrefix() would be a better name for the func.\n> \n> + * XXX Doesn't this need ecape_json too?\n> + */\n> +static void\n> +jsonPathAppendEntryWithLen(StringInfo path, const char *entry, int len)\n> +{\n> + char *tmpentry = pnstrdup(entry, len);\n> + jsonPathAppendEntry(path, tmpentry);\n> \n> ecape_json() is called within jsonPathAppendEntry(). The XXX comment can \n> be dropped.\n> \n> +jsonPathStatsGetArrayIndexSelectivity(JsonPathStats pstats, int index)\n> \n> It seems getJsonSelectivityWithArrayIndex() would be a better name.\n> \n\nThanks. 
I'll think about the naming changes.\n\n> + sel = scalarineqsel(NULL, operator,\n> + operator == JsonbGtOperator ||\n> + operator == JsonbGeOperator,\n> + operator == JsonbLeOperator ||\n> + operator == JsonbGeOperator,\n> \n> Looking at the comment for scalarineqsel():\n> \n> * scalarineqsel - Selectivity of \"<\", \"<=\", \">\", \">=\" for scalars.\n> *\n> * This is the guts of scalarltsel/scalarlesel/scalargtsel/scalargesel.\n> * The isgt and iseq flags distinguish which of the four cases apply.\n> \n> It seems JsonbLtOperator doesn't appear in the call, can I ask why ?\n> \n\nBecause the scalarineqsel signature is this\n\n scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, bool iseq,\n Oid collation,\n VariableStatData *vardata, Datum constval,\n Oid consttype)\n\nso\n\n /* is it greater or greater-or-equal */\n isgt = operator == JsonbGtOperator ||\n operator == JsonbGeOperator\n\n /* is it equality? */\n iseq = operator == JsonbLeOperator ||\n operator == JsonbGeOperator,\n\nSo I think this is correct. A comment explaining this would be nice.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Jan 2022 20:22:33 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On 1/5/22 21:13, Thomas Munro wrote:\n> On Sat, Jan 1, 2022 at 11:07 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> 0006-Add-jsonb-statistics-20211230.patch\n> \n> Hi Tomas,\n> \n> -CREATE OR REPLACE FUNCTION explain_jsonb(sql_query text)\n> +CREATE OR REPLACE FUNCTION explain_jsonb(sql_query text)\n> \n> https://cirrus-ci.com/task/6405547984420864\n> \n> It looks like there is a Unicode BOM sequence in there, which is\n> upsetting our Windows testing but not the 3 Unixes (not sure why).\n> Probably added by an editor.\n> \n\nThanks, fixed along with some whitespace issues.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 6 Jan 2022 20:26:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "\n\nOn 1/5/22 21:22, Simon Riggs wrote:\n> On Fri, 31 Dec 2021 at 22:07, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>> The patch does something far more\n>> elegant - it simply uses stavalues to store an array of JSONB documents,\n>> each describing stats for one path extracted from the sampled documents.\n> \n> Sounds good.\n> \n>> I'm sure there's plenty open questions - for example I think we'll need\n>> some logic to decide which paths to keep, otherwise the statistics can\n>> get quite big, if we're dealing with large / variable documents. We're\n>> already doing something similar for MCV lists.\n>>\n>> One of Nikita's patches not included in this thread allow \"selective\"\n>> statistics, where you can define in advance a \"filter\" restricting which\n>> parts are considered interesting by ANALYZE. That's interesting, but I\n>> think we need some simple MCV-like heuristics first anyway.\n>>\n>> Another open question is how deep the stats should be. Imagine documents\n>> like this:\n>>\n>> {\"a\" : {\"b\" : {\"c\" : {\"d\" : ...}}}}\n>>\n>> The current patch build stats for all possible paths:\n>>\n>> \"a\"\n>> \"a.b\"\n>> \"a.b.c\"\n>> \"a.b.c.d\"\n>>\n>> and of course many of the paths will often have JSONB documents as\n>> values, not simple scalar values. I wonder if we should limit the depth\n>> somehow, and maybe build stats only for scalar values.\n> \n> The user interface for designing filters sounds hard, so I'd hope we\n> can ignore that for now.\n> \n\nNot sure I understand. I wasn't suggesting any user-defined filtering, \nbut something done by default, similarly to what we do for regular MCV \nlists, based on frequency. We'd include frequent paths while excluding \nrare ones.\n\nSo no need for a user interface.\n\nThat might not work for documents with stable schema and a lot of \ntop-level paths, because all the top-level paths have 1.0 frequency. 
But \nfor documents with dynamic schema (different documents having different \nschemas/paths) it might help.\n\nSimilarly for the non-scalar values - I don't think we can really keep \nregular statistics on such values (for the same reason why it's not \nenough for whole JSONB columns), so why build/store that anyway?\n\n\nNikita did implement a way to specify custom filters using jsonpath, but \nI did not include that in this patch series. And questions regarding \nthe interface were one of the reasons.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Jan 2022 20:48:39 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "Hi!\n\nI am glad that you found my very old patch interesting and started to\nwork on it. We failed to post it in 2016 mostly because we were not\nsatisfied with JSONB storage. Also we decided to wait for completion\nof work on extended statistics as we thought that it could help us.\nBut in early 2017 we switched to SQL/JSON and forgot about this patch.\n\n\nI think custom datatype is necessary for better performance. With a\nplain JSONB we need to do a lot of work for extraction of path stats:\n - iterate through MCV/Histogram JSONB arrays\n - cast numeric values to float, string to text etc.\n - build PG arrays from extracted datums\n - form pg_statistic tuple.\n\nWith a custom data type we could store pg_statistic tuple unmodified\nand use it without any overhead. But then we need modify a bit\nVariableStatData and several functions to pass additional nullfrac\ncorrections.\n\nMaybe simple record type (path text, data pg_statistic, ext_data jsonb)\nwould be enough.\n\n\n\nAlso there is an idea to store per-path separately in pg_statistic_ext\nrows using expression like (jb #> '{foo,bar}') as stxexprs. This could\nalso help user to select which paths to analyze simply by using some\nsort of CREATE STATISTICS. But it is really unclear how to:\n * store pg_statistic_ext rows from typanalyze\n * form path expressions for array elements (maybe add new jsonpath\n operator)\n * match various -> operator chains to stxexprs\n * jsonpath-like languages need simple API for searching by stxexprs\n\n\n\nPer-path statistics should only be collected for scalars. This can be\nenabled by flag JsonAnalyzeContext.scalarsOnly. But there are is a\nproblem: computed MCVs and histograms will be broken and we will not be\nable to use them for queries like (jb > const) in general case. Also\nwe will not be and use internally in scalarineqsel() and var_eq_const()\n(see jsonSelectivity()). 
Instead, we will have to implement our own\nestimator functions for JSONB comparison operators that will correctly\nuse our hacked MCVs and histograms (and maybe not all cases will be\nsupported; for example, comparison to scalars only).\n\nIt's possible to replace objects and arrays with empty ones when\nscalarsOnly is set to keep correct frequencies of non-scalars.\nBut there is an old bug in JSONB comparison: empty arrays are placed\nbefore other values in the JSONB sort order, although they should go\nafter all scalars. So we also need to distinguish empty and non-empty\narrays here.\n\n\n\nI tried to fix a major part of the places marked as XXX and FIXME; the new\nversion of the patches is attached. There are a lot of changes, you\ncan see them in a step-by-step form in the corresponding branch\njsonb_stats_20220122 in our GitHub repo [1].\n\n\nBelow is the explanation of the fixed XXXs:\n\nPatch 0001\n\nsrc/backend/commands/operatorcmds.c:\n\n> XXX Maybe not the right name, because \"estimator\" implies we're \n> calculating selectivity. But we're actually deriving statistics for \n> an expression.\n\nRenamed to \"Derivator\".\n\n> XXX perhaps full \"statistics\" wording would be better\n\nRenamed to \"statistics\".\n\nsrc/backend/utils/adt/selfuncs.c:\n\n> examine_operator_expression():\n\n> XXX Not sure why this returns bool - we ignore the return value\n> anyway. We might as well return the calculated vardata (or NULL).\n\nOprstat was changed to return void.\n\n> XXX Not sure what to do about recursion - there can be another OpExpr\n> in one of the arguments, and we should call this recursively from the\n> oprstat procedure. But that's not possible, because it's marked as\n> static.\n\nOprstat call chain: get_restriction_variable() => examine_variable() =>\nexamine_operator_expression().\n\nThe main thing here is that OpExpr with oprstat acts like an ordinary Var.\n\n> examine_variable():\n> XXX Shouldn't this put more restrictions on the OpExpr? E.g. 
that\n> one of the arguments has to be a Const or something?\n\nThis is simply a responsibility of oprstat.\n\n\nPatch 0004\n\n> XXX Not sure we want to add these functions to jsonb.h, which is the\n> public API. Maybe it belongs rather to jsonb_typanalyze.c or\n> elsewhere, closer to how it's used?\n\nMaybe it needs to be moved to the new file jsonb_utils.h. I think these\nfunctions become very useful when we start to build JSONBs with\npredefined structure.\n\n\n> pushJsonbValue():\n> XXX I'm not quite sure why we actually do this? Why do we need to\n> change how JsonbValue is converted to Jsonb for the statistics patch?\n\nScalars in JSONB are encoded as one-element pseudo-array containers.\nSo when we are inserting binary JsonbValues, which were initialized\ndirectly from JSONB datums, inside another non-empty JSONB in\npushJsonbValue(), all values except scalars are inserted as\nexpected but scalars become [scalar]. So we need to extract scalar\nvalues in caller functions or in pushJsonbValue(). I think\nautoextraction in pushJsonbValue() is better. This case simply was not\nused before the introduction of JSONB stats.\n\n\nPatch 0006\n\nsrc/backend/utils/adt/jsonb_selfuncs.c\n\n> jsonPathStatsGetSpecialStats()\n> XXX This does not really extract any stats, it merely allocates the\n> struct?\n\nRenamed with \"Alloc\" suffix.\n\n> jsonPathStatsGetArrayLengthStats()\n> XXX Why doesn't this do jsonPathStatsGetTypeFreq check similar to\n> what jsonPathStatsGetLengthStats does?\n\n\"length\" stats were collected inside parent paths, but \"array_length\"\nand \"avg_array_length\" stats were collected inside child array paths.\nThis resulted in inconsistencies in TypeFreq checks.\n\nI have removed \"length\" stats, moved \"array_length\" and\n\"avg_array_length\" to the parent path, and added separate\n\"object_length\" stats. TypeFreq checks become consistent.\n\n> XXX Seems to do essentially what jsonStatsFindPath, except that it\n> also considers jsdata->prefix. 
Seems fairly easy to combine those\n> into a single function.\n\nI don't think that it would be better to combine those functions into\none with a considerPrefix flag, because jsonStatsFindPathStats() is\nsome kind of low-level function which is called only in two places.\nIn all other places considerPrefix will be true. Also\njsonStatsFindPathStatsStr() is exported in jsonb_selfuncs.h to give\nexternal jsonpath-like query operators the ability to use JSON statistics.\n\n> jsonPathStatsCompare()\n> XXX Seems a bit convoluted to first cast it to Datum, then Jsonb ...\n> Datum const *pdatum = pv2;\n> Jsonb\t *jsonb = DatumGetJsonbP(*pdatum);\n\nThe problem of simply using 'jsonb = *(Jsonb **) pv2' is that\nDatumGetJsonbP() may deTOAST datums.\n\n> XXX Not sure about this? Does empty path mean global stats?\nEmpty \"path\" is simply a sign of invalid stats data.\n\n\n> jsonStatsConvertArray()\n> FIXME Does this actually work on all 32/64-bit systems? What if typid\n> is FLOAT8OID or something? Should look at TypeCache instead, probably.\n\nUsed get_typlenbyvalalign() instead of hardcoded values.\n\n> jsonb_stats()\n> XXX It might be useful to allow recursion, i.e.\n> get_restriction_variable might derive statistics too.\n> I don't think it does that now, right?\n\nget_restriction_variable() already might derive statistics: it calls\nexamine_variable() on both its operands, and examine_variable() calls\nexamine_operator_expression().\n\n> XXX Could we also get varonleft=false in useful cases?\n\nAll supported JSONB operators have the signature jsonb OP arg.\nIf varonleft = false, then we need to derive stats for an expression like\n '{\"foo\": \"bar\"}' -> text_column\nhaving stats for text_column. 
It is possible to implement too, but it\nis not related to JSONB stats and needs to be done in a separate\npatch.\n\n\n> jsonSelectivityContains():\n> XXX This really needs more comments explaining the logic.\n\nI have refactored this function and added comments.\n\n\n> jsonGetFloat4():\n> XXX Not sure assert is a sufficient protection against different\n> types of JSONB values to be passed in.\n\nI have refactored this function by passing a default value for\nthe non-numeric JSONB case.\n\n> jsonPathAppendEntryWithLen()\n> XXX Doesn't this need ecape_json too?\n\nComment is removed, because jsonPathAppendEntry() is called inside.\n\n> jsonPathStatsGetTypeFreq()\n> FIXME This is really hard to read/understand, with two nested ternary\n> operators.\n\nI have refactored this place.\n\n> XXX Seems more like an error, no? Why ignore it?\n\nIt's not an error, it's a request for a non-numeric type. Length values\nare always numeric, and the frequency of non-numeric values is 0.\n\nThe possible case when it could happen is jsonpath '$.size() == \"foo\"'.\nEstimator for operator == will check the frequency of strings in .size()\nand it will be 0.\n\n\n> jsonPathStatsFormTuple()\n> FIXME What does this mean?\n\nThere is no need to transform ordinary root path stats tuple, it can be\nsimply copied.\n\n\njsonb_typanalyze.c\n\n> XXX We need entry+lenth because JSON path elements may contain null\n> bytes, I guess?\n\n'entry' may not be zero-terminated when it is pointing to JSONB keys,\nso 'len' is a necessary field. 'len' is also used for faster entry\n comparison, to distinguish array entries ('len' == -1).\n\n> XXX Sould be JsonPathEntryMatch as it deals with JsonPathEntry nodes\n> not whole paths, no?\n> XXX Again, maybe JsonPathEntryHash would be a better name?\n\nFunctions renamed using JsonPathEntry prefix, JsonPath typedef removed.\n\n\n> JsonPathMatch()\n> XXX Seems a bit silly to return int, when the return statement only\n> really returns bool (because of how it compares paths). 
It's not really\n> a comparator for sorting, for example.\n\nThis function is implementation of HashCompareFunc and it needs to\nreturn int.\n\n> jsonAnalyzeJson()\n> XXX The name seems a bit weird, with the two json bits.\n\nRenamed to jsonAnalyzeCollectPaths(). The two similar functions were\nrenamed too.\n\n> jsonAnalyzeJson():\n> XXX The mix of break/return statements in this block is really\n> confusing.\n\nI have refactored this place using only breaks.\n\n> XXX not sure why we're doing this?\n\nManual recursion into containers by creating child iterator together\nwith skipNested=true flag is used to give jsonAnalyzeJsonValue()\nability to access jbvBinary containers.\n\n\n> compute_json_stats()\n> XXX Not sure what the first branch is doing (or supposed to)?\n\n> XXX It's not immediately clear why this is (-1) and not simply\n> NULL. It crashes, so presumably it's used to tweak the behavior,\n> but it's not clear why/how, and it affects place that is pretty\n> far away, and so not obvious. We should use some sort of flag\n> with a descriptive name instead.\n\n> XXX If I understand correctly, we simply collect all paths first,\n> without accumulating any Values. And then in the next step we\n> process each path independently, probably to save memory (we\n> don't want to accumulate all values for all paths, with a lot\n> of duplicities).\n\nThere are two variants of stats collection:\n * single-pass - collect all values for all paths\n * multi-pass - collect only values for a one path at each pass\n\nThe first variant can consume too much memory (jsonb iteration produces\na lot of garbage etc.), but works faster than second.\n\nThe first 'if (false)' is used for manual selection of one of this\nvariants. 
This selection should be controlled by some user-specified\noption (maybe GUC), or the first variant can be simply removed.\n\njsonAnalyzeJson()'s parameter of type JsonPathAnlStats * determines\nwhich paths we need to consider for the value collection:\n * NULL - collect values for all paths\n * -1 - do not collect any values\n * stats - collect values only for a given path\n\nThe last variant really is unused because we already have another\nfunction jsonAnalyzeJsonPath(), which is optimized for selective path\nvalues collection (using object key accessor JSONB functions instead of\nfull JSONB iteration). I have replaced this strange parameter with a\nsimple boolean flag.\n\n> XXX Could the parameters be different on other platforms?\n\nUsed get_typlenbyvalalign(JSONBOID) instead of hardcoded values.\n\n> jsonAnalyzePathValues()\n> XXX Do we need to initialize all slots?\n\nI have copied here the following comment from extended_stats.c:\n/*\n * The fields describing the stats->stavalues[n] element types default\n * to the type of the data being analyzed, but the type-specific\n * typanalyze function can change them if it wants to store something\n * else.\n */\n\n> XXX Not sure why we divide it by the number of json values?\n\nWe divide counts of lengths by the total number of json values to\ncompute the correct nullfrac, i.e. not all input jsons have lengths;\nthe length of scalar jsons is NULL.\n\n\n\n[1] https://github.com/postgrespro/postgres/tree/jsonb_stats_20220122\n\n--\nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 23 Jan 2022 03:24:33 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On 1/23/22 01:24, Nikita Glukhov wrote:\n> Hi!\n> \n> I am glad that you found my very old patch interesting and started to\n> work on it. We failed to post it in 2016 mostly because we were not\n> satisfied with JSONB storage. Also we decided to wait for completion\n> of work on extended statistics as we thought that it could help us.\n> But in early 2017 we switched to SQL/JSON and forgot about this patch.\n>\n\nUnderstood. Let's see how feasible this idea is and if we can move this \nforward.\n\n> \n> I think custom datatype is necessary for better performance. With a\n> plain JSONB we need to do a lot of work for extraction of path stats:\n> - iterate through MCV/Histogram JSONB arrays\n> - cast numeric values to float, string to text etc.\n> - build PG arrays from extracted datums\n> - form pg_statistic tuple.\n> \n> With a custom data type we could store pg_statistic tuple unmodified\n> and use it without any overhead. But then we need modify a bit\n> VariableStatData and several functions to pass additional nullfrac\n> corrections.\n> \n\nI'm not against evaluating/exploring alternative storage formats, but my \nfeeling is the real impact on performance will be fairly low. At least I \nhaven't seen this as very expensive while profiling the patch so far. Of \ncourse, I may be wrong, and it may be more significant in some cases.\n\n> Maybe simple record type (path text, data pg_statistic, ext_data jsonb)\n> would be enough.\n> \nMaybe, but then you still need to store a bunch of those, right? So \neither an array (likely toasted) or 1:M table. I'm not sure it's goiing \nto be much cheaper than JSONB.\n\nI'd suggest we focus on what we need to store first, which seems like \ntha primary question, and worry about the exact storage format then.\n\n\n> Also there is an idea to store per-path separately in pg_statistic_ext\n> rows using expression like (jb #> '{foo,bar}') as stxexprs. 
This could\n> also help user to select which paths to analyze simply by using some\n> sort of CREATE STATISTICS. But it is really unclear how to:\n> * store pg_statistic_ext rows from typanalyze\n> * form path expressions for array elements (maybe add new jsonpath\n> operator)\n> * match various -> operator chains to stxexprs\n> * jsonpath-like languages need simple API for searching by stxexprs\n> \n\nSure, you can do statistics on expressions, right? Of course, if that \nexpression produces JSONB value, that's not very useful at the moment. \nMaybe we could have two typanalyze functions - one for regular analyze, \none for extended statistics?\n\nThat being said, I'm not sure extended stats are a good match for this. \nMy feeling was we should collect these stats for all JSONB columns, \nwhich is why I argued for putting that in pg_statistic.\n\n> \n> \n> Per-path statistics should only be collected for scalars. This can be\n> enabled by flag JsonAnalyzeContext.scalarsOnly. But there are is a\n> problem: computed MCVs and histograms will be broken and we will not be\n> able to use them for queries like (jb > const) in general case. Also\n> we will not be and use internally in scalarineqsel() and var_eq_const()\n> (see jsonSelectivity()). Instead, we will have to implement own\n> estimator functions for JSONB comparison operators that will correctly\n> use our hacked MCVs and histograms (and maybe not all cases will be\n> supported; for example, comparison to scalars only).\n>\n\nYeah, but maybe that's an acceptable trade-off? 
I mean, if we can \nimprove estimates for most clauses, and there's some number of clauses \nthat are estimated just like without stats, that's still an improvement, \nright?\n\n> It's possible to replace objects and arrays with empty ones when\n> scalarsOnly is set to keep correct frequencies of non-scalars.\n> But there is an old bug in JSONB comparison: empty arrays are placed\n> before other values in the JSONB sort order, although they should go\n> after all scalars. So we need also to distinguish empty and non-empty\n> arrays here.\n> \n\nHmmm ...\n> \n> \n> I tried to fix a major part of places marked as XXX and FIXME, the new\n> version of the patches is attached. There are a lot of changes, you\n> can see them in a step-by-step form in the corresponding branch\n> jsonb_stats_20220122 in our GitHub repo [1].\n> \n\nThanks! I'll go through the changes soon.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jan 2022 23:19:49 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Tue, 25 Jan 2022 at 03:50, Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n>\n> On 1/23/22 01:24, Nikita Glukhov wrote:\n> > Hi!\n> >\n> > I am glad that you found my very old patch interesting and started to\n> > work on it. We failed to post it in 2016 mostly because we were not\n> > satisfied with JSONB storage. Also we decided to wait for completion\n> > of work on extended statistics as we thought that it could help us.\n> > But in early 2017 we switched to SQL/JSON and forgot about this patch.\n> >\n>\n> Understood. Let's see how feasible this idea is and if we can move this\n> forward.\n>\n> >\n> > I think custom datatype is necessary for better performance. With a\n> > plain JSONB we need to do a lot of work for extraction of path stats:\n> > - iterate through MCV/Histogram JSONB arrays\n> > - cast numeric values to float, string to text etc.\n> > - build PG arrays from extracted datums\n> > - form pg_statistic tuple.\n> >\n> > With a custom data type we could store pg_statistic tuple unmodified\n> > and use it without any overhead. But then we need modify a bit\n> > VariableStatData and several functions to pass additional nullfrac\n> > corrections.\n> >\n>\n> I'm not against evaluating/exploring alternative storage formats, but my\n> feeling is the real impact on performance will be fairly low. At least I\n> haven't seen this as very expensive while profiling the patch so far. Of\n> course, I may be wrong, and it may be more significant in some cases.\n>\n> > Maybe simple record type (path text, data pg_statistic, ext_data jsonb)\n> > would be enough.\n> >\n> Maybe, but then you still need to store a bunch of those, right? So\n> either an array (likely toasted) or 1:M table. 
I'm not sure it's goiing\n> to be much cheaper than JSONB.\n>\n> I'd suggest we focus on what we need to store first, which seems like\n> tha primary question, and worry about the exact storage format then.\n>\n>\n> > Also there is an idea to store per-path separately in pg_statistic_ext\n> > rows using expression like (jb #> '{foo,bar}') as stxexprs. This could\n> > also help user to select which paths to analyze simply by using some\n> > sort of CREATE STATISTICS. But it is really unclear how to:\n> > * store pg_statistic_ext rows from typanalyze\n> > * form path expressions for array elements (maybe add new jsonpath\n> > operator)\n> > * match various -> operator chains to stxexprs\n> > * jsonpath-like languages need simple API for searching by stxexprs\n> >\n>\n> Sure, you can do statistics on expressions, right? Of course, if that\n> expression produces JSONB value, that's not very useful at the moment.\n> Maybe we could have two typanalyze functions - one for regular analyze,\n> one for extended statistics?\n>\n> That being said, I'm not sure extended stats are a good match for this.\n> My feeling was we should collect these stats for all JSONB columns,\n> which is why I argued for putting that in pg_statistic.\n>\n> >\n> >\n> > Per-path statistics should only be collected for scalars. This can be\n> > enabled by flag JsonAnalyzeContext.scalarsOnly. But there are is a\n> > problem: computed MCVs and histograms will be broken and we will not be\n> > able to use them for queries like (jb > const) in general case. Also\n> > we will not be and use internally in scalarineqsel() and var_eq_const()\n> > (see jsonSelectivity()). Instead, we will have to implement own\n> > estimator functions for JSONB comparison operators that will correctly\n> > use our hacked MCVs and histograms (and maybe not all cases will be\n> > supported; for example, comparison to scalars only).\n> >\n>\n> Yeah, but maybe that's an acceptable trade-off? 
I mean, if we can\n> improve estimates for most clauses, and there's a some number of clauses\n> that are estimated just like without stats, that's still an improvement,\n> right?\n>\n> > It's possible to replace objects and arrays with empty ones when\n> > scalarsOnly is set to keep correct frequencies of non-scalars.\n> > But there is an old bug in JSONB comparison: empty arrays are placed\n> > before other values in the JSONB sort order, although they should go\n> > after all scalars. So we need also to distinguish empty and non-empty\n> > arrays here.\n> >\n>\n> Hmmm ...\n> >\n> >\n> > I tried to fix a major part of places marked as XXX and FIXME, the new\n> > version of the patches is attached. There are a lot of changes, you\n> > can see them in a step-by-step form in the corresponding branch\n> > jsonb_stats_20220122 in our GitHub repo [1].\n> >\n>\n> Thanks! I'll go through the changes soon.\n>\n>\n\nThanks, Nikita and Tomas for these patches.\n\nFor the last few days, I was trying to understand these patches, and based\non Tomas's suggestion, I was doing some performance tests.\n\nWith the attached .SQL file, I can see that analyze is taking more time\nwith these patches.\n\n*Setup: *\nautovacuum=off\nrest all are default settings.\n\nLoad the attached file with and without the patch to compare the time taken\nby analyze.\n\n*With json patches:*\npostgres=# analyze test ;\nANALYZE\nTime: *28464.062 ms (00:28.464)*\npostgres=#\npostgres=# SELECT pg_size_pretty(\npg_total_relation_size('pg_catalog.pg_statistic') );\n pg_size_pretty\n----------------\n 328 kB\n(1 row)\n-- \n\n*Without json patches:*\npostgres=# analyze test ;\nANALYZE\n*Time: 120.864* ms\npostgres=# SELECT pg_size_pretty(\npg_total_relation_size('pg_catalog.pg_statistic') );\n pg_size_pretty\n----------------\n 272 kB\n\nI haven't found the root cause of this but I feel that this time is due to\na loop over all the paths.\nIn my test data, there is a total of 951 different paths. 
While doing\nanalysis, first we check all the sample rows (30000) and we collect all the\ndifferent paths (here 951), and after that for every single path, we loop\nover all the sample rows again to collect stats for a particular path. I\nfeel that these loops might be taking time.\n\nI will run perf and will try to find out the root cause of this.\n\nApart from this performance issue, I haven't found any crashes or issues.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 25 Jan 2022 22:26:01 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Thu, 6 Jan 2022 at 14:56, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n>\n> Not sure I understand. I wasn't suggesting any user-defined filtering,\n> but something done by default, similarly to what we do for regular MCV\n> lists, based on frequency. We'd include frequent paths while excluding\n> rare ones.\n>\n> So no need for a user interface.\n\nNot sure but I think he was agreeing with you. That we should figure\nout the baseline behaviour and get it as useful as possible first then\nlater look at adding some way to customize it. I agree -- I don't\nthink the user interface will be hard technically but I think it will\nrequire some good ideas and there could be lots of bikeshedding. And a\nlot of users will never even use it anyways so it's important to get\nthe defaults as useful as possible.\n\n> Similarly for the non-scalar values - I don't think we can really keep\n> regular statistics on such values (for the same reason why it's not\n> enough for whole JSONB columns), so why to build/store that anyway.\n\nFor a default behaviour I wonder if it wouldn't be better to just\nflatten and extract all the scalars. So if there's no matching path\nthen at least we have some way to estimate how often a scalar appears\nanywhere in the json document.\n\nThat amounts to assuming the user knows the right path to find a given\nscalar and there isn't a lot of overlap between keys. So it would at\nleast do something useful if you have something like {gender: female,\nname: {first: nancy, last: reagan], state: california, country: usa}.\nIt might get things slightly wrong if you have some people named\n\"georgia\" or have names that can be first or last names.\n\nBut it would generally be doing something more or less useful as long\nas they look for \"usa\" in the country field and \"male\" in the gender\nfield. 
If they looked for \"male\" in $.name.first path it would give\nbad estimates but assuming they know their data structure they won't\nbe doing that.\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 25 Jan 2022 15:06:30 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On 1/25/22 17:56, Mahendra Singh Thalor wrote:\n >\n> ...\n> \n> For the last few days, I was trying to understand these patches, and \n> based on Tomas's suggestion, I was doing some performance tests.\n> \n> With the attached .SQL file, I can see that analyze is taking more time \n> with these patches.\n> \n> *Setup: *\n> autovacuum=off\n> rest all are default settings.\n> \n> Insert attached file with and without the patch to compare the time \n> taken by analyze.\n> \n> *With json patches:*\n> postgres=# analyze test ;\n> ANALYZE\n> Time: *28464.062 ms (00:28.464)*\n> postgres=#\n> postgres=# SELECT pg_size_pretty( \n> pg_total_relation_size('pg_catalog.pg_statistic') );\n> pg_size_pretty\n> ----------------\n> 328 kB\n> (1 row)\n> -- \n> \n> *Without json patches:*\n> postgres=# analyze test ;\n> ANALYZE\n> *Time: 120.864* ms\n> postgres=# SELECT pg_size_pretty( \n> pg_total_relation_size('pg_catalog.pg_statistic') );\n> pg_size_pretty\n> ----------------\n> 272 kB\n> \n> I haven't found the root cause of this but I feel that this time is due \n> to a loop of all the paths.\n> In my test data, there is a total of 951 different-2 paths. While doing \n> analysis, first we check all the sample rows(30000) and we collect all \n> the different-2 paths (here 951), and after that for every single path, \n> we loop over all the sample rows again to collect stats for a particular \n> path. I feel that these loops might be taking time.\n> \n> I will run perf and will try to find out the root cause of this.\n> \n\nThanks, I've been doing some performance tests too, and you're right it \ntakes quite a bit of time. I wanted to compare how the timing changes \nwith complexity of the JSON documents (nesting, number of keys, ...) 
so \nI wrote a simple python script to generate random JSON documents with \ndifferent parameters - see the attached json-generate.py script.\n\nIt's a bit crude, but it generates synthetic documents with a chosen \nnumber of levels, keys per level, distinct key values, etc. The \ngenerated documents are loaded directly into a \"json_test\" database, \ninto a table \"test_table\" with a single jsonb column called \"v\". \nTweaking this to connect to a different database, or just dump the \ngenerated documents to a file, should be trivial.\n\nThe attached bash script runs the data generator for a couple of \ncombinations, and then measures how long it takes to analyze the table, \nhow large the statistics are (in a rather crude way), etc.\n\nThe results look like this (the last two columns are analyze duration in \nmilliseconds, for master and with the patch):\n\n levels keys unique keys paths master patched\n ----------------------------------------------------------\n 1 1 1 2 153 122\n 1 1 1000 1001 134 1590\n 1 8 8 9 157 367\n 1 8 1000 1001 155 1838\n 1 64 64 65 189 2298\n 1 64 1000 1001 46 9322\n 2 1 1 3 237 197\n 2 1 1000 30580 152 46468\n 2 8 8 73 245 1780\n\nSo yeah, it's significantly slower - in most cases not as much as you \nobserved, but an order of magnitude slower than master. For size of the \nstatistics, it's similar:\n\n levels keys unique keys paths table size master patched\n ------------------------------------------------------------------\n 1 1 1 2 1843200 16360 24325\n 1 1 1000 1001 1843200 16819 1425400\n 1 8 8 9 4710400 28948 88837\n 1 8 1000 1001 6504448 42694 3915802\n 1 64 64 65 30154752 209713 689842\n 1 64 1000 1001 49086464 1093 7755214\n 2 1 1 3 2572288 24883 48727\n 2 1 1000 30580 2572288 11422 26396365\n 2 8 8 73 23068672 164785 862258\n\nThis is measured by dumping pg_statistic for the column, so in the \ndatabase it might be compressed etc. It's larger, but that's somewhat \nexpected because we simply store more detailed stats.
The size grows \nwith the number of paths extracted - which is expected, of course.\n\nIf you wondered why this doesn't show data for additional combinations \n(e.g. 2 levels 8 keys and 1000 distinct key values), that's the bad \nnews - that takes ages (multiple minutes) and then it gets killed by the OOM \nkiller because it eats gigabytes of memory.\n\nI agree the slowness is largely due to extracting all paths and then \nprocessing them one by one - which means we have to loop over the tuples \nover and over. In this case there's about 850k distinct paths extracted, \nso we do ~850k loops over 30k tuples. That's gotta take time.\n\nI don't know what exactly to do about this, but I already mentioned we \nmay need to pick a subset of paths to keep, similarly to how we pick \nitems for MCV. I mean, if we only saw a path once or twice, it's \nunlikely to be interesting enough to build stats for it. I haven't \ntried, but I'd bet most of the 850k paths might be ignored like this.\n\nThe other thing we might do is making the loops more efficient. For \nexample, we might track which documents contain each path (by a small \nbitmap or something), so that in the loop we can skip rows that don't \ncontain the path we're currently processing. Or something like that.\n\nOf course, this can't eliminate all the overhead - we're building more \nstats and that has a cost. In the \"common\" case of a stable \"fixed\" schema \nwith the same paths in all documents we'll still need to loop for \neach of them. So it's bound to be slower than master.\n\nWhich probably means it's a bad idea to do this for all JSONB columns, \nbecause in many cases the extra stats are not needed so the extra \nanalyze time would be a waste. So I guess we'll need some way to enable \nthis only for selected columns ...
I argued against the idea to \nimplement this as extended statistics in the first message, but it's a \nreasonably nice way to do such stuff (expression stats are a precedent).\n\n\n> Apart from this performance issue, I haven't found any crashes or issues.\n> \n\nWell, I haven't seen any crashes either, but as I mentioned for complex \ndocuments (2 levels, many distinct keys) the ANALYZE starts consuming a \nlot of memory and may get killed by OOM. For example if you generate \ndocuments like this\n\n ./json-generate.py 30000 2 8 1000 6 1000\n\nand then run ANALYZE, that'll take ages and it very quickly gets into a \nsituation like this (generated from gdb by calling MemoryContextStats on \nTopMemoryContext):\n\n\n-------------------------------------------------------------------------\nTopMemoryContext: 80776 total in 6 blocks; 13992 free (18 chunks); 66784 \nused\n ...\n TopPortalContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used\n PortalContext: 1024 total in 1 blocks; 488 free (0 chunks); 536 \nused: <unnamed>\n Analyze: 472726496 total in 150 blocks; 3725776 free (4 chunks); \n469000720 used\n Analyze Column: 921177696 total in 120 blocks; 5123256 free \n(238 chunks); 916054440 used\n Json Analyze Tmp Context: 8192 total in 1 blocks; 5720 free \n(1 chunks); 2472 used\n Json Analyze Pass Context: 8192 total in 1 blocks; 7928 \nfree (0 chunks); 264 used\n JSON analyze path table: 1639706040 total in 25084 blocks; \n1513640 free (33 chunks); 1638192400 used\n Vacuum: 8192 total in 1 blocks; 7448 free (0 chunks); 744 used\n...\nGrand total: 3035316184 bytes in 25542 blocks; 10971120 free (352 \nchunks); 3024345064 used\n-------------------------------------------------------------------------\n\n\nYes, that's backend 3GB of memory, out of which 1.6GB is in \"analyze \npath table\" context, 400MB in \"analyze\" and 900MB in \"analyze column\" \ncontexts. I mean, that seems a bit excessive. 
And it grows over time, so \nafter a while my laptop gives up and kills the backend.\n\nI'm not sure if it's a memory leak (which would be fixable), or it's due \nto keeping stats for all the extracted paths. I mean, in this particular \ncase we have 850k paths - even if stats are just 1kB per path, that's \n850MB. This requires more investigation.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 4 Feb 2022 03:47:48 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "\n\nOn 2/4/22 03:47, Tomas Vondra wrote:\n> ./json-generate.py 30000 2 8 1000 6 1000\n\nSorry, this should be (different order of parameters):\n\n./json-generate.py 30000 2 1000 8 6 1000\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Feb 2022 04:00:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On 04.02.2022 05:47, Tomas Vondra wrote:\n> On 1/25/22 17:56, Mahendra Singh Thalor wrote:\n> >\n>> ...\n>>\n>> For the last few days, I was trying to understand these patches, and \n>> based on Tomas's suggestion, I was doing some performance tests.\n>>\n>> With the attached .SQL file, I can see that analyze is taking more \n>> time with these patches.\n>>\n>> I haven't found the root cause of this but I feel that this time is \n>> due to a loop of all the paths.\n>> In my test data, there is a total of 951 different-2 paths. While \n>> doing analysis, first we check all the sample rows(30000) and we \n>> collect all the different-2 paths (here 951), and after that for \n>> every single path, we loop over all the sample rows again to collect \n>> stats for a particular path. I feel that these loops might be taking \n>> time.\n>>\n> Thanks, I've been doing some performance tests too, and you're right \n> it takes quite a bit of time.\n\nThat is absolutely not surprising, I have warned about poor performance\nin cases with a large number of paths.\n\n\n> I agree the slowness is largely due to extracting all paths and then \n> processing them one by one - which means we have to loop over the \n> tuples over and over. In this case there's about 850k distinct paths \n> extracted, so we do ~850k loops over 30k tuples. That's gotta take time.\n>\n> I don't know what exactly to do about this, but I already mentioned we \n> may need to pick a subset of paths to keep, similarly to how we pick \n> items for MCV. I mean, if we only saw a path once or twice, it's \n> unlikely to be interesting enough to build stats for it. I haven't \n> tried, but I'd bet most of the 850k paths might be ignored like this.\n>\n> The other thing we might do is making it the loops more efficient. 
For \n> example, we might track which documents contain each path (by a small \n> bitmap or something), so that in the loop we can skip rows that don't \n> contain the path we're currently processing. Or something like that.\n>\n>> Apart from this performance issue, I haven't found any crashes or \n>> issues.\n>>\n>\n> Well, I haven't seen any crashes either, but as I mentioned for \n> complex documents (2 levels, many distinct keys) the ANALYZE starts \n> consuming a lot of memory and may get killed by OOM. For example if \n> you generate documents like this\n>\n> ./json-generate.py 30000 2 8 1000 6 1000\n>\n> and then run ANALYZE, that'll take ages and it very quickly gets into \n> a situation like this (generated from gdb by calling \n> MemoryContextStats on TopMemoryContext):\n>\n> -------------------------------------------------------------------------\n> TopMemoryContext: 80776 total in 6 blocks; 13992 free (18 chunks); \n> 66784 used\n> ...\n> TopPortalContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 \n> used\n> PortalContext: 1024 total in 1 blocks; 488 free (0 chunks); 536 \n> used: <unnamed>\n> Analyze: 472726496 total in 150 blocks; 3725776 free (4 chunks); \n> 469000720 used\n> Analyze Column: 921177696 total in 120 blocks; 5123256 free \n> (238 chunks); 916054440 used\n> Json Analyze Tmp Context: 8192 total in 1 blocks; 5720 free \n> (1 chunks); 2472 used\n> Json Analyze Pass Context: 8192 total in 1 blocks; 7928 \n> free (0 chunks); 264 used\n> JSON analyze path table: 1639706040 total in 25084 blocks; \n> 1513640 free (33 chunks); 1638192400 used\n> Vacuum: 8192 total in 1 blocks; 7448 free (0 chunks); 744 used\n> ...\n> Grand total: 3035316184 bytes in 25542 blocks; 10971120 free (352 \n> chunks); 3024345064 used\n>
-------------------------------------------------------------------------\n>\n>\n> Yes, that's backend 3GB of memory, out of which 1.6GB is in \"analyze \n> path table\" context, 400MB in \"analyze\" and 900MB in \"analyze column\" \n> contexts. I mean, that seems a bit excessive. And it grows over time, \n> so after a while my laptop gives up and kills the backend.\n>\n> I'm not sure if it's a memory leak (which would be fixable), or it's \n> due to keeping stats for all the extracted paths. I mean, in this \n> particular case we have 850k paths - even if stats are just 1kB per \n> path, that's 850MB. This requires more investigation.\n\nThank you for the tests and investigation.\n\nI have tried to reduce memory consumption and speed up row scanning:\n\n1. \"JSON analyze path table\" context contained ~1KB JsonPathAnlStats\n structure per JSON path in the global hash table. I have moved\n JsonPathAnlStats to the stack of compute_json_stats(), and the hash\n table now consumes ~70 bytes per path.\n\n2. I have fixed copying of the resulting JSONB stats into the context,\n which reduced the size of the \"Analyze Column\" context.\n\n3. I have optimized memory consumption of the single-pass algorithm by\n storing only value lists in the non-temporary context. That helped to\n execute the \"2 64 64\" test case in 30 seconds. Single-pass is a\n bit faster in non-TOASTed cases, and much faster in TOASTed ones.\n But it consumes much more memory and still goes to OOM in the\n cases with more than ~100k paths.\n\n4. I have implemented per-path document lists/bitmaps, which really\n speed up the case \"2 8 1000\". The list is converted into a bitmap\n when it becomes larger than the bitmap would be.\n\n5.
Also I have fixed some bugs.\n\n\nAll these changes you can find in commit form in our GitHub repository\non the branch jsonb_stats_20220310 [1].\n\n\nUpdated results of the test:\n\nlevels keys uniq keys paths master multi-pass single-pass\n ms MB ms MB\n-------------------------------------------------------------------\n 1 1 1 2 153 122 10 82 14\n 1 1 1000 1001 134 105 11 78 38\n 1 8 8 9 157 384 19 328 32\n 1 8 1000 1001 155 454 23 402 72\n 1 64 64 65 129 2889 45 2386 155\n 1 64 1000 1001 158 3990 94 1447 177\n 2 1 1 3 237 147 10 91 16\n 2 1 1000 30577 152 264 32 394 234\n 2 8 8 72 245 1943 37 1692 139\n 2 8 1000 852333 152 9175 678 OOM\n 2 64 64 4161 1784 ~1 hour 53 30018 1750\n 2 64 1000 1001001 4715 ~4 hours 1600 OOM\n\nThe last two multi-pass results are too slow because the JSONBs become\nTOASTed. For measuring master in these tests, I disabled the\nWIDTH_THRESHOLD check which skipped TOASTed values > 1KB.\n\n\nNext, I am going to try to disable all-paths collection and implement\ncollection of most common paths (and/or hashed paths maybe).\n\n\n[1] https://github.com/postgrespro/postgres/tree/jsonb_stats_20220310\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 11 Mar 2022 01:58:54 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Fri, 4 Feb 2022 at 08:30, Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n>\n>\n>\n> On 2/4/22 03:47, Tomas Vondra wrote:\n> > ./json-generate.py 30000 2 8 1000 6 1000\n>\n> Sorry, this should be (different order of parameters):\n>\n> ./json-generate.py 30000 2 1000 8 6 1000\n>\n\nThanks, Tomas for this test case.\n\nHi Hackers,\n\nFor the last few days, I was doing testing on top of these JSON\noptimizer patches and was taking help from Tomas Vondra to understand the\npatches and testing results.\nThanks, Tomas for your feedback and suggestions.\n\nBelow is the summary:\n*Point 1)* analyze is taking too much time for large documents:\nFor large JSON documents, analyze took a very long time compared to the\ncurrent head. For reference, I am attaching a test file (./json-generate.py\n30000 2 1000 8 6 1000)\n\nHead: analyze test ; Time: 120.864 ms\nWith patch: analyze test ; Time: more than 2 hours\n\nanalyze is taking a very long time because with these patches, first we\niterate over all sample rows (in the above case 30000), and we store all the\npaths (here around 850k paths).\nIn another pass, we take 1 path at a time and collect stats for that\nparticular path by analyzing all the sample rows, and we continue this\nprocess for all 850k paths; in other words, we do 850k loops, and in each\nloop we extract values for a single path.\n\n*Point 2)* memory consumption increases rapidly for large documents:\nIn the above test case, there are a total of 851k paths, and to keep stats for\none path, we allocate 1120 bytes.\n\nTotal paths : 852689 ~ 852k\n\nMemory for 1 path to keep stats: 1120 ~ 1 KB\n\n(sizeof(JsonValueStats) = 1120 from “Analyze Column”)\n\nTotal memory for all paths: 852689 * 1120 = 955011680 ~ 955 MB\n\nExtra memory for each path will be more.
I mean, while analyzing each path,\nwe allocate some more memory based on frequency and others\n\nTo keep all entries(851k paths) in the hash, we use around 1GB memory for\nhash so this is also very large.\n\n*Point 3*) Review comment noticed by Tomas Vondra:\n\n+ oldcxt = MemoryContextSwitchTo(ctx->stats->anl_context);\n+ pstats->stats = jsonAnalyzeBuildPathStats(pstats);\n+ MemoryContextSwitchTo(oldcxt);\n\nAbove should be:\n+ oldcxt = MemoryContextSwitchTo(ctx->mcxt);\n+ pstats->stats = jsonAnalyzeBuildPathStats(pstats);\n+ MemoryContextSwitchTo(oldcxt);\n\n*Response from Tomas Vondra:*\nThe problem is \"anl_context\" is actually \"Analyze\", i.e. the context for\nthe whole ANALYZE command, for all the columns. But we only want to keep\nthose path stats while processing a particular column. At the end, after\nprocessing all paths from a column, we need to \"build\" the final stats in\nthe column, and this result needs to go into \"Analyze\" context. But all the\npartial results need to go into \"Analyze Column\" context.\n\n*Point 4)*\n\n+/*\n\n+ * jsonAnalyzeCollectPath\n\n+ * Extract a single path from JSON documents and collect its\nvalues.\n\n+ */\n\n+static void\n\n+jsonAnalyzeCollectPath(JsonAnalyzeContext *ctx, Jsonb *jb, void *param)\n\n+{\n\n+ JsonPathAnlStats *pstats = (JsonPathAnlStats *) param;\n\n+ JsonbValue jbvtmp;\n\n+ JsonbValue *jbv = JsonValueInitBinary(&jbvtmp, jb);\n\n+ JsonPathEntry *path;\n\n+ JsonPathEntry **entries;\n\n+ int i;\n\n+\n\n+ entries = palloc(sizeof(*entries) * pstats->depth);\n\n+\n\n+ /* Build entry array in direct order */\n\n+ for (path = &pstats->path, i = pstats->depth - 1;\n\n+ path->parent && i >= 0;\n\n+ path = path->parent, i--)\n\n+ entries[i] = path;\n\n+\n\n+ jsonAnalyzeCollectSubpath(ctx, pstats, jbv, entries, 0);\n\n+\n\n+ pfree(entries);\n\n----many times, we are trying to palloc with zero size and entries is\npointing to invalid memory (because pstats->depth=0) so I think, we should\nnot try to palloc with 
0??\n\n*Fix:*\n\n+ If (pstats->depth)\n\n + entries = palloc(sizeof(*entries) * pstats->depth);\n\n\n\n From these points, we can say that we should rethink our design to collect\nstats for all paths.\n\nWe can set limits(like MCV) for paths or we can give an explicit path to\ncollect stats for a particular path only or we can pass a subset of the\nJSON values.\n\nIn the above case, there are total 851k paths, but we can collect stats for\nonly 1000 paths that are most common so this way we can minimize time and\nmemory also and we might even keep at\nleast frequencies for the non-analyzed paths.\n\nNext, I will take the latest patches from Nikita's last email and I will do\nmore tests.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 11 Mar 2022 22:40:43 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "This patch has bitrotted, presumably after the other JSON patchset was\napplied. It looks like it's failing in the json header file so it may\nbe as simple as additional functions added on nearby lines.\n\nPlease rebase. Reminder, it's the last week of the commitfest so time\nis of the essence....\n\n\n",
"msg_date": "Fri, 1 Apr 2022 10:51:01 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "I noticed some typos.\n\ndiff --git a/src/backend/utils/adt/jsonb_selfuncs.c b/src/backend/utils/adt/jsonb_selfuncs.c\nindex f5520f88a1d..d98cd7020a1 100644\n--- a/src/backend/utils/adt/jsonb_selfuncs.c\n+++ b/src/backend/utils/adt/jsonb_selfuncs.c\n@@ -1342,7 +1342,7 @@ jsonSelectivityContains(JsonStats stats, Jsonb *jb)\n \t\t\t\t\tpath->stats = jsonStatsFindPath(stats, pathstr.data,\n \t\t\t\t\t\t\t\t\t\t\t\t\tpathstr.len);\n \n-\t\t\t\t/* Appeend path string entry for array elements, get stats. */\n+\t\t\t\t/* Append path string entry for array elements, get stats. */\n \t\t\t\tjsonPathAppendEntry(&pathstr, NULL);\n \t\t\t\tpstats = jsonStatsFindPath(stats, pathstr.data, pathstr.len);\n \t\t\t\tfreq = jsonPathStatsGetFreq(pstats, 0.0);\n@@ -1367,7 +1367,7 @@ jsonSelectivityContains(JsonStats stats, Jsonb *jb)\n \t\t\tcase WJB_END_ARRAY:\n \t\t\t{\n \t\t\t\tstruct Path *p = path;\n-\t\t\t\t/* Absoulte selectivity of the path with its all subpaths */\n+\t\t\t\t/* Absolute selectivity of the path with its all subpaths */\n \t\t\t\tSelectivity abs_sel = p->sel * p->freq;\n \n \t\t\t\t/* Pop last path entry */\ndiff --git a/src/backend/utils/adt/jsonb_typanalyze.c b/src/backend/utils/adt/jsonb_typanalyze.c\nindex 7882db23a87..9a759aadafb 100644\n--- a/src/backend/utils/adt/jsonb_typanalyze.c\n+++ b/src/backend/utils/adt/jsonb_typanalyze.c\n@@ -123,10 +123,9 @@ typedef struct JsonScalarStats\n /*\n * Statistics calculated for a set of values.\n *\n- *\n * XXX This seems rather complicated and needs simplification. We're not\n * really using all the various JsonScalarStats bits, there's a lot of\n- * duplication (e.g. each JsonScalarStats contains it's own array, which\n+ * duplication (e.g. 
each JsonScalarStats contains its own array, which\n * has a copy of data from the one in \"jsons\").\n */\n typedef struct JsonValueStats\n@@ -849,7 +848,7 @@ jsonAnalyzePathValues(JsonAnalyzeContext *ctx, JsonScalarStats *sstats,\n \tstats->stanullfrac = (float4)(1.0 - freq);\n \n \t/*\n-\t * Similarly, we need to correct the MCV frequencies, becuse those are\n+\t * Similarly, we need to correct the MCV frequencies, because those are\n \t * also calculated only from the non-null values. All we need to do is\n \t * simply multiply that with the non-NULL frequency.\n \t */\n@@ -1015,7 +1014,7 @@ jsonAnalyzeBuildPathStats(JsonPathAnlStats *pstats)\n \n \t/*\n \t * We keep array length stats here for queries like jsonpath '$.size() > 5'.\n-\t * Object lengths stats can be useful for other query lanuages.\n+\t * Object lengths stats can be useful for other query languages.\n \t */\n \tif (vstats->arrlens.values.count)\n \t\tjsonAnalyzeMakeScalarStats(&ps, \"array_length\", &vstats->arrlens.stats);\n@@ -1069,7 +1068,7 @@ jsonAnalyzeCalcPathFreq(JsonAnalyzeContext *ctx, JsonPathAnlStats *pstats,\n * We're done with accumulating values for this path, so calculate the\n * statistics for the various arrays.\n *\n- * XXX I wonder if we could introduce some simple heuristict on which\n+ * XXX I wonder if we could introduce some simple heuristic on which\n * paths to keep, similarly to what we do for MCV lists. For example a\n * path that occurred just once is not very interesting, so we could\n * decide to ignore it and not build the stats. 
Although that won't\n@@ -1414,7 +1413,7 @@ compute_json_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,\n \n \t/*\n \t * Collect and analyze JSON path values in single or multiple passes.\n-\t * Sigle-pass collection is faster but consumes much more memory than\n+\t * Single-pass collection is faster but consumes much more memory than\n \t * collecting and analyzing by the one path at pass.\n \t */\n \tif (ctx.single_pass)\n\n\n",
"msg_date": "Thu, 7 Apr 2022 19:31:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Fri, 1 Apr 2022 at 20:21, Greg Stark <stark@mit.edu> wrote:\n>\n> This patch has bitrotted, presumably after the other JSON patchset was\n> applied. It looks like it's failing in the json header file so it may\n> be as simple as additional functions added on nearby lines.\n>\n> Please rebase. Reminder, it's the last week of the commitfest so time\n> is of the essence....\n\nThanks, Greg for the report.\n\nHere, I am attaching rebased patches of the v05 series. These patches\nare rebased on commit 7dd3ee508432730d15c5.\n\n> I noticed some typos.\n\n> diff --git a/src/backend/utils/adt/jsonb_selfuncs.c b/src/backend/utils/adt/jsonb_selfuncs.c\n> index f5520f88a1d..d98cd7020a1 100644\n\nThanks, Justin for the review. We will fix these comments in the next version.\n\n\n> Next, I am going to try to disable all-paths collection and implement\n> collection of most common paths (and/or hashed paths maybe).\n\nThanks, Nikita for the v04 series of patches. I tested on top of\nyour patches and verified that the time taken by analyze is reduced for\nlarge complex json docs.\n\nIn v03 patches, it was more than 2 hours, and in v04 patches, it is only\n39 sec (time for Tomas's test case).\n\nI am waiting for your patches (disable all-paths collection and\nimplement collection of most common paths).\n\nJust for testing purposes, I am posting rebased patches here.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 10 May 2022 16:49:04 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On Fri, 11 Mar 2022 at 04:29, Nikita Glukhov <n.gluhov@postgrespro.ru>\nwrote:\n>\n>\n> On 04.02.2022 05:47, Tomas Vondra wrote:\n>\n> On 1/25/22 17:56, Mahendra Singh Thalor wrote:\n> >\n>\n> ...\n>\n> For the last few days, I was trying to understand these patches, and\nbased on Tomas's suggestion, I was doing some performance tests.\n>\n> With the attached .SQL file, I can see that analyze is taking more time\nwith these patches.\n>\n> I haven't found the root cause of this but I feel that this time is due\nto a loop of all the paths.\n> In my test data, there is a total of 951 different-2 paths. While doing\nanalysis, first we check all the sample rows(30000) and we collect all the\ndifferent-2 paths (here 951), and after that for every single path, we loop\nover all the sample rows again to collect stats for a particular path. I\nfeel that these loops might be taking time.\n>\n> Thanks, I've been doing some performance tests too, and you're right it\ntakes quite a bit of time.\n>\n>\n> That is absolutely not surprising, I have warned about poor performance\n> in cases with a large number of paths.\n>\n>\n> I agree the slowness is largely due to extracting all paths and then\nprocessing them one by one - which means we have to loop over the tuples\nover and over. In this case there's about 850k distinct paths extracted, so\nwe do ~850k loops over 30k tuples. That's gotta take time.\n>\n> I don't know what exactly to do about this, but I already mentioned we\nmay need to pick a subset of paths to keep, similarly to how we pick items\nfor MCV. I mean, if we only saw a path once or twice, it's unlikely to be\ninteresting enough to build stats for it. I haven't tried, but I'd bet most\nof the 850k paths might be ignored like this.\n>\n> The other thing we might do is making it the loops more efficient. 
For\nexample, we might track which documents contain each path (by a small\nbitmap or something), so that in the loop we can skip rows that don't\ncontain the path we're currently processing. Or something like that.\n>\n> Apart from this performance issue, I haven't found any crashes or issues.\n>\n>\n> Well, I haven't seen any crashes either, but as I mentioned for complex\ndocuments (2 levels, many distinct keys) the ANALYZE starts consuming a lot\nof memory and may get killed by OOM. For example if you generate documents\nlike this\n>\n> ./json-generate.py 30000 2 8 1000 6 1000\n>\n> and then run ANALYZE, that'll take ages and it very quickly gets into a\nsituation like this (generated from gdb by calling MemoryContextStats on\nTopMemoryContext):\n>\n> -------------------------------------------------------------------------\n> TopMemoryContext: 80776 total in 6 blocks; 13992 free (18 chunks); 66784\nused\n> ...\n> TopPortalContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used\n> PortalContext: 1024 total in 1 blocks; 488 free (0 chunks); 536 used:\n<unnamed>\n> Analyze: 472726496 total in 150 blocks; 3725776 free (4 chunks);\n469000720 used\n> Analyze Column: 921177696 total in 120 blocks; 5123256 free (238\nchunks); 916054440 used\n> Json Analyze Tmp Context: 8192 total in 1 blocks; 5720 free (1\nchunks); 2472 used\n> Json Analyze Pass Context: 8192 total in 1 blocks; 7928 free\n(0 chunks); 264 used\n> JSON analyze path table: 1639706040 total in 25084 blocks;\n1513640 free (33 chunks); 1638192400 used\n> Vacuum: 8192 total in 1 blocks; 7448 free (0 chunks); 744 used\n> ...\n> Grand total: 3035316184 bytes in 25542 blocks; 10971120 free (352\nchunks); 3024345064 used\n> -------------------------------------------------------------------------\n>\n>\n> Yes, that's backend 3GB of memory, 
out of which 1.6GB is in \"analyze path\ntable\" context, 400MB in \"analyze\" and 900MB in \"analyze column\" contexts.\nI mean, that seems a bit excessive. And it grows over time, so after a\nwhile my laptop gives up and kills the backend.\n>\n> I'm not sure if it's a memory leak (which would be fixable), or it's due\nto keeping stats for all the extracted paths. I mean, in this particular\ncase we have 850k paths - even if stats are just 1kB per path, that's\n850MB. This requires more investigation.\n>\n> Thank you for the tests and investigation.\n>\n> I have tried to reduce memory consumption and speed up row scanning:\n>\n> 1. \"JSON analyze path table\" context contained ~1KB JsonPathAnlStats\n> structure per JSON path in the global hash table. I have moved\n> JsonPathAnlStats to the stack of compute_json_stats(), and hash\n> table now consumes ~70 bytes per path.\n>\n> 2. I have fixed copying of resulting JSONB stats into context, which\n> reduced the size of \"Analyze Column\" context.\n>\n> 3. I have optimized consumption of single-pass algorithm by storing\n> only value lists in the non-temporary context. That helped to\n> execute \"2 64 64\" test case in 30 seconds. Single-pass is a\n> bit faster in non-TOASTed cases, and much faster in TOASTed.\n> But it consumes much more memory and still goes to OOM in the\n> cases with more than ~100k paths.\n>\n> 4. I have implemented per-path document lists/bitmaps, which really\n> speed up the case \"2 8 1000\". List is converted into bitmap when\n> it becomes larger than bitmap.\n>\n> 5. 
Also I have fixed some bugs.\n>\n>\n> All these changes you can find commit form in our GitHub repository\n> on the branch jsonb_stats_20220310 [1].\n>\n>\n> Updated results of the test:\n>\n> levels keys uniq keys paths master multi-pass single-pass\n> ms MB ms MB\n> -------------------------------------------------------------------\n> 1 1 1 2 153 122 10 82 14\n> 1 1 1000 1001 134 105 11 78 38\n> 1 8 8 9 157 384 19 328 32\n> 1 8 1000 1001 155 454 23 402 72\n> 1 64 64 65 129 2889 45 2386 155\n> 1 64 1000 1001 158 3990 94 1447 177\n> 2 1 1 3 237 147 10 91 16\n> 2 1 1000 30577 152 264 32 394 234\n> 2 8 8 72 245 1943 37 1692 139\n> 2 8 1000 852333 152 9175 678 OOM\n> 2 64 64 4161 1784 ~1 hour 53 30018 1750\n> 2 64 1000 1001001 4715 ~4 hours 1600 OOM\n>\n> The two last multi-pass results are too slow, because JSONBs becomes\n> TOASTed. For measuring master in these tests, I disabled\n> WIDTH_THRESHOLD check which skipped TOASTed values > 1KB.\n>\n>\n> Next, I am going to try to disable all-paths collection and implement\n> collection of most common paths (and/or hashed paths maybe).\n\nHi Nikita,\nI and Tomas discussed the design for disabling all-paths collection(collect\nstats for only some paths). 
Below are some thoughts/doubts/questions.\n\n*Point 1)* Please can you elaborate more that how are you going to\nimplement this(collect stats for only some paths).\n*Point 2) *As JSON stats are taking time so should we add an on/off switch\nto collect JSON stats?\n*Point 3)* We thought of one more design: we can give an explicit path to\ncollect stats for a particular path only or we can pass a subset of the\nJSON values but this may require a lot of code changes like syntax and all\nso we are thinking that it will be good if we can collect stats only for\nsome common paths(by limit or any other way)\n\nThoughts?\n\n--\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 17 May 2022 17:14:23 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
},
{
"msg_contents": "On 5/17/22 13:44, Mahendra Singh Thalor wrote:\n> ...\n>\n> Hi Nikita,\n> I and Tomas discussed the design for disabling all-paths\n> collection(collect stats for only some paths). Below are some\n> thoughts/doubts/questions.\n> \n> *Point 1)* Please can you elaborate more that how are you going to\n> implement this(collect stats for only some paths).\n\nI think Nikita mentioned he plans to build stats only for the most\ncommon paths, which seems generally straightforward:\n\n1) first pass over the documents, collect distinct paths and track how\nmany times we saw each one\n\n2) in the second pass extract stats only for the most common paths (e.g.\nthe top 100 most common ones, or whatever the statistics target says)\n\nI guess we might store at least the frequency for uncommon paths,\nwhich seems somewhat useful for selectivity estimation.\n\n\nI wonder if we might further optimize this for less common paths. AFAICS\none of the reasons why we process the paths one by one (in the second\npass) is to limit memory consumption. By processing a single path, we\nonly need to accumulate values for that path.\n\nBut if we know the path is uncommon, we know there'll be few values. For\nexample the path may be only in 100 documents, not the whole sample. So\nmaybe we might process multiple paths at once (which would mean we don't\nneed to detoast the JSON documents that often, etc.).\n\nOTOH that may be pointless, because if the paths are uncommon, chances\nare the subsets of documents will be different, in which case it's\nprobably cheaper to just process the paths one by one.\n\n\n> *Point 2) *As JSON stats are taking time so should we add an on/off\n> switch to collect JSON stats?\n\nIMHO we should think about doing that. I think it's not really possible\nto eliminate (significant) regressions for all corner cases, and in many\ncases people don't even need these statistics (e.g. 
when just storing and\nretrieving JSON docs, without querying contents of the docs).\n\nI don't know how exactly to enable/disable this - it very much depends\non how we store the stats. If we store that in pg_statistic, then ALTER\nTABLE ... ALTER COLUMN seems like the right way to enable/disable these\npath stats. We might also have a new \"json\" stats and do this through\nCREATE STATISTICS. Or something else, not sure.\n\n\n> *Point 3)* We thought of one more design: we can give an explicit path\n> to collect stats for a particular path only or we can pass a subset of\n> the JSON values but this may require a lot of code changes like syntax\n> and all so we are thinking that it will be good if we can collect stats\n> only for some common paths(by limit or any other way)\n> \n\nI'm not sure I understand what this is saying, particularly the part\nabout subset of JSON values. Can you elaborate?\n\nI can imagine specifying a list of interesting paths, and we'd only\ncollect stats for the matching subset of the JSON documents. So if you\nhave huge JSON documents with complex schema, but you only query a very\nlimited subset of paths, we could restrict ANALYZE to this subset.\n\nIn fact, that's what the 'selective analyze' commit [1] in Nikita's\noriginal patch set does in principle. We'd probably need to improve this\nin some ways (e.g. to allow defining the column filter not just in\nANALYZE itself). I left it out of this patch to keep the patch as simple\nas possible.\n\nBut why/how exactly would we limit the \"JSON values\"? Can you give some\nexample demonstrating that in practice?\n\nregards\n\n\n\n[1]\nhttps://github.com/postgrespro/postgres/commit/7ab7397450df153e5a8563c978728cb731a0df33\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 May 2022 18:06:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Collecting statistics about contents of JSONB columns"
}
] |
[
{
"msg_contents": "Hi,\n\nI note it's not yet possible to INSERT INTO an Updatable View using the ON CONFLICT feature.\n\nOne imaginable pattern is when a user wants to refactor by renaming a table,\nbut for some reason cannot refactor some specific application and wants to\nallow it to continue to use the table's old name.\n\nOne approach to do so would be to create an Updatable View (aka Simple view) [1],\ngiven the same name as the table's old name.\n\nThis is ugly and not something I would do myself, but I've read about how others describe this pattern, not in the context of ON CONFLICT, but in general, when refactoring.\n\nAre there reasons why it would not be possible to develop support for INSERT INTO ... ON CONFLICT for Updatable Views?\n\nNot saying it is desired, just trying to better understand the limits of Updatable Views.\n\n/Joel",
"msg_date": "Sat, 01 Jan 2022 13:52:43 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Updatable Views and INSERT INTO ... ON CONFLICT"
},
{
"msg_contents": "Joel Jacobson:\n> I note it's not yet possible to INSERT INTO an Updatable View using the \n> ON CONFLICT feature.\n\nTo be clear, it seems to be supported for AUTO-updatable views and for \nviews with manually created RULES, but not for views with INSTEAD OF \ntriggers.\n\n> Not saying it is desired, just trying to better understand the limits of \n> Updatable Views.\n\nIt's certainly desired. I tried to use it in the past.\n\n > Are there reasons why it would not be possible to develop support for INSERT\n > INTO ... ON CONFLICT for Updatable Views?\n\nI think the main challenge is that when a view has an INSTEAD OF insert \ntrigger, the INSERT statement that is in the trigger function is not the \nsame statement that is called on the view. Auto-updatable views rewrite \nthe original query, so they can support this.\n\nFor this to work, the outer INSERT would have to \"catch\" the error that \nthe trigger function throws on a conflict - and then the outer INSERT \nwould have to execute an UPDATE on the view instead.\n\nI don't know about the internals of INSERT .. ON CONFLICT, but I'd \nassume the conflict handling + update happens much later than calling \nthe INSTEAD OF trigger, so that makes it impossible to do it right now.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Fri, 2 Sep 2022 12:04:59 +0200",
"msg_from": "walther@technowledgy.de",
"msg_from_op": false,
"msg_subject": "Re: Updatable Views and INSERT INTO ... ON CONFLICT"
}
] |
[
{
"msg_contents": "Yesterday I pushed what I thought was a quick fix for bug #17350 [1].\nIn short, if we have an index that stores both \"x\" and \"f(x)\",\nwhere the \"x\" column can be retrieved in index-only scans but \"f(x)\"\ncannot, it's possible for the planner to generate an IOS plan that\nnonetheless tries to read the f(x) index column. The bug report\nconcerns the case where f(x) is needed in the IOS plan node's targetlist,\nand I did fix that --- but I now realize that we still have a problem\nwith respect to rechecks of the plan node's indexquals. Here's\nan example:\n\nregression=# create extension pg_trgm;\nCREATE EXTENSION\nregression=# create table t(a text);\nCREATE TABLE\nregression=# create index on t using gist(lower(a) gist_trgm_ops) include (a);\nCREATE INDEX\nregression=# insert into t values('zed');\nINSERT 0 1\nregression=# insert into t values('z');\nINSERT 0 1\nregression=# select * from t where lower(a) like 'z';\n a \n---\n z\n(1 row)\n\nThat's the correct answer, but we're using a bitmap scan to get it.\nIf we force an IOS plan:\n\nregression=# set enable_bitmapscan = 0;\nSET\nregression=# explain select * from t where lower(a) like 'z';\n QUERY PLAN \n------------------------------------------------------------------------------\n Index Only Scan using t_lower_a_idx on t (cost=0.14..28.27 rows=7 width=32)\n Index Cond: ((lower(a)) ~~ 'z'::text)\n(2 rows)\n\nregression=# select * from t where lower(a) like 'z';\n a \n---\n(0 rows)\n\nThat's from a build a few days old. 
As of HEAD it's even worse;\nnot only do we fail to return the rows we should, but EXPLAIN says\n\nregression=# explain select * from t where lower(a) like 'z';\n QUERY PLAN \n------------------------------------------------------------------------------\n Index Only Scan using t_lower_a_idx on t (cost=0.14..28.27 rows=7 width=32)\n Index Cond: ((NULL::text) ~~ 'z'::text)\n(2 rows)\n\nAt least this is showing us what's happening: the index recheck condition\nsees a NULL for the value of lower(a). That's because it's trying to\nget the value of lower(a) out of the index, instead of recomputing it\nfrom the value of a.\n\nAFAICS this has been broken since 9.5 allowed indexes to contain\nboth retrievable and non-retrievable columns, so it's a bit surprising\nthat it hasn't been reported before. I suppose that the case was\nharder to hit before we introduced INCLUDE columns. The relevant\ncode actually claims that it's impossible:\n\n /*\n * If the index was lossy, we have to recheck the index quals.\n * (Currently, this can never happen, but we should support the case\n * for possible future use, eg with GiST indexes.)\n */\n if (scandesc->xs_recheck)\n {\n econtext->ecxt_scantuple = slot;\n if (!ExecQualAndReset(node->indexqual, econtext))\n {\n /* Fails recheck, so drop it and loop back for another */\n InstrCountFiltered2(node, 1);\n continue;\n }\n }\n\nThat comment may have been true when written (it dates to 9.2) but\nit's demonstrably not true now; the test case I just gave traverses\nthis code, and gets the wrong answer.\n\nI don't think there is any way to fix this that doesn't involve\nadding another field to structs IndexOnlyScan and IndexOnlyScanState.\nWe need a version of the indexqual that references the retrievable\nindex column x and computes f(x) from that, but the indexqual that's\npassed to the index AM still has to reference the f(x) index column.\nThat's annoying from an API stability standpoint. 
In the back\nbranches, we can add the new fields at the end to minimize ABI\nbreakage, but we will still be breaking any extension code that thinks\nit knows how to generate an IndexOnlyScan node directly. (But maybe\nthere isn't any. The Path representation doesn't need to change, so\ntypical planner extensions should be OK.)\n\nUnless somebody's got a better idea, I'll push forward with making\nthis happen.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/17350-b5bdcf476e5badbb%40postgresql.org\n\n\n",
"msg_date": "Sun, 02 Jan 2022 14:14:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Index-only scans vs. partially-retrievable indexes"
},
{
"msg_contents": "I wrote:\n> Unless somebody's got a better idea, I'll push forward with making\n> this happen.\n\nHere's a proposed patch for that. I ended up reverting the code\nchanges of 4ace45677 in favor of an alternative I'd considered\npreviously, which is to mark the indextlist elements as resjunk\nif they're non-retrievable, and then make setrefs.c deal with\nnot relying on those elements. This avoids the EXPLAIN breakage\nI showed, since now we still have the indextlist elements needed\nto interpret the indexqual and indexorderby expressions.\n\n0001 is what I propose to back-patch (modulo putting the new\nIndexOnlyScan.recheckqual field at the end, in the back branches).\n\nIn addition, it seems to me that we can simplify check_index_only()\nby reverting b5febc1d1's changes, because we've now been forced to\nput in a non-half-baked solution for the problem it addressed.\nThat's 0002 attached. I'd be inclined not to back-patch that though.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 02 Jan 2022 18:22:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Index-only scans vs. partially-retrievable indexes"
},
{
"msg_contents": "\n\n\n> regression=# explain select * from t where lower(a) like 'z';\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> Index Only Scan using t_lower_a_idx on t (cost=0.14..28.27 rows=7 width=32)\n> Index Cond: ((lower(a)) ~~ 'z'::text)\n> (2 rows)\n> \n\nI've tried to toy with the patch and remembered one related caveat.\nIf we have an index for both returnable and nonreturnable attributes, IOS will not be chosen:\n\npostgres=# create index on t using gist(a gist_trgm_ops) include (a);\npostgres=# explain select * from t where a like 'z';\n QUERY PLAN \n---------------------------------------------------------------------\n Index Scan using t_a_a1_idx on t (cost=0.12..8.14 rows=1 width=32)\n Index Cond: (a ~~ 'z'::text)\n(2 rows)\n\nBut with index\ncreate index on t using gist(lower(a) gist_trgm_ops) include (a);\nI observe IOS for\nselect * from t where lower(a) like 'z';\n\nSo lossiness of opclass kind of \"defeats\" returnable attribute. But lossiness of expression does not. I don't feel confident in surrounding code to say is it a bug or just a lack of feature. But maybe we would like to have equal behavior in both cases...\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 03 Jan 2022 16:13:10 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scans vs. partially-retrievable indexes"
},
{
"msg_contents": "Andrey Borodin <x4mmm@yandex-team.ru> writes:\n> I've tried to toy with the patch and remembered one related caveat.\n> If we have an index for both returnable and nonreturnable attributes, IOS will not be chosen:\n\n> postgres=# create index on t using gist(a gist_trgm_ops) include (a);\n> postgres=# explain select * from t where a like 'z';\n> QUERY PLAN \n> ---------------------------------------------------------------------\n> Index Scan using t_a_a1_idx on t (cost=0.12..8.14 rows=1 width=32)\n> Index Cond: (a ~~ 'z'::text)\n> (2 rows)\n\nThis case is improved by 0002, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 09:57:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Index-only scans vs. partially-retrievable indexes"
},
{
"msg_contents": "\n\n> Andrey Borodin <x4mmm@yandex-team.ru> writes:\n> \n>> I've tried to toy with the patch and remembered one related caveat.\n>> If we have an index for both returnable and nonreturnable attributes, IOS will not be chosen:\n> \n>> postgres=# create index on t using gist(a gist_trgm_ops) include (a);\n>> postgres=# explain select * from t where a like 'z';\n>> QUERY PLAN\n>> ---------------------------------------------------------------------\n>> Index Scan using t_a_a1_idx on t (cost=0.12..8.14 rows=1 width=32)\n>> Index Cond: (a ~~ 'z'::text)\n>> (2 rows)\n> \n> This case is improved by 0002, no?\n> \n\nUhmm, yes, you are right. Works as expected with the second patch.\nI tried the first patch against this before writing. But did not expect much from a revert...\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 03 Jan 2022 21:36:53 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scans vs. partially-retrievable indexes"
}
] |
[
{
"msg_contents": "The attached patch corrects a very minor typographical inconsistency\nwhen date_part is invoked with invalid units on time/timetz data vs\ntimestamp/timestamptz/interval data.\n\n(If stuff like this is too minor to bother with, let me know and I'll\nhold off in the future... but since this was pointed out to me I can't\nunsee it.)\n\nNikhil",
"msg_date": "Sun, 2 Jan 2022 23:47:32 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "On Sun, Jan 02, 2022 at 11:47:32PM -0500, Nikhil Benesch wrote:\n> The attached patch corrects a very minor typographical inconsistency\n> when date_part is invoked with invalid units on time/timetz data vs\n> timestamp/timestamptz/interval data.\n\nHmm, you are right that this is inconsistent, but I don't think that\nwhat you are doing is completely correct either. First, from what I\ncan see from the core code, we don't apply quotes to types in error\nmessages. So your patch is going in the right direction.\n\nHowever, there is a specific routine called format_type_be() aimed at\nformatting type names for error strings. If you switch to that, my\nguess is that this makes the error messages of time/timetz and\ntimestamp/timestamptz/interval more consistent, while reducing the\neffort of translation because we'd finish with the same base error\nstring, as of \"%s units \\\"%s\\\" not recognized\".\n\nIf we rework this part, we could even rework this error message more.\nOne suggestion I have would be \"units of type %s not recognized\", for\nexample.\n\n> (If stuff like this is too minor to bother with, let me know and I'll\n> hold off in the future... but since this was pointed out to me I can't\n> unsee it.)\n\nThis usually comes down to a case-by-case analysis. Now, in this\ncase, your suggestion looks right to me.\n--\nMichael",
"msg_date": "Mon, 3 Jan 2022 17:17:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "On Mon, Jan 3, 2022 at 3:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n> However, there is a specific routine called format_type_be() aimed at\n> formatting type names for error strings. If you switch to that, my\n> guess is that this makes the error messages of time/timetz and\n> timestamp/timestamptz/interval more consistent, while reducing the\n> effort of translation because we'd finish with the same base error\n> string, as of \"%s units \\\"%s\\\" not recognized\".\n\nI could find only a tiny smattering of examples where format_type_be()\nis invoked with a constant OID. In almost all error messages where the\ntype is statically known, it seems the type name is hardcoded into the\nerror message rather than generated via format_type_be(). For example,\nall of the \"TYPE out of range\" errors.\n\nI'm happy to rework the patch to use format_type_be(), but wanted to\ndouble check first that it is the preferred approach in this\nsituation.\n\n\n",
"msg_date": "Mon, 3 Jan 2022 09:53:52 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> I could find only a tiny smattering of examples where format_type_be()\n> is invoked with a constant OID. In almost all error messages where the\n> type is statically known, it seems the type name is hardcoded into the\n> error message rather than generated via format_type_be(). For example,\n> all of the \"TYPE out of range\" errors.\n\nYeah, but we've been slowly converting to that method to reduce the number\nof distinct translatable strings for error messages. If doing so here\nwould cut the labor for translators, I'm for it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 10:11:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "On Mon, Jan 3, 2022 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, but we've been slowly converting to that method to reduce the number\n> of distinct translatable strings for error messages. If doing so here\n> would cut the labor for translators, I'm for it.\n\nGreat! I'll update the patch. Thanks for confirming.\n\n\n",
"msg_date": "Mon, 3 Jan 2022 10:12:59 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> - errmsg(\"\\\"time with time zone\\\" units \\\"%s\\\" not recognized\",\n> + errmsg(\"time with time zone units \\\"%s\\\" not recognized\",\n> [ etc ]\n\nBTW, if you want to get rid of the quotes, I think that something\nelse has to be done to set off the type name from the rest. In\nthis instance someone might think that we're complaining about a\n\"time zone unit\", whatever that is. I suggest swapping it around to\n\n\tunits \\\"%s\\\" not recognized for type %s\n\nAlso, personally, I'd write unit not units, but that's\nmore debatable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 10:20:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "On Mon, Jan 3, 2022 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, if you want to get rid of the quotes, I think that something\n> else has to be done to set off the type name from the rest. In\n> this instance someone might think that we're complaining about a\n> \"time zone unit\", whatever that is. I suggest swapping it around to\n>\n> units \\\"%s\\\" not recognized for type %s\n>\n> Also, personally, I'd write unit not units, but that's\n> more debatable.\n\nYour suggestion sounds good to me. I'll update the patch with that.\n\nNot that it changes anything, I think, but the wording ambiguity you\nmention is present today in the timestamptz error message:\n\n benesch=> select extract('nope' from now());\n ERROR: timestamp with time zone units \"nope\" not recognized\n\n\n",
"msg_date": "Mon, 3 Jan 2022 10:26:08 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "Updated patch attached.\n\nOn Mon, Jan 3, 2022 at 10:26 AM Nikhil Benesch <nikhil.benesch@gmail.com> wrote:\n>\n> On Mon, Jan 3, 2022 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > BTW, if you want to get rid of the quotes, I think that something\n> > else has to be done to set off the type name from the rest. In\n> > this instance someone might think that we're complaining about a\n> > \"time zone unit\", whatever that is. I suggest swapping it around to\n> >\n> > units \\\"%s\\\" not recognized for type %s\n> >\n> > Also, personally, I'd write unit not units, but that's\n> > more debatable.\n>\n> Your suggestion sounds good to me. I'll update the patch with that.\n>\n> Not that it changes anything, I think, but the wording ambiguity you\n> mention is present today in the timestamptz error message:\n>\n> benesch=> select extract('nope' from now());\n> ERROR: timestamp with time zone units \"nope\" not recognized",
"msg_date": "Mon, 3 Jan 2022 10:57:47 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> Updated patch attached.\n\nHmm, I think you went a bit too far here. The existing code intends\nto draw a distinction between \"not recognized\" (i.e., \"we don't know\nwhat that word was you used\") and \"not supported\" (i.e., \"we know\nthat word, but it doesn't seem to make sense in context, or we\nhaven't got round to the case yet\"). You've mashed those into the\nsame error text, which I don't think we should do, especially\nsince we're using distinct ERRCODE values for them.\n\nAttached v3 restores that distinction, and makes some other small\ntweaks. (I found that there were actually a couple of spots in\ndate.c that got it backwards, so admittedly this is a fine point\nthat not everybody is on board with. But let's make it consistent\nnow.)\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 03 Jan 2022 13:14:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "On 2022-Jan-03, Tom Lane wrote:\n\n> Attached v3 restores that distinction, and makes some other small\n> tweaks. (I found that there were actually a couple of spots in\n> date.c that got it backwards, so admittedly this is a fine point\n> that not everybody is on board with. But let's make it consistent\n> now.)\n\nLGTM.\n\n> @@ -2202,9 +2204,9 @@ time_part_common(PG_FUNCTION_ARGS, bool retnumeric)\n> \t\t\tcase DTK_ISOYEAR:\n> \t\t\tdefault:\n> \t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> -\t\t\t\t\t\t errmsg(\"\\\"time\\\" units \\\"%s\\\" not recognized\",\n> -\t\t\t\t\t\t\t\tlowunits)));\n> +\t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t\t\t errmsg(\"unit \\\"%s\\\" not supported for type %s\",\n> +\t\t\t\t\t\t\t\tlowunits, format_type_be(TIMEOID))));\n\nI agree that these changes are an improvement.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 3 Jan 2022 15:35:33 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "On Mon, Jan 3, 2022 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm, I think you went a bit too far here. The existing code intends\n> to draw a distinction between \"not recognized\" (i.e., \"we don't know\n> what that word was you used\") and \"not supported\" (i.e., \"we know\n> that word, but it doesn't seem to make sense in context, or we\n> haven't got round to the case yet\"). You've mashed those into the\n> same error text, which I don't think we should do, especially\n> since we're using distinct ERRCODE values for them.\n\nOops. I noticed that \"inconsistency\" between\nERRCODE_FEATURE_NOT_SUPPORTED and ERRCODE_INVALID_PARAMETER_VALUE and\nthen promptly blazed past it. Thanks for catching that.\n\n> Attached v3 restores that distinction, and makes some other small\n> tweaks. (I found that there were actually a couple of spots in\n> date.c that got it backwards, so admittedly this is a fine point\n> that not everybody is on board with. But let's make it consistent\n> now.)\n\nLGTM too, for whatever that's worth.\n\n\n",
"msg_date": "Mon, 3 Jan 2022 13:54:58 -0500",
"msg_from": "Nikhil Benesch <nikhil.benesch@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
},
{
"msg_contents": "Nikhil Benesch <nikhil.benesch@gmail.com> writes:\n> On Mon, Jan 3, 2022 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Attached v3 restores that distinction, and makes some other small\n>> tweaks. (I found that there were actually a couple of spots in\n>> date.c that got it backwards, so admittedly this is a fine point\n>> that not everybody is on board with. But let's make it consistent\n>> now.)\n\n> LGTM too, for whatever that's worth.\n\nOK, pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 14:05:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove inconsistent quotes from date_part error"
}
]
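The translator-effort argument made in the thread above is easy to quantify: with the type name baked into the message template, every type contributes its own translatable string, while the format_type_be() style leaves a single shared template. A minimal illustration in Python (not the C code under discussion; the message texts mirror the strings from the thread):

```python
types = ["time", "time with time zone", "timestamp",
         "timestamp with time zone", "interval"]

# Old style: the type name is part of the template, so every type
# produces a distinct entry in the translation catalog.
hardcoded = {'"%s" units "%%s" not recognized' % t for t in types}

# format_type_be() style: the type name is substituted at runtime,
# so one template covers all five types.
shared = 'unit "%s" not recognized for type %s'

print(len(hardcoded))   # five distinct strings to translate, versus one
print(shared % ("nope", "timestamp with time zone"))
```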
[
{
"msg_contents": "Hi Hackers,\n\nI noticed that pg_receivewal fails to stream when the partial file to write\nis not fully initialized and fails with the error message something like\nbelow. This requires an extra step of deleting the partial file that is not\nfully initialized before starting the pg_receivewal. Attaching a simple\npatch that creates a temp file, fully initialize it and rename the file to\nthe desired wal segment name.\n\n\"error: write-ahead log file \"000000010000000000000003.partial\" has 8396800\nbytes, should be 0 or 16777216\"\n\nThanks,\nSatya",
"msg_date": "Sun, 2 Jan 2022 21:27:43 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_receivewal fail to streams when the partial file to write is not\n fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Sun, Jan 02, 2022 at 09:27:43PM -0800, SATYANARAYANA NARLAPURAM wrote:\n> I noticed that pg_receivewal fails to stream when the partial file to write\n> is not fully initialized and fails with the error message something like\n> below. This requires an extra step of deleting the partial file that is not\n> fully initialized before starting the pg_receivewal. Attaching a simple\n> patch that creates a temp file, fully initialize it and rename the file to\n> the desired wal segment name.\n\nAre you referring to the pre-padding when creating a new partial\nsegment, aka when we write chunks of XLOG_BLCKSZ full of zeros until\nthe file is fully created? What kind of error did you see? I guess\nthat a write() with ENOSPC would be more likely, but you got a\ndifferent problem? I don't disagree with improving such cases, but we\nshould not do things so as there is a risk of leaving behind an\ninfinite set of segments in case of repeated errors, and partial\nsegments are already a kind of temporary file.\n\n- if (dir_data->sync)\n+ if (shouldcreatetempfile)\n+ {\n+ if (durable_rename(tmpsuffixpath, targetpath) != 0)\n+ {\n+ close(fd);\n+ unlink(tmpsuffixpath);\n+ return NULL;\n+ }\n+ }\n\ndurable_rename() does a set of fsync()'s, but --no-sync should not\nflush any data.\n--\nMichael",
"msg_date": "Mon, 3 Jan 2022 16:56:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "Thanks Michael!\n\nOn Sun, Jan 2, 2022 at 11:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Jan 02, 2022 at 09:27:43PM -0800, SATYANARAYANA NARLAPURAM wrote:\n> > I noticed that pg_receivewal fails to stream when the partial file to\n> write\n> > is not fully initialized and fails with the error message something like\n> > below. This requires an extra step of deleting the partial file that is\n> not\n> > fully initialized before starting the pg_receivewal. Attaching a simple\n> > patch that creates a temp file, fully initialize it and rename the file\n> to\n> > the desired wal segment name.\n>\n> Are you referring to the pre-padding when creating a new partial\n> segment, aka when we write chunks of XLOG_BLCKSZ full of zeros until\n> the file is fully created? What kind of error did you see? I guess\n> that a write() with ENOSPC would be more likely, but you got a\n> different problem?\n\n\nI see two cases, 1/ when no space is left on the device and 2/ when the\nprocess is taken down forcibly (a VM/container crash)\n\n\n> I don't disagree with improving such cases, but we\n> should not do things so as there is a risk of leaving behind an\n> infinite set of segments in case of repeated errors\n\n\nDo you see a problem with the proposed patch that leaves the files behind,\nat least in my testing I don't see any files left behind?\n\n\n> , and partial\n> segments are already a kind of temporary file.\n>\n\nif the .partial file exists with not zero-padded up to the wal segment size\n(WalSegSz), then open_walfile fails with the below error. I have two\noptions here, 1/ to continue padding the existing partial file and let it\nzero up to WalSegSz , 2/create a temp file as I did in the patch. 
I thought\nthe latter is safe because it can handle corrupt cases as described below.\nThoughts?\n\n* When streaming to files, if an existing file exists we verify that it's\n* either empty (just created), or a complete WalSegSz segment (in which\n* case it has been created and padded). Anything else indicates a corrupt\n* file. Compressed files have no need for padding, so just ignore this\n* case.\n\n\n>\n> - if (dir_data->sync)\n> + if (shouldcreatetempfile)\n> + {\n> + if (durable_rename(tmpsuffixpath, targetpath) != 0)\n> + {\n> + close(fd);\n> + unlink(tmpsuffixpath);\n> + return NULL;\n> + }\n> + }\n>\n> durable_rename() does a set of fsync()'s, but --no-sync should not\n> flush any data.\n>\nI need to look into this further, without this I am seeing random file\nclose and rename failures and disconnecting the stream. Also it appears we\nare calling durable_rename when we are closing the file (dir_close) even\nwithout --no-sync. Should we worry about the padding case?\n\n> --\n> Michael\n>",
"msg_date": "Mon, 3 Jan 2022 12:10:32 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "Hello\r\n\r\npg_receivewal creates this .partial WAL file during WAL streaming and it is already treating this file as a temporary file. It will fill this .partial file with zeroes up to 16777216 by default before streaming real WAL data on it. \r\n\r\nIf your .partial file is only 8396800 bytes, then this could mean that pg_receivewal is terminated abruptly while it is appending zeroes or your system runs out of disk space. Do you have any error message? \r\n\r\nIf this is case, the uninitialized .partial file should still be all zeroes, so it should be ok to delete it and have pg_receivewal to recreate a new .partial file.\r\n\r\nAlso, in your patch, you are using pad_to_size argument in function dir_open_for_write to determine if it needs to create a temp file, but I see that this function is always given a pad_to_size = 16777216 , and never 0. Am I missing something?\r\n\r\nCary Huang\r\n===========\r\nHighGo Software Canada",
"msg_date": "Thu, 31 Mar 2022 22:01:04 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not\n fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Tue, Jan 4, 2022 at 1:40 AM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> On Sun, Jan 2, 2022 at 11:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Sun, Jan 02, 2022 at 09:27:43PM -0800, SATYANARAYANA NARLAPURAM wrote:\n>> > I noticed that pg_receivewal fails to stream when the partial file to write\n>> > is not fully initialized and fails with the error message something like\n>> > below. This requires an extra step of deleting the partial file that is not\n>> > fully initialized before starting the pg_receivewal. Attaching a simple\n>> > patch that creates a temp file, fully initialize it and rename the file to\n>> > the desired wal segment name.\n>>\n>> Are you referring to the pre-padding when creating a new partial\n>> segment, aka when we write chunks of XLOG_BLCKSZ full of zeros until\n>> the file is fully created? What kind of error did you see? I guess\n>> that a write() with ENOSPC would be more likely, but you got a\n>> different problem?\n>\n> I see two cases, 1/ when no space is left on the device and 2/ when the process is taken down forcibly (a VM/container crash)\n\nYeah, these cases can occur leaving uninitialized .partial files which\ncan be a problem for both pg_receivewal and pg_basebackup that uses\ndir_open_for_write (CreateWalDirectoryMethod).\n\n>> I don't disagree with improving such cases, but we\n>> should not do things so as there is a risk of leaving behind an\n>> infinite set of segments in case of repeated errors\n>\n> Do you see a problem with the proposed patch that leaves the files behind, at least in my testing I don't see any files left behind?\n\nWith the proposed patch, it doesn't leave the unpadded .partial files.\nAlso, the v2 patch always removes a leftover .partial.temp file before\nit creates a new one.\n\n>> , and partial\n>> segments are already a kind of temporary file.\n>\n>\n> if the .partial file exists with not zero-padded up to the wal segment size (WalSegSz), 
then open_walfile fails with the below error. I have two options here, 1/ to continue padding the existing partial file and let it zero up to WalSegSz , 2/create a temp file as I did in the patch. I thought the latter is safe because it can handle corrupt cases as described below. Thoughts?\n\nThe temp file approach looks clean.\n\n>> - if (dir_data->sync)\n>> + if (shouldcreatetempfile)\n>> + {\n>> + if (durable_rename(tmpsuffixpath, targetpath) != 0)\n>> + {\n>> + close(fd);\n>> + unlink(tmpsuffixpath);\n>> + return NULL;\n>> + }\n>> + }\n>>\n>> durable_rename() does a set of fsync()'s, but --no-sync should not\n>> flush any data.\n\nFixed this in v2.\n\nAnother thing I found while working on this is the way the\ndir_open_for_write does padding - it doesn't retry in case of partial\nwrites of blocks of size XLOG_BLCKSZ, unlike what core postgres does\nwith pg_pwritev_with_retry in XLogFileInitInternal. Maybe\ndir_open_for_write should use the same approach. Thoughts?\n\nI fixed couple of issues with v1 (which was using static local\nvariables in dir_open_for_write, not using durable_rename/rename for\ndir_data->sync true/false cases, not considering compression method\nnone while setting shouldcreatetempfile true), improved comments and\nadded commit message.\n\nPlease review the v2 further.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 9 Apr 2022 18:03:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "At Sat, 9 Apr 2022 18:03:01 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Jan 4, 2022 at 1:40 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> >\n> > On Sun, Jan 2, 2022 at 11:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Are you referring to the pre-padding when creating a new partial\n> >> segment, aka when we write chunks of XLOG_BLCKSZ full of zeros until\n> >> the file is fully created? What kind of error did you see? I guess\n> >> that a write() with ENOSPC would be more likely, but you got a\n> >> different problem?\n> >\n> > I see two cases, 1/ when no space is left on the device and 2/ when the process is taken down forcibly (a VM/container crash)\n> \n> Yeah, these cases can occur leaving uninitialized .partial files which\n> can be a problem for both pg_receivewal and pg_basebackup that uses\n> dir_open_for_write (CreateWalDirectoryMethod).\n> \n> >> I don't disagree with improving such cases, but we\n> >> should not do things so as there is a risk of leaving behind an\n> >> infinite set of segments in case of repeated errors\n> >\n> > Do you see a problem with the proposed patch that leaves the files behind, at least in my testing I don't see any files left behind?\n\nI guess that Michael took this patch as creating a temp file name such\nlike \"tmp.n\" very time finding an incomplete file.\n\n> With the proposed patch, it doesn't leave the unpadded .partial files.\n> Also, the v2 patch always removes a leftover .partial.temp file before\n> it creates a new one.\n>\n> >> , and partial\n> >> segments are already a kind of temporary file.\n\nI'm not sure this is true for pg_receivewal case. The .partial file\nis not a temporary file but the current working file for the tool.\n\n> > if the .partial file exists with not zero-padded up to the wal segment size (WalSegSz), then open_walfile fails with the below error. 
I have two options here, 1/ to continue padding the existing partial file and let it zero up to WalSegSz , 2/create a temp file as I did in the patch. I thought the latter is safe because it can handle corrupt cases as described below. Thoughts?\n\nI think this patch shouldn't involve pg_basebackup. I agree to Cary\nthat deleting the erroring file should be fine.\n\nWe already \"skipping\" (complete = non-.partial) WAL files with a wrong\nsize in FindStreamingStart so we can error-out with suggesting a hint.\n\n$ pg_receivewal -D xlog -p 5432 -h /tmp\npg_receivewal: error: segment file \"0000000100000022000000F5.partial\" has incorrect size 8404992\nhint: You can continue after removing the file.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 11 Apr 2022 15:13:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write\n is not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "Sorry for the terrible typos..\n\nAt Sat, 9 Apr 2022 18:03:01 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Jan 4, 2022 at 1:40 AM SATYANARAYANA NARLAPURAM\n> <satyanarlapuram@gmail.com> wrote:\n> >\n> > On Sun, Jan 2, 2022 at 11:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Are you referring to the pre-padding when creating a new partial\n> >> segment, aka when we write chunks of XLOG_BLCKSZ full of zeros until\n> >> the file is fully created? What kind of error did you see? I guess\n> >> that a write() with ENOSPC would be more likely, but you got a\n> >> different problem?\n> >\n> > I see two cases, 1/ when no space is left on the device and 2/ when the process is taken down forcibly (a VM/container crash)\n> \n> Yeah, these cases can occur leaving uninitialized .partial files which\n> can be a problem for both pg_receivewal and pg_basebackup that uses\n> dir_open_for_write (CreateWalDirectoryMethod).\n> \n> >> I don't disagree with improving such cases, but we\n> >> should not do things so as there is a risk of leaving behind an\n> >> infinite set of segments in case of repeated errors\n> >\n> > Do you see a problem with the proposed patch that leaves the files behind, at least in my testing I don't see any files left behind?\n\nI guess that Michael took this patch as creating a temp file with a\nname such like \"tmp.n\" every time finding an incomplete file.\n\n> With the proposed patch, it doesn't leave the unpadded .partial files.\n> Also, the v2 patch always removes a leftover .partial.temp file before\n> it creates a new one.\n>\n> >> , and partial\n> >> segments are already a kind of temporary file.\n\nI'm not sure this is true for pg_receivewal case. The .partial file\nis not a temporary file but the current working file for the tool.\n\n> > if the .partial file exists with not zero-padded up to the wal segment size (WalSegSz), then open_walfile fails with the below error. 
I have two options here, 1/ to continue padding the existing partial file and let it zero up to WalSegSz , 2/create a temp file as I did in the patch. I thought the latter is safe because it can handle corrupt cases as described below. Thoughts?\n\nI think this patch shouldn't involve pg_basebackup. I agree to Cary\nthat deleting the erroring file should be fine.\n\nWe already \"skipping\" (complete = non-.partial) WAL files with a wrong\nsize in FindStreamingStart so we can error-out with suggesting a hint.\n\n$ pg_receivewal -D xlog -p 5432 -h /tmp\npg_receivewal: error: segment file \"0000000100000022000000F5.partial\" has incorrect size 8404992\nhint: You can continue after removing the file.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 11 Apr 2022 15:16:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write\n is not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Sun, Apr 10, 2022 at 11:16 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> Sorry for the terrible typos..\n>\n> At Sat, 9 Apr 2022 18:03:01 +0530, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Tue, Jan 4, 2022 at 1:40 AM SATYANARAYANA NARLAPURAM\n> > <satyanarlapuram@gmail.com> wrote:\n> > >\n> > > On Sun, Jan 2, 2022 at 11:56 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> > >> Are you referring to the pre-padding when creating a new partial\n> > >> segment, aka when we write chunks of XLOG_BLCKSZ full of zeros until\n> > >> the file is fully created? What kind of error did you see? I guess\n> > >> that a write() with ENOSPC would be more likely, but you got a\n> > >> different problem?\n> > >\n> > > I see two cases, 1/ when no space is left on the device and 2/ when\n> the process is taken down forcibly (a VM/container crash)\n> >\n> > Yeah, these cases can occur leaving uninitialized .partial files which\n> > can be a problem for both pg_receivewal and pg_basebackup that uses\n> > dir_open_for_write (CreateWalDirectoryMethod).\n> >\n> > >> I don't disagree with improving such cases, but we\n> > >> should not do things so as there is a risk of leaving behind an\n> > >> infinite set of segments in case of repeated errors\n> > >\n> > > Do you see a problem with the proposed patch that leaves the files\n> behind, at least in my testing I don't see any files left behind?\n>\n> I guess that Michael took this patch as creating a temp file with a\n> name such like \"tmp.n\" every time finding an incomplete file.\n>\n> > With the proposed patch, it doesn't leave the unpadded .partial files.\n> > Also, the v2 patch always removes a leftover .partial.temp file before\n> > it creates a new one.\n> >\n> > >> , and partial\n> > >> segments are already a kind of temporary file.\n>\n> I'm not sure this is true for pg_receivewal case. 
The .partial file\n> is not a temporary file but the current working file for the tool.\n>\n\nCorrect. The idea is to make sure the file is fully allocated before\ntreating it as a current file.\n\n\n>\n> > > if the .partial file exists with not zero-padded up to the wal segment\n> size (WalSegSz), then open_walfile fails with the below error. I have two\n> options here, 1/ to continue padding the existing partial file and let it\n> zero up to WalSegSz , 2/create a temp file as I did in the patch. I thought\n> the latter is safe because it can handle corrupt cases as described below.\n> Thoughts?\n>\n> I think this patch shouldn't involve pg_basebackup. I agree to Cary\n> that deleting the erroring file should be fine.\n>\n> We already \"skipping\" (complete = non-.partial) WAL files with a wrong\n> size in FindStreamingStart so we can error-out with suggesting a hint.\n>\n> $ pg_receivewal -D xlog -p 5432 -h /tmp\n> pg_receivewal: error: segment file \"0000000100000022000000F5.partial\" has\n> incorrect size 8404992\n> hint: You can continue after removing the file.\n>\n\nThe idea here is to make pg_receivewal self sufficient and reduce\nhuman/third party tool interaction. Ideal case is running pg_Receivewal as\na service for wal archiving.\n\n\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Mon, 11 Apr 2022 13:21:23 -0700",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Mon, Apr 11, 2022 at 01:21:23PM -0700, SATYANARAYANA NARLAPURAM wrote:\n> Correct. The idea is to make sure the file is fully allocated before\n> treating it as a current file.\n\nAnother problem comes to compression, as the pre-padding cannot be\napplied in this case because zlib and lz4 don't know the size of the\ncompressed segment until we reach 16MB of data received, but you can\nget a good estimate as long as you know how much space is left on a\ndevice. FWIW, I had to deal with this problem a couple of years ago\nfor the integration of an archiver in a certain thing, and the\nrequirement was that the WAL archiver service had to be a maximum\nself-aware and automated, which is what you wish to achieve here. It\nbasically came down to measure how much WAL one wishes to keep in the\nWAL archives for the sizing of the disk partition storing the\narchives (aka how much back in time you want to go), in combination to\nhow much WAL would get produced on a rather-linear production load.\n\nAnother thing is that you never really want to stress too much your\npartition so as it gets filled at 100%, as there could be opened files\nand the kind that consume more space than the actual amount of data\nstored, but you'd usually want to keep up to 70~90% of it. At the\nend, we finished with:\n- A dependency to statvfs(), which is not portable on WIN32, to find\nout how much space was left on the partition (f_blocks*f_bsize for\nthe total size and f_bfree*f_bsize for the free size I guess, by\nlooking at its man page).\n- Control the amount of WAL to keep around using a percentage rate of\nmaximum disk space allowed (or just a percentage of free disk space),\nwith pg_receivewal doing a cleanup of up to WalSegSz worth of data for\nthe oldest segments. The segments of the oldest TLIs are removed\nfirst. 
For any compression algorithm, unlinking this much\ndata is not necessary but that's fine as you usually just remove one\ncompressed or uncompressed segment per cycle, as it does not matter\nwith dozens of gigs worth of WAL archives, or even more.\n--\nMichael",
"msg_date": "Tue, 12 Apr 2022 09:03:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 5:34 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Apr 11, 2022 at 01:21:23PM -0700, SATYANARAYANA NARLAPURAM wrote:\n> > Correct. The idea is to make sure the file is fully allocated before\n> > treating it as a current file.\n>\n> Another problem comes to compression, as the pre-padding cannot be\n> applied in this case because zlib and lz4 don't know the size of the\n> compressed segment until we reach 16MB of data received, but you can\n> get a good estimate as long as you know how much space is left on a\n> device. FWIW, I had to deal with this problem a couple of years ago\n> for the integration of an archiver in a certain thing, and the\n> requirement was that the WAL archiver service had to be a maximum\n> self-aware and automated, which is what you wish to achieve here. It\n> basically came down to measure how much WAL one wishes to keep in the\n> WAL archives for the sizing of the disk partition storing the\n> archives (aka how much back in time you want to go), in combination to\n> how much WAL would get produced on a rather-linear production load.\n>\n> Another thing is that you never really want to stress too much your\n> partition so as it gets filled at 100%, as there could be opened files\n> and the kind that consume more space than the actual amount of data\n> stored, but you'd usually want to keep up to 70~90% of it. At the\n> end, we finished with:\n> - A dependency to statvfs(), which is not portable on WIN32, to find\n> out how much space was left on the partition (f_blocks*f_bsize for\n> the total size and f_bfree*f_bsize for the free size I guess, by\n> looking at its man page).\n> - Control the amount of WAL to keep around using a percentage rate of\n> maximum disk space allowed (or just a percentage of free disk space),\n> with pg_receivewal doing a cleanup of up to WalSegSz worth of data for\n> the oldest segments. The segments of the oldest TLIs are removed\n> first. 
For any compression algorithm, unlinking this much amount of\n> data is not necessary but that's fine as you usually just remove one\n> compressed or uncompressed segment per cycle, at it does not matter\n> with dozens of gigs worth of WAL archives, or even more.\n\nThanks for sharing this. Will the write operations (in\ndir_open_for_write) for PG_COMPRESSION_GZIP and PG_COMPRESSION_LZ4\ntake longer compared to prepadding for non-compressed files?\n\nI would like to know if there's any problem with the proposed fix.\n\nI think we need the same fix proposed in this thread for\ntar_open_for_write as well because it also does prepadding for\nnon-compressed files.\n\nIn general, I agree that making pg_receivewal self-aware and\nautomating things by itself is really a great idea. This will avoid\nmanual effort. For instance, pg_receivewal can try with different\nstreaming start LSNs (restart_lsn of its slot or server insert LSN)\nnot just the latest LSN found in its target directory which will\nparticularly be helpful in case its source server has changed the\ntimeline or for some reason unable to serve the WAL.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 14:50:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 02:50:17PM +0530, Bharath Rupireddy wrote:\n> Thanks for sharing this. Will the write operations (in\n> dir_open_for_write) for PG_COMPRESSION_GZIP and PG_COMPRESSION_LZ4\n> take longer compared to prepadding for non-compressed files?\n\nThe first write operations for gzip and lz4 consists in writing their\nrespective headers in the resulting file, which should be a couple of\ndozen bytes, at most. So that's surely going to be cheaper than the\npre-padding done for a full segment with the flush induced after\nwriting WalSegSz bytes worth of zeros.\n\n> I would like to know if there's any problem with the proposed fix.\n\nThere is nothing done for the case of compressed segments, meaning\nthat you would see the same problem when being in the middle of\nwriting a segment compressed with gzip or lz4 in the middle of writing\nit, and that's what you want to avoid here. So the important part is\nnot the pre-padding, it is to make sure that there is enough space\nreserved for the handling of a full segment before beginning the work\non it.\n--\nMichael",
"msg_date": "Tue, 19 Apr 2022 14:12:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 10:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > I would like to know if there's any problem with the proposed fix.\n>\n> There is nothing done for the case of compressed segments, meaning\n> that you would see the same problem when being in the middle of\n> writing a segment compressed with gzip or lz4 in the middle of writing\n> it, and that's what you want to avoid here. So the important part is\n> not the pre-padding, it is to make sure that there is enough space\n> reserved for the handling of a full segment before beginning the work\n> on it.\n\nRight. We find enough disk space and go to write and suddenly the\nwrite operations fail for some reason or the VM crashes because of a\nreason other than disk space. I think the foolproof solution is to\nfigure out the available disk space before prepadding or compressing\nand also use the\nwrite-first-to-temp-file-and-then-rename-it-to-original-file as\nproposed in the earlier patches in this thread.\n\nHaving said that, is there a portable way that we can find out the\ndisk space available? I read your response upthread that statvfs isn't\nportable to WIN32 platforms. So, we just say that part of the fix we\nproposed here (checking disk space before prepadding or compressing)\nisn't supported on WIN32 and we just do the temp file thing for WIN32\nalone?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 22 Apr 2022 19:17:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Fri, Apr 22, 2022 at 07:17:37PM +0530, Bharath Rupireddy wrote:\n> Right. We find enough disk space and go to write and suddenly the\n> write operations fail for some reason or the VM crashes because of a\n> reason other than disk space. I think the foolproof solution is to\n> figure out the available disk space before prepadding or compressing\n> and also use the\n> write-first-to-temp-file-and-then-rename-it-to-original-file as\n> proposed in the earlier patches in this thread.\n\nYes, what would count here is only the amount of free space in a\npartition. The total amount of space available becomes handy once you\nbegin introducing things like percentage-based quota policies for the\ndisk when archiving. The free amount of space could be used to define\na policy based on the maximum number of bytes you need to leave\naround, as well, but this is not perfect science as this depends of\nwhat FSes decide to do underneath. There are a couple of designs\npossible here. When I had to deal with my upthread case I have chosen\none as I had no need to worry only about Linux, it does not mean that\nthis is the best choice that would fit with the long-term community\npicture. This comes down to how much pg_receivewal should handle\nautomatically, and how it should handle it.\n\n> Having said that, is there a portable way that we can find out the\n> disk space available? I read your response upthread that statvfs isn't\n> portable to WIN32 platforms. So, we just say that part of the fix we\n> proposed here (checking disk space before prepadding or compressing)\n> isn't supported on WIN32 and we just do the temp file thing for WIN32\n> alone?\n\nSomething like GetDiskFreeSpaceA() would do the trick on WIN32.\nhttps://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getdiskfreespacea\n\nWhen it comes to compression, creating a temporary file would only\nlead to its deletion, mostly, and extra I/O is never free. 
Anyway,\nwhy would you need the extra logic of the temporary file at all?\nThat's basically what the .partial file is as pg_receivewal begins\nstreaming at the beginning of a segment, to the partial file, each\ntime it sees fit to restart a streaming cycle.\n--\nMichael",
"msg_date": "Mon, 25 Apr 2022 10:08:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 6:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 22, 2022 at 07:17:37PM +0530, Bharath Rupireddy wrote:\n> > Right. We find enough disk space and go to write and suddenly the\n> > write operations fail for some reason or the VM crashes because of a\n> > reason other than disk space. I think the foolproof solution is to\n> > figure out the available disk space before prepadding or compressing\n> > and also use the\n> > write-first-to-temp-file-and-then-rename-it-to-original-file as\n> > proposed in the earlier patches in this thread.\n>\n> Yes, what would count here is only the amount of free space in a\n> partition. The total amount of space available becomes handy once you\n> begin introducing things like percentage-based quota policies for the\n> disk when archiving. The free amount of space could be used to define\n> a policy based on the maximum number of bytes you need to leave\n> around, as well, but this is not perfect science as this depends of\n> what FSes decide to do underneath. There are a couple of designs\n> possible here. When I had to deal with my upthread case I have chosen\n> one as I had no need to worry only about Linux, it does not mean that\n> this is the best choice that would fit with the long-term community\n> picture. This comes down to how much pg_receivewal should handle\n> automatically, and how it should handle it.\n\nThanks. I'm not sure why we are just thinking of crashes due to\nout-of-disk space. Figuring out free disk space before writing a huge\nfile (say a WAL file) is a problem in itself to the core postgres as\nwell, not just pg_receivewal.\n\nI think we are off-track a bit here. 
Let me illustrate what the\nwhole problem is and the idea:\n\nIf the node/VM on which pg_receivewal runs, goes down/crashes or fails\nduring write operation while padding the target WAL file (the .partial\nfile) with zeros, the unfilled target WAL file (let me call this file\na partially padded .partial file) will be left over and subsequent\nreads/writes to it will fail with \"write-ahead log file \\\"%s\\\"\nhas %zd bytes, should be 0 or %d\" error which requires manual\nintervention to remove it. In a service, this manual intervention is\nwhat we would like to avoid. Let's not much bother right now for\ncompressed file writes (for now at least) as they don't have a\nprepadding phase.\n\nThe proposed solution is to make the prepadding atomic - prepad the\nXXXX.partial file as XXXX.partial.tmp name and after the prepadding\nrename (durably if sync option is chosen for pg_receivewal) to\nXXXX.partial. Before prepadding XXXX.partial.tmp, delete the\nXXXX.partial.tmp if it exists.\n\nThe above problem isn't unique to pg_receivewal alone, pg_basebackup\ntoo uses CreateWalDirectoryMethod and dir_open_for_write via\nReceiveXlogStream.\n\nIMHO, pg_receivewal checking for available disk space before writing\nany file should better be discussed separately?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Apr 2022 17:17:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 5:17 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 25, 2022 at 6:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Apr 22, 2022 at 07:17:37PM +0530, Bharath Rupireddy wrote:\n> > > Right. We find enough disk space and go to write and suddenly the\n> > > write operations fail for some reason or the VM crashes because of a\n> > > reason other than disk space. I think the foolproof solution is to\n> > > figure out the available disk space before prepadding or compressing\n> > > and also use the\n> > > write-first-to-temp-file-and-then-rename-it-to-original-file as\n> > > proposed in the earlier patches in this thread.\n> >\n> > Yes, what would count here is only the amount of free space in a\n> > partition. The total amount of space available becomes handy once you\n> > begin introducing things like percentage-based quota policies for the\n> > disk when archiving. The free amount of space could be used to define\n> > a policy based on the maximum number of bytes you need to leave\n> > around, as well, but this is not perfect science as this depends of\n> > what FSes decide to do underneath. There are a couple of designs\n> > possible here. When I had to deal with my upthread case I have chosen\n> > one as I had no need to worry only about Linux, it does not mean that\n> > this is the best choice that would fit with the long-term community\n> > picture. This comes down to how much pg_receivewal should handle\n> > automatically, and how it should handle it.\n>\n> Thanks. I'm not sure why we are just thinking of crashes due to\n> out-of-disk space. Figuring out free disk space before writing a huge\n> file (say a WAL file) is a problem in itself to the core postgres as\n> well, not just pg_receivewal.\n>\n> I think we are off-track a bit here. 
Let me illustrate what's the\n> whole problem is and the idea:\n>\n> If the node/VM on which pg_receivewal runs, goes down/crashes or fails\n> during write operation while padding the target WAL file (the .partial\n> file) with zeros, the unfilled target WAL file ((let me call this file\n> a partially padded .partial file) will be left over and subsequent\n> reads/writes to that it will fail with \"write-ahead log file \\\"%s\\\"\n> has %zd bytes, should be 0 or %d\" error which requires manual\n> intervention to remove it. In a service, this manual intervention is\n> what we would like to avoid. Let's not much bother right now for\n> compressed file writes (for now at least) as they don't have a\n> prepadding phase.\n>\n> The proposed solution is to make the prepadding atomic - prepad the\n> XXXX.partial file as XXXX.partial.tmp name and after the prepadding\n> rename (durably if sync option is chosen for pg_receivewal) to\n> XXXX.partial. Before prepadding XXXX.partial.tmp, delete the\n> XXXX.partial.tmp if it exists.\n>\n> The above problem isn't unique to pg_receivewal alone, pg_basebackup\n> too uses CreateWalDirectoryMethod and dir_open_for_write via\n> ReceiveXlogStream.\n>\n> IMHO, pg_receivewal checking for available disk space before writing\n> any file should better be discussed separately?\n\nHere's the v3 patch after rebasing.\n\nI just would like to reiterate the issue the patch is trying to solve:\nAt times (when no space is left on the device or when the process is\ntaken down forcibly (VM/container crash)), there can be leftover\nuninitialized .partial files (note that padding i.e. filling 16MB WAL\nfiles with all zeros is done in non-compression cases) due to which\npg_receivewal fails to come up after the crash. To address this, the\nproposed patch makes the padding 16MB WAL files atomic (first write a\n.temp file, pad it and durably rename it to the original .partial\nfile, ready to be filled with received WAL records). 
This approach is\nsimilar to what core postgres achieves atomicity while creating new\nWAL file (see how XLogFileInitInternal() creates xlogtemp.%d file\nfirst and then how InstallXLogFileSegment() durably renames it to\noriginal WAL file).\n\nThoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 25 Jul 2022 16:42:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "Hi Bharath,\n\nIdea to atomically allocate WAL file by creating tmp file and renaming it\nis nice.\nI have one question though:\nHow is partially temp file created will be cleaned if the VM crashes or out\nof disk space cases? Does it endup creating multiple files for every VM\ncrash/disk space during process of pg_receivewal?\n\nThoughts?\n\nThanks,\nMahendrakar.\n\nOn Mon, 25 Jul 2022 at 16:42, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Apr 25, 2022 at 5:17 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Apr 25, 2022 at 6:38 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> > >\n> > > On Fri, Apr 22, 2022 at 07:17:37PM +0530, Bharath Rupireddy wrote:\n> > > > Right. We find enough disk space and go to write and suddenly the\n> > > > write operations fail for some reason or the VM crashes because of a\n> > > > reason other than disk space. I think the foolproof solution is to\n> > > > figure out the available disk space before prepadding or compressing\n> > > > and also use the\n> > > > write-first-to-temp-file-and-then-rename-it-to-original-file as\n> > > > proposed in the earlier patches in this thread.\n> > >\n> > > Yes, what would count here is only the amount of free space in a\n> > > partition. The total amount of space available becomes handy once you\n> > > begin introducing things like percentage-based quota policies for the\n> > > disk when archiving. The free amount of space could be used to define\n> > > a policy based on the maximum number of bytes you need to leave\n> > > around, as well, but this is not perfect science as this depends of\n> > > what FSes decide to do underneath. There are a couple of designs\n> > > possible here. When I had to deal with my upthread case I have chosen\n> > > one as I had no need to worry only about Linux, it does not mean that\n> > > this is the best choice that would fit with the long-term community\n> > > picture. 
This comes down to how much pg_receivewal should handle\n> > > automatically, and how it should handle it.\n> >\n> > Thanks. I'm not sure why we are just thinking of crashes due to\n> > out-of-disk space. Figuring out free disk space before writing a huge\n> > file (say a WAL file) is a problem in itself to the core postgres as\n> > well, not just pg_receivewal.\n> >\n> > I think we are off-track a bit here. Let me illustrate what's the\n> > whole problem is and the idea:\n> >\n> > If the node/VM on which pg_receivewal runs, goes down/crashes or fails\n> > during write operation while padding the target WAL file (the .partial\n> > file) with zeros, the unfilled target WAL file ((let me call this file\n> > a partially padded .partial file) will be left over and subsequent\n> > reads/writes to that it will fail with \"write-ahead log file \\\"%s\\\"\n> > has %zd bytes, should be 0 or %d\" error which requires manual\n> > intervention to remove it. In a service, this manual intervention is\n> > what we would like to avoid. Let's not much bother right now for\n> > compressed file writes (for now at least) as they don't have a\n> > prepadding phase.\n> >\n> > The proposed solution is to make the prepadding atomic - prepad the\n> > XXXX.partial file as XXXX.partial.tmp name and after the prepadding\n> > rename (durably if sync option is chosen for pg_receivewal) to\n> > XXXX.partial. 
Before prepadding XXXX.partial.tmp, delete the\n> > XXXX.partial.tmp if it exists.\n> >\n> > The above problem isn't unique to pg_receivewal alone, pg_basebackup\n> > too uses CreateWalDirectoryMethod and dir_open_for_write via\n> > ReceiveXlogStream.\n> >\n> > IMHO, pg_receivewal checking for available disk space before writing\n> > any file should better be discussed separately?\n>\n> Here's the v3 patch after rebasing.\n>\n> I just would like to reiterate the issue the patch is trying to solve:\n> At times (when no space is left on the device or when the process is\n> taken down forcibly (VM/container crash)), there can be leftover\n> uninitialized .partial files (note that padding i.e. filling 16MB WAL\n> files with all zeros is done in non-compression cases) due to which\n> pg_receivewal fails to come up after the crash. To address this, the\n> proposed patch makes the padding 16MB WAL files atomic (first write a\n> .temp file, pad it and durably rename it to the original .partial\n> file, ready to be filled with received WAL records). This approach is\n> similar to what core postgres achieves atomicity while creating new\n> WAL file (see how XLogFileInitInternal() creates xlogtemp.%d file\n> first and then how InstallXLogFileSegment() durably renames it to\n> original WAL file).\n>\n> Thoughts?\n>\n> Regards,\n> Bharath Rupireddy.\n>",
"msg_date": "Sun, 31 Jul 2022 20:35:58 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Sun, Jul 31, 2022 at 8:36 PM mahendrakar s\n<mahendrakarforpg@gmail.com> wrote:\n>\n>> On Mon, 25 Jul 2022 at 16:42, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Here's the v3 patch after rebasing.\n>>\n>> I just would like to reiterate the issue the patch is trying to solve:\n>> At times (when no space is left on the device or when the process is\n>> taken down forcibly (VM/container crash)), there can be leftover\n>> uninitialized .partial files (note that padding i.e. filling 16MB WAL\n>> files with all zeros is done in non-compression cases) due to which\n>> pg_receivewal fails to come up after the crash. To address this, the\n>> proposed patch makes the padding 16MB WAL files atomic (first write a\n>> .temp file, pad it and durably rename it to the original .partial\n>> file, ready to be filled with received WAL records). This approach is\n>> similar to what core postgres achieves atomicity while creating new\n>> WAL file (see how XLogFileInitInternal() creates xlogtemp.%d file\n>> first and then how InstallXLogFileSegment() durably renames it to\n>> original WAL file).\n>>\n>> Thoughts?\n>\n> Hi Bharath,\n>\n> Idea to atomically allocate WAL file by creating tmp file and renaming it is nice.\n\nThanks for reviewing it.\n\n> I have one question though:\n> How is partially temp file created will be cleaned if the VM crashes or out of disk space cases? Does it endup creating multiple files for every VM crash/disk space during process of pg_receivewal?\n>\n> Thoughts?\n\nIt is handled in the patch, see [1].\n\nAttaching v4 patch herewith which now uses the temporary file suffix\n'.tmp' as opposed to v3 patch '.temp'. This is just to be in sync with\nother atomic file write codes in the core - autoprewarm,\npg_stat_statement, slot, basebacup, replorigin, snapbuild, receivelog\nand so on.\n\nPlease review the v4 patch.\n\n[1]\n+ /*\n+ * Actual file doesn't exist. 
Now, create a temporary file, pad it\n+ * and rename to the target file. The temporary file may exist from\n+ * the last failed attempt (failed during partial padding or\n+ * renaming or some other). If it exists, let's play safe and\n+ * delete it before creating a new one.\n+ */\n+ snprintf(tmpsuffixpath, sizeof(tmpsuffixpath), \"%s.%s\",\n+ targetpath, \"temp\");\n+\n+ if (dir_existsfile(tmpsuffixpath))\n+ {\n+ if (unlink(tmpsuffixpath) != 0)\n+ {\n+ dir_data->lasterrno = errno;\n\n+ return NULL;\n+ }\n+ }\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Thu, 4 Aug 2022 11:59:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 11:59 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sun, Jul 31, 2022 at 8:36 PM mahendrakar s\n> <mahendrakarforpg@gmail.com> wrote:\n> >\n> >> On Mon, 25 Jul 2022 at 16:42, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> Here's the v3 patch after rebasing.\n> >>\n> >> I just would like to reiterate the issue the patch is trying to solve:\n> >> At times (when no space is left on the device or when the process is\n> >> taken down forcibly (VM/container crash)), there can be leftover\n> >> uninitialized .partial files (note that padding i.e. filling 16MB WAL\n> >> files with all zeros is done in non-compression cases) due to which\n> >> pg_receivewal fails to come up after the crash. To address this, the\n> >> proposed patch makes the padding 16MB WAL files atomic (first write a\n> >> .temp file, pad it and durably rename it to the original .partial\n> >> file, ready to be filled with received WAL records). This approach is\n> >> similar to what core postgres achieves atomicity while creating new\n> >> WAL file (see how XLogFileInitInternal() creates xlogtemp.%d file\n> >> first and then how InstallXLogFileSegment() durably renames it to\n> >> original WAL file).\n> >>\n> >> Thoughts?\n> >\n> > Hi Bharath,\n> >\n> > Idea to atomically allocate WAL file by creating tmp file and renaming it is nice.\n>\n> Thanks for reviewing it.\n>\n> > I have one question though:\n> > How is partially temp file created will be cleaned if the VM crashes or out of disk space cases? Does it endup creating multiple files for every VM crash/disk space during process of pg_receivewal?\n> >\n> > Thoughts?\n>\n> It is handled in the patch, see [1].\n>\n> Attaching v4 patch herewith which now uses the temporary file suffix\n> '.tmp' as opposed to v3 patch '.temp'. 
This is just to be in sync with\n> other atomic file write codes in the core - autoprewarm,\n> pg_stat_statement, slot, basebacup, replorigin, snapbuild, receivelog\n> and so on.\n>\n> Please review the v4 patch.\n\nI've done some more testing today (hacked the code a bit by adding\npg_usleep(10000L); in pre-padding loop and crashing the pg_receivewal\nprocess to produce the warning [1]) and found that the better place to\nremove \".partial.tmp\" leftover files is in FindStreamingStart()\nbecause there we do a traversal of all the files in target directory\nalong the way to remove if \".partial.tmp\" file(s) is/are found.\n\nPlease review the v5 patch further.\n\n[1] pg_receivewal: warning: segment file\n\"0000000100000006000000B9.partial\" has incorrect size 15884288,\nskipping\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Mon, 8 Aug 2022 11:59:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "Hi Bharath,\nI reviewed your patch. Minor comments.\n\n1. Why are we not using durable_unlink instead of unlink to remove the\npartial tmp files?\n\n2. Below could be a simple if(shouldcreatetempfile){} else{} as in error\ncase we need to return NULL.\n+ if (errno != ENOENT || !shouldcreatetempfile)\n+ {\n+ dir_data->lasterrno = errno;\n+ return NULL;\n+ }\n+ else if (shouldcreatetempfile)\n+ {\n+ /*\n+ * Actual file doesn't exist. Now, create a temporary file pad it\n+ * and rename to the target file. The temporary file may exist from\n+ * the last failed attempt (failed during partial padding or\n+ * renaming or some other). If it exists, let's play safe and\n+ * delete it before creating a new one.\n+ */\n+ snprintf(tmpsuffixpath, MAXPGPATH, \"%s.tmp\", targetpath);\n+\n+ if (dir_existsfile(tmpsuffixpath))\n+ {\n+ if (unlink(tmpsuffixpath) != 0)\n+ {\n+ dir_data->lasterrno = errno;\n+ return NULL;\n+ }\n+ }\n+\n+ fd = open(tmpsuffixpath, flags | O_CREAT, pg_file_create_mode);\n+ if (fd < 0)\n+ {\n+ dir_data->lasterrno = errno;\n+ return NULL;\n+ }\n+ }\n\n\nOn Mon, 8 Aug 2022 at 11:59, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Aug 4, 2022 at 11:59 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Sun, Jul 31, 2022 at 8:36 PM mahendrakar s\n> > <mahendrakarforpg@gmail.com> wrote:\n> > >\n> > >> On Mon, 25 Jul 2022 at 16:42, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >> Here's the v3 patch after rebasing.\n> > >>\n> > >> I just would like to reiterate the issue the patch is trying to solve:\n> > >> At times (when no space is left on the device or when the process is\n> > >> taken down forcibly (VM/container crash)), there can be leftover\n> > >> uninitialized .partial files (note that padding i.e. filling 16MB WAL\n> > >> files with all zeros is done in non-compression cases) due to which\n> > >> pg_receivewal fails to come up after the crash. 
To address this, the\n> > >> proposed patch makes the padding 16MB WAL files atomic (first write a\n> > >> .temp file, pad it and durably rename it to the original .partial\n> > >> file, ready to be filled with received WAL records). This approach is\n> > >> similar to what core postgres achieves atomicity while creating new\n> > >> WAL file (see how XLogFileInitInternal() creates xlogtemp.%d file\n> > >> first and then how InstallXLogFileSegment() durably renames it to\n> > >> original WAL file).\n> > >>\n> > >> Thoughts?\n> > >\n> > > Hi Bharath,\n> > >\n> > > Idea to atomically allocate WAL file by creating tmp file and renaming\n> it is nice.\n> >\n> > Thanks for reviewing it.\n> >\n> > > I have one question though:\n> > > How is partially temp file created will be cleaned if the VM crashes\n> or out of disk space cases? Does it endup creating multiple files for\n> every VM crash/disk space during process of pg_receivewal?\n> > >\n> > > Thoughts?\n> >\n> > It is handled in the patch, see [1].\n> >\n> > Attaching v4 patch herewith which now uses the temporary file suffix\n> > '.tmp' as opposed to v3 patch '.temp'. 
This is just to be in sync with\n> > other atomic file write codes in the core - autoprewarm,\n> > pg_stat_statement, slot, basebacup, replorigin, snapbuild, receivelog\n> > and so on.\n> >\n> > Please review the v4 patch.\n>\n> I've done some more testing today (hacked the code a bit by adding\n> pg_usleep(10000L); in pre-padding loop and crashing the pg_receivewal\n> process to produce the warning [1]) and found that the better place to\n> remove \".partial.tmp\" leftover files is in FindStreamingStart()\n> because there we do a traversal of all the files in target directory\n> along the way to remove if \".partial.tmp\" file(s) is/are found.\n>\n> Please review the v5 patch further.\n>\n> [1] pg_receivewal: warning: segment file\n> \"0000000100000006000000B9.partial\" has incorrect size 15884288,\n> skipping\n>\n> --\n> Bharath Rupireddy\n> RDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n>",
"msg_date": "Fri, 19 Aug 2022 13:36:54 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 1:37 PM mahendrakar s\n<mahendrakarforpg@gmail.com> wrote:\n>\n> Hi Bharath,\n> I reviewed your patch. Minor comments.\n\nThanks.\n\n> 1. Why are we not using durable_unlink instead of unlink to remove the partial tmp files?\n\ndurable_unlink() issues fsync on the parent directory, if used, those\nfsync() calls will be per partial.tmp file. Moreover, durable_unlink()\nis backend-only, not available for tools i.e. FRONTEND code. If we\ndon't durably remove the pratial.tmp file, it will get deleted in the\nnext cycle anyways, so no problem there.\n\n> 2. Below could be a simple if(shouldcreatetempfile){} else{} as in error case we need to return NULL.\n\nYeah, that way it is much simpler.\n\nPlease review the attached v6 patch.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Fri, 19 Aug 2022 17:27:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "Changes look good to me.\n\nThanks,\nMahendrakar.\n\nOn Fri, 19 Aug 2022 at 17:28, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Fri, Aug 19, 2022 at 1:37 PM mahendrakar s\n> <mahendrakarforpg@gmail.com> wrote:\n> >\n> > Hi Bharath,\n> > I reviewed your patch. Minor comments.\n>\n> Thanks.\n>\n> > 1. Why are we not using durable_unlink instead of unlink to remove the\n> partial tmp files?\n>\n> durable_unlink() issues fsync on the parent directory, if used, those\n> fsync() calls will be per partial.tmp file. Moreover, durable_unlink()\n> is backend-only, not available for tools i.e. FRONTEND code. If we\n> don't durably remove the pratial.tmp file, it will get deleted in the\n> next cycle anyways, so no problem there.\n>\n> > 2. Below could be a simple if(shouldcreatetempfile){} else{} as in error\n> case we need to return NULL.\n>\n> Yeah, that way it is much simpler.\n>\n> Please review the attached v6 patch.\n>\n> --\n> Bharath Rupireddy\n> RDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n>",
"msg_date": "Sat, 20 Aug 2022 00:05:50 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 5:27 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please review the attached v6 patch.\n\nI'm attaching the v7 patch rebased on to the latest HEAD.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Sep 2022 08:03:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 8:03 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Aug 19, 2022 at 5:27 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Please review the attached v6 patch.\n>\n> I'm attaching the v7 patch rebased on to the latest HEAD.\n\nv7 patch was failing on Windows [1]. This is because of the full path\nname being sent to dir_existsfile() instead of just sending the temp\nfile name. I fixed the issue. Thanks Michael Paquier for an offlist\nchat.\n\nPSA v8 patch.\n\n[1]\nt/010_pg_basebackup.pl ... 134/?\n# Failed test 'pg_basebackup reports checksum mismatch stderr\n/(?^s:^WARNING.*checksum verification failed)/'\n# at t/010_pg_basebackup.pl line 769.\n# 'unrecognized win32 error code: 123WARNING:\nchecksum verification failed in file \"./base/5/16399\", block 0:\ncalculated 4C09 but expected B3F6\n# pg_basebackup: error: checksum error occurred\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 22 Sep 2022 07:16:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 07:16:41AM +0530, Bharath Rupireddy wrote:\n> t/010_pg_basebackup.pl ... 134/?\n> # Failed test 'pg_basebackup reports checksum mismatch stderr\n> /(?^s:^WARNING.*checksum verification failed)/'\n> # at t/010_pg_basebackup.pl line 769.\n> # 'unrecognized win32 error code: 123WARNING:\n> checksum verification failed in file \"./base/5/16399\", block 0:\n> calculated 4C09 but expected B3F6\n> # pg_basebackup: error: checksum error occurred\n\nShouldn't we extend the mapping table in win32error.c so as the\ninformation provided is more useful when seeing this error, then?\nThere could be other code path able to trigger this failure, or other\nhackers working on separate features that could benefit from this\nextra information.\n--\nMichael",
"msg_date": "Thu, 22 Sep 2022 13:37:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 10:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 22, 2022 at 07:16:41AM +0530, Bharath Rupireddy wrote:\n> > t/010_pg_basebackup.pl ... 134/?\n> > # Failed test 'pg_basebackup reports checksum mismatch stderr\n> > /(?^s:^WARNING.*checksum verification failed)/'\n> > # at t/010_pg_basebackup.pl line 769.\n> > # 'unrecognized win32 error code: 123WARNING:\n> > checksum verification failed in file \"./base/5/16399\", block 0:\n> > calculated 4C09 but expected B3F6\n> > # pg_basebackup: error: checksum error occurred\n>\n> Shouldn't we extend the mapping table in win32error.c so as the\n> information provided is more useful when seeing this error, then?\n> There could be other code path able to trigger this failure, or other\n> hackers working on separate features that could benefit from this\n> extra information.\n\nThanks. I will start a separate thread to discuss that.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Sep 2022 11:07:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 7:16 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> PSA v8 patch.\n\nI simplified the code a bit by using a fixed temporary file name\n(thanks Michael Paquier for the suggestion) much like the core does in\nXLogFileInitInternal(). If there's any left-over temp file from a\ncrash or failure in dir_open_for_write(), that gets deleted in the\nnext call.\n\nPlease review the v9 patch further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 13 Oct 2022 13:28:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\", but there has been\nno activity on this thread for 15+ months.\n\nSince there seems not much interest, I have changed the status to\n\"Returned with Feedback\" [1]. Feel free to propose a stronger use case\nfor the patch and add an entry for the same.\n\n======\n[1] https://commitfest.postgresql.org/46/3503/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 12:05:31 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal fail to streams when the partial file to write is\n not fully initialized present in the wal receiver directory"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reading some code around I noticed that b04aeb0a053e7 added a MaxLockMode\nbut didn't update the lock methods initialization. It shouldn't make much\ndifference in the long run but some consistency seems better to me.",
"msg_date": "Mon, 3 Jan 2022 14:47:22 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use MaxLockMode in lock methods initialization"
},
{
"msg_contents": "On Mon, Jan 03, 2022 at 02:47:22PM +0800, Julien Rouhaud wrote:\n> While reading some code around I noticed that b04aeb0a053e7 added a MaxLockMode\n> but didn't update the lock methods initialization. It shouldn't make much\n> difference in the long run but some consistency seems better to me.\n\nMakes sense to me. MaxLockMode is here for the same purpose as this\ninitialization area.\n--\nMichael",
"msg_date": "Mon, 3 Jan 2022 17:00:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use MaxLockMode in lock methods initialization"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Jan 03, 2022 at 02:47:22PM +0800, Julien Rouhaud wrote:\n>> While reading some code around I noticed that b04aeb0a053e7 added a MaxLockMode\n>> but didn't update the lock methods initialization. It shouldn't make much\n>> difference in the long run but some consistency seems better to me.\n\n> Makes sense to me. MaxLockMode is here for the same purpose as this\n> initialization area.\n\nAgreed. That aspect of b04aeb0a053e7 was a bit of a quick hack,\nand it didn't occur to me to look for other places where the symbol\ncould be used. But these two places are spot-on for it.\n\nPushed with a bit of comment-fiddling.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 12:27:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use MaxLockMode in lock methods initialization"
}
] |
[
{
"msg_contents": "Hi,\n\nplanner hook is frequently used in monitoring and advising extensions. \nThe call to this hook is implemented in the way, that the \nstandard_planner routine must be called at least once in the hook's call \nchain.\n\nBut, as I see in [1], it should allow us \"... replace the planner \naltogether\".\nIn such situation it haven't sense to call standard_planner at all. \nMoreover, if an extension make some expensive planning activity, \nmonitoring tools, like pg_stat_statements, can produce different \nresults, depending on a hook calling order.\nI thought about additional hooks, explicit hook priorities and so on. \nBut, maybe more simple solution is to describe requirements to such kind \nof extensions in the code and documentation (See patch in attachment)?\nIt would allow an extension developer legally check and log a situation, \nwhen the extension doesn't last in the call chain.\n\n\n[1] \nhttps://www.postgresql.org/message-id/flat/27516.1180053940%40sss.pgh.pa.us\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Mon, 3 Jan 2022 12:33:13 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Clarify planner_hook calling convention"
},
{
"msg_contents": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> writes:\n> planner hook is frequently used in monitoring and advising extensions. \n\nYeah.\n\n> The call to this hook is implemented in the way, that the \n> standard_planner routine must be called at least once in the hook's call \n> chain.\n> But, as I see in [1], it should allow us \"... replace the planner \n> altogether\".\n> In such situation it haven't sense to call standard_planner at all. \n\nThat's possible in theory, but who's going to do it in practice?\nThere is a monstrous amount of code you'd have to replace.\nMoreover, if you felt compelled to do it, it's likely because you\nare making fundamental changes elsewhere too, which means you are\nmore likely going to end up with a fork not an extension.\n\n> But, maybe more simple solution is to describe requirements to such kind \n> of extensions in the code and documentation (See patch in attachment)?\n> + * 2. If your extension implements some planning activity, write in the extension\n> + * docs a requirement to set the extension at the begining of shared libraries list.\n\nThis advice seems pretty unhelpful. If more than one extension is\ngetting into the planner_hook, they can't all be first.\n\n(Also, largely the same issue applies to very many of our other\nhooks.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 10:59:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Clarify planner_hook calling convention"
},
{
"msg_contents": "On 1/3/22 8:59 PM, Tom Lane wrote:\n> \"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> writes:\n>> planner hook is frequently used in monitoring and advising extensions.\n> \n> Yeah.\n> \n>> The call to this hook is implemented in the way, that the\n>> standard_planner routine must be called at least once in the hook's call\n>> chain.\n>> But, as I see in [1], it should allow us \"... replace the planner\n>> altogether\".\n>> In such situation it haven't sense to call standard_planner at all.\n> \n> That's possible in theory, but who's going to do it in practice?\n\nWe use it in an extension that freezes a plan for specific parameterized \nquery (using plancache + shared storage) - exactly the same technique as \nextended query protocol does, but spreading across all backends.\nAs I know, the community doesn't like such features, and we use it in \nenterprise code only.\n\n>> But, maybe more simple solution is to describe requirements to such kind\n>> of extensions in the code and documentation (See patch in attachment)?\n>> + * 2. If your extension implements some planning activity, write in the extension\n>> + * docs a requirement to set the extension at the begining of shared libraries list.\n> \n> This advice seems pretty unhelpful. If more than one extension is\n> getting into the planner_hook, they can't all be first.\n\nI want to check planner_hook on startup and log an error if it isn't \nNULL and give a user an advice how to fix it. I want to legalize this \nlogic, if permissible.\n\n> \n> (Also, largely the same issue applies to very many of our other\n> hooks.)\n\nAgreed. Interference between extensions is a very annoying issue now.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Tue, 4 Jan 2022 15:44:53 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Clarify planner_hook calling convention"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile browsing the code, noticed the extra spaces after the function name.\nRemoved the same in the attached patch.\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com",
"msg_date": "Mon, 3 Jan 2022 18:35:55 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Remove extra spaces"
},
{
"msg_contents": "Suraj Kharage <suraj.kharage@enterprisedb.com> writes:\n> While browsing the code, noticed the extra spaces after the function name.\n> Removed the same in the attached patch.\n\nI'm afraid that's a waste of time because the next pgindent run\nwill just put them back. \"numeric\" is also a typedef name and\nthis usage of it seems to confuse pgindent. If you wanted to\ndive into the pgindent code and fix that bug in it, that'd be\ngreat, but the return-on-effort is probably going to be poor.\n\n(Another possibility is to change the C function name.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 11:23:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove extra spaces"
}
] |
[
{
"msg_contents": "I tried debugging PostgreSQL to better understand how it works. It worked\nfine a day ago, but for some reason I have issues with debugging now:\n\n- If I put a breakpoint before I start the process then everything works\nfine\n- But if I put/remove a breakpoint after it's fully initialized - the\nprocess just stops\n\nAnd when reading the next command postgres.c, I see that input_message is\nempty. I assume CLion sends a signal which awakens PostgreSQL, but there's\nno data on the input? But should PostgreSQL quit in such a situation?\n\nThe way I build and start:\nmake clean\n./configure --enable-cassert --enable-debug CFLAGS=\"-ggdb -O0 -g3\n-fno-omit-frame-pointer\"\nmake\nmake install\n/usr/local/pgsql/bin/initdb -D /Users/stas/projects/postgres/data\n\nStarting command:\n/usr/local/pgsql/bin/postgres --single -D\n/Users/stas/projects/postgres/data postgres",
"msg_date": "Mon, 3 Jan 2022 18:54:49 +0300",
"msg_from": "Stanislav Bashkyrtsev <stanislav.bashkirtsev@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL stops when adding a breakpoint in CLion"
},
{
"msg_contents": "On 1/3/22 16:54, Stanislav Bashkyrtsev wrote:\n> I tried debugging PostgreSQL to better understand how it works. It \n> worked fine a day ago, but for some reason I have issues with debugging now:\n> \n> - If I put a breakpoint before I start the process then everything works \n> fine\n> - But if I put/remove a breakpoint after it's fully initialized - the \n> process just stops\n> \n> And when reading the next command postgres.c, I see that \n> input_message is empty. I assume CLion sends a signal which awakens \n> PostgreSQL, but there's no data on the input? But should PostgreSQL quit \n> in such a situation?\n> \n\nWhy do you think postgres quits? AFAIK CLion uses gdb or lldb for \ndebugging, which are the debugger of choice for many (most?) hackers on \nthis list. So that should work fine.\n\n> The way I build and start:\n> make clean\n> ./configure --enable-cassert --enable-debug CFLAGS=\"-ggdb -O0 -g3 \n> -fno-omit-frame-pointer\"\n> make\n> make install\n> /usr/local/pgsql/bin/initdb -D /Users/stas/projects/postgres/data\n> \n> Starting command:\n> /usr/local/pgsql/bin/postgres --single -D \n> /Users/stas/projects/postgres/data postgres\n\nNow sure why you start it in single-user mode, but I don't think that \nshould affect debugging. Try redirecting the output to a log file, maybe \nthat'll tell you what happened.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jan 2022 19:52:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL stops when adding a breakpoint in CLion"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 1/3/22 16:54, Stanislav Bashkyrtsev wrote:\n>> - If I put a breakpoint before I start the process then everything works \n>> fine\n>> - But if I put/remove a breakpoint after it's fully initialized - the \n>> process just stops\n\n> Why do you think postgres quits? AFAIK CLion uses gdb or lldb for \n> debugging, which are the debugger of choice for many (most?) hackers on \n> this list. So that should work fine.\n\nFWIW, it's normal in gdb that if you attach to an existing process,\nthe process stops until you say \"continue\". I know nothing of CLion,\nbut it likely follows that convention too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 14:08:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL stops when adding a breakpoint in CLion"
},
{
"msg_contents": "> Why do you think postgres quits?\nThe process was running and then it stopped. And in the console I see:\n2022-01-03 23:23:29.495 MSK [76717] LOG: checkpoint starting: shutdown\nimmediate\n2022-01-03 23:23:29.498 MSK [76717] LOG: checkpoint complete: wrote 3\nbuffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s,\nsync=0.001 s, total=0.005 s; sync files=2, longest=0.001 s, average=0.001\ns; distance=0 kB, estimate=0 kB\n\n> AFAIK CLion uses gdb or lldb for\n> debugging, which are the debugger of choice for many (most?) hackers on\n> this list. So that should work fine.\nYep, and it worked for me too.. Yesterday :) I see that CLion uses LLDB on\nMacOS by default.\n\n> Now sure why you start it in single-user mode, but I don't think that\n> should affect debugging.\nWell, --single seems convenient because CLion starts that process and\nattaches to it right away. I don't have to look for a way of attaching to\nthe forks. Maybe it's a good point to mention that I'm not very familiar\nwith developing in C/C++ and therefore have a vague understanding of how to\nset up an efficient dev environment. Moreover in multi-user mode CLion/LLDB\nkeeps stopping in postmaster.c:\nselres = select(nSockets, &rmask, NULL, NULL, &timeout);\n\n>Try redirecting the output to a log file, maybe\n> that'll tell you what happened.\nI see all the output in the console, so not sure what redirecting to a file\nwould achieve.\n\n\nOn Mon, Jan 3, 2022 at 10:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > On 1/3/22 16:54, Stanislav Bashkyrtsev wrote:\n> >> - If I put a breakpoint before I start the process then everything\n> works\n> >> fine\n> >> - But if I put/remove a breakpoint after it's fully initialized - the\n> >> process just stops\n>\n> > Why do you think postgres quits? AFAIK CLion uses gdb or lldb for\n> > debugging, which are the debugger of choice for many (most?) 
hackers on\n> > this list. So that should work fine.\n>\n> FWIW, it's normal in gdb that if you attach to an existing process,\n> the process stops until you say \"continue\". I know nothing of CLion,\n> but it likely follows that convention too.\n>\n> regards, tom lane\n>\n\n> Why do you think postgres quits? The process was running and then it stopped. And in the console I see:2022-01-03 23:23:29.495 MSK [76717] LOG: checkpoint starting: shutdown immediate2022-01-03 23:23:29.498 MSK [76717] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.005 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB > AFAIK CLion uses gdb or lldb for> debugging, which are the debugger of choice for many (most?) hackers on> this list. So that should work fine.Yep, and it worked for me too.. Yesterday :) I see that CLion uses LLDB on MacOS by default. > Now sure why you start it in single-user mode, but I don't think that> should affect debugging.Well, --single seems convenient because CLion starts that process and attaches to it right away. I don't have to look for a way of attaching to the forks. Maybe it's a good point to mention that I'm not very familiar with developing in C/C++ and therefore have a vague understanding of how to set up an efficient dev environment. 
Moreover in multi-user mode CLion/LLDB keeps stopping in postmaster.c:selres = select(nSockets, &rmask, NULL, NULL, &timeout);>Try redirecting the output to a log file, maybe> that'll tell you what happened.I see all the output in the console, so not sure what redirecting to a file would achieve.On Mon, Jan 3, 2022 at 10:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 1/3/22 16:54, Stanislav Bashkyrtsev wrote:\n>> - If I put a breakpoint before I start the process then everything works \n>> fine\n>> - But if I put/remove a breakpoint after it's fully initialized - the \n>> process just stops\n\n> Why do you think postgres quits? AFAIK CLion uses gdb or lldb for \n> debugging, which are the debugger of choice for many (most?) hackers on \n> this list. So that should work fine.\n\nFWIW, it's normal in gdb that if you attach to an existing process,\nthe process stops until you say \"continue\". I know nothing of CLion,\nbut it likely follows that convention too.\n\n regards, tom lane",
"msg_date": "Mon, 3 Jan 2022 23:26:32 +0300",
"msg_from": "Stanislav Bashkyrtsev <stanislav.bashkirtsev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL stops when adding a breakpoint in CLion"
},
{
"msg_contents": "Stanislav Bashkyrtsev <stanislav.bashkirtsev@gmail.com> writes:\n>> Why do you think postgres quits?\n\n> The process was running and then it stopped. And in the console I see:\n> 2022-01-03 23:23:29.495 MSK [76717] LOG: checkpoint starting: shutdown\n> immediate\n\nIn a standalone backend, I think there are only 3 ways to get to\nnormal shutdown:\n\t* SIGTERM\n\t* SIGQUIT\n\t* EOF on stdin\n\nIt's not very clear which of those your setup is triggering.\n\nIn any case, debugging standalone mode is very very rarely\nwhat you should be doing; it's only vaguely related to normal\noperation, plus you lack all the creature comforts of psql.\nThe usual thing is to start a normal interactive session, find out\nthe PID of its connected backend process (\"select pg_backend_pid()\"\nis a reliable way), and then attach to that process with GDB or your\ndebugger of choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 17:02:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL stops when adding a breakpoint in CLion"
},
{
"msg_contents": "> In a standalone backend, I think there are only 3 ways to get to\n> normal shutdown:\n> * SIGTERM\n> * SIGQUIT\n> * EOF on stdin\n\nI debugged a bit more and I see that getc() returns with -1 in\ninteractive_getc() which is interpreted as EOF:\nc = getc(stdin);\n\nI see that errno == EINTR when it happens. This is as much as I can figure\nout in C, so I'm leaving it at that. Your advice about debugging the\nbackend process (\"select pg_backend_pid()\") instead of running in a\nsingle-user mode worked for me, thank you!\n\nOn Tue, Jan 4, 2022 at 1:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Stanislav Bashkyrtsev <stanislav.bashkirtsev@gmail.com> writes:\n> >> Why do you think postgres quits?\n>\n> > The process was running and then it stopped. And in the console I see:\n> > 2022-01-03 23:23:29.495 MSK [76717] LOG: checkpoint starting: shutdown\n> > immediate\n>\n> In a standalone backend, I think there are only 3 ways to get to\n> normal shutdown:\n> * SIGTERM\n> * SIGQUIT\n> * EOF on stdin\n>\n> It's not very clear which of those your setup is triggering.\n>\n> In any case, debugging standalone mode is very very rarely\n> what you should be doing; it's only vaguely related to normal\n> operation, plus you lack all the creature comforts of psql.\n> The usual thing is to start a normal interactive session, find out\n> the PID of its connected backend process (\"select pg_backend_pid()\"\n> is a reliable way), and then attach to that process with GDB or your\n> debugger of choice.\n>\n> regards, tom lane\n>\n\n> In a standalone backend, I think there are only 3 ways to get to> normal shutdown:> * SIGTERM> * SIGQUIT> * EOF on stdinI debugged a bit more and I see that getc() returns with -1 in interactive_getc() which is interpreted as EOF:c = getc(stdin);I see that errno == EINTR when it happens. This is as much as I can figure out in C, so I'm leaving it at that. 
Your advice about debugging the backend process (\"select pg_backend_pid()\") instead of running in a single-user mode worked for me, thank you!On Tue, Jan 4, 2022 at 1:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Stanislav Bashkyrtsev <stanislav.bashkirtsev@gmail.com> writes:\n>> Why do you think postgres quits?\n\n> The process was running and then it stopped. And in the console I see:\n> 2022-01-03 23:23:29.495 MSK [76717] LOG: checkpoint starting: shutdown\n> immediate\n\nIn a standalone backend, I think there are only 3 ways to get to\nnormal shutdown:\n * SIGTERM\n * SIGQUIT\n * EOF on stdin\n\nIt's not very clear which of those your setup is triggering.\n\nIn any case, debugging standalone mode is very very rarely\nwhat you should be doing; it's only vaguely related to normal\noperation, plus you lack all the creature comforts of psql.\nThe usual thing is to start a normal interactive session, find out\nthe PID of its connected backend process (\"select pg_backend_pid()\"\nis a reliable way), and then attach to that process with GDB or your\ndebugger of choice.\n\n regards, tom lane",
"msg_date": "Tue, 4 Jan 2022 11:26:28 +0300",
"msg_from": "Stanislav Bashkyrtsev <stanislav.bashkirtsev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL stops when adding a breakpoint in CLion"
}
]
[
{
"msg_contents": "The attached proposed patch removes some ancient infrastructure for\nmanually testing hot standby. I doubt anyone has used this in years,\nbecause AFAICS there is nothing here that's not done better by the\nsrc/test/recovery TAP tests. (Or if there is, we ought to migrate\nit into the TAP tests.)\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 03 Jan 2022 16:50:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Proposal: remove obsolete hot-standby testing infrastructure"
},
{
"msg_contents": "Hello Tom,\n04.01.2022 00:50, Tom Lane wrote:\n> The attached proposed patch removes some ancient infrastructure for\n> manually testing hot standby. I doubt anyone has used this in years,\n> because AFAICS there is nothing here that's not done better by the\n> src/test/recovery TAP tests. (Or if there is, we ought to migrate\n> it into the TAP tests.)\n>\n> Thoughts?\nIt's hardly that important, but we (Postgres Pro) run this test\nregularly to check for primary-standby compatibility. It's useful when\nchecking binary packages from different minor versions. For example, we\nsetup postgresql-14.0 and postgresql-14.1 aside (renaming one\ninstallation' directory and changing it's port) and perform the test.\nWhat've found with it was e.g. incompatibility due to linkage of\ndifferent libicu versions (that was PgPro-only issue). I don't remember\nwhether we found something related to PostgreSQL itself, but we\ndefinitely use this test and I'm not sure how to replace it in our setup\nwith a TAP test. On the other hand, testing binaries is not accustomed\nin the community yet, so when such testing will be adopted, probably a\nbrand new set of tests should emerge.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 4 Jan 2022 11:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: remove obsolete hot-standby testing infrastructure"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 04.01.2022 00:50, Tom Lane wrote:\n>> The attached proposed patch removes some ancient infrastructure for\n>> manually testing hot standby. I doubt anyone has used this in years,\n>> because AFAICS there is nothing here that's not done better by the\n>> src/test/recovery TAP tests. (Or if there is, we ought to migrate\n>> it into the TAP tests.)\n\n> It's hardly that important, but we (Postgres Pro) run this test\n> regularly to check for primary-standby compatibility. It's useful when\n> checking binary packages from different minor versions. For example, we\n> setup postgresql-14.0 and postgresql-14.1 aside (renaming one\n> installation' directory and changing it's port) and perform the test.\n> What've found with it was e.g. incompatibility due to linkage of\n> different libicu versions (that was PgPro-only issue). I don't remember\n> whether we found something related to PostgreSQL itself, but we\n> definitely use this test and I'm not sure how to replace it in our setup\n> with a TAP test. On the other hand, testing binaries is not accustomed\n> in the community yet, so when such testing will be adopted, probably a\n> brand new set of tests should emerge.\n\nOh, interesting. I definitely concur that testing compatibility of\ndifferent builds or minor versions is an important use-case. And\nI concede that making src/test/recovery do it would be tricky and\na bit out-of-scope. But having said that, the hs_standby_* scripts\nseem like a poor fit for the job too. AFAICS they don't really\ntest any user data type except integer (so I'm surprised that they\nlocated an ICU incompatibility for you); and they spend a lot of\neffort on stuff that I doubt is relevant because it *is* covered\nby the TAP tests.\n\nIf I were trying to test that topic using available spare parts,\nwhat I'd do is run the regular regression tests on the primary\nand see if the standby could track it. 
Maybe pg_dump from both\nservers afterwards and see if the results match, a la the pg_upgrade\ntest. Bonus points for a script that could run some other pg_regress\nsuite such as one of the contrib modules, because then you could\ncheck compatibility of those too.\n\nI'm happy to keep the hs_standby_* scripts if there's a live use-case\nfor them; but I don't see what they're doing for you that wouldn't be\ndone better by other pg_regress suites.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jan 2022 10:33:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: remove obsolete hot-standby testing infrastructure"
},
{
"msg_contents": "04.01.2022 18:33, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> It's hardly that important, but we (Postgres Pro) run this test\n>> regularly to check for primary-standby compatibility. It's useful when\n>> checking binary packages from different minor versions. For example, we\n>> setup postgresql-14.0 and postgresql-14.1 aside (renaming one\n>> installation' directory and changing it's port) and perform the test.\n>> What've found with it was e.g. incompatibility due to linkage of\n>> different libicu versions (that was PgPro-only issue). I don't remember\n>> whether we found something related to PostgreSQL itself, but we\n>> definitely use this test and I'm not sure how to replace it in our setup\n>> with a TAP test. On the other hand, testing binaries is not accustomed\n>> in the community yet, so when such testing will be adopted, probably a\n>> brand new set of tests should emerge.\n> Oh, interesting. I definitely concur that testing compatibility of\n> different builds or minor versions is an important use-case. And\n> I concede that making src/test/recovery do it would be tricky and\n> a bit out-of-scope. But having said that, the hs_standby_* scripts\n> seem like a poor fit for the job too. 
AFAICS they don't really\n> test any user data type except integer (so I'm surprised that they\n> located an ICU incompatibility for you); and they spend a lot of\n> effort on stuff that I doubt is relevant because it *is* covered\n> by the TAP tests.\nAn ICU incompatibility was detected due to our invention [1] \"default\ncollation\" that is checked upon connection (before any query processing):\n--- C:/tmp/.../src/test/regress/expected/hs_standby_check.out \n2021-10-14 04:07:38.000000000 +0200\n+++ C:/tmp/.../src/test/regress/results/hs_standby_check.out \n2021-10-14 06:06:12.004043500 +0200\n@@ -1,3 +1,6 @@\n+WARNING: collation \"default\" has version mismatch\n\n+DETAIL: The collation in the database was created using version\n153.64, but the operating system provides version 153.14.\n\n+HINT: Check all objects affected by this collation and run ALTER\nCOLLATION pg_catalog.\"default\" REFRESH VERSION\n\n --\n -- Hot Standby tests\n --\n\nI admit that we decided to use this test mainly because it exists and\ndescribed in the documentation, not because it seemed very useful. It's\nusage increased test coverage without a doubt, as it requires a rather\nnon-trivial setup (similar setups performed by TAP tests, but not with\npre-packaged binaries).\n> If I were trying to test that topic using available spare parts,\n> what I'd do is run the regular regression tests on the primary\n> and see if the standby could track it. Maybe pg_dump from both\n> servers afterwards and see if the results match, a la the pg_upgrade\n> test. Bonus points for a script that could run some other pg_regress\n> suite such as one of the contrib modules, because then you could\n> check compatibility of those too.\nThanks for the idea! We certainly will implement something like that\nwhen we start testing packages for v15. 
We've already learned to compare\ndumps before/after minor upgrade, so we could reuse that logic for this\ntest too.\n> I'm happy to keep the hs_standby_* scripts if there's a live use-case\n> for them; but I don't see what they're doing for you that wouldn't be\n> done better by other pg_regress suites.\nYes, I will not miss the test in case you will remove it. I just wanted\nto mention that we use(d) it in our testing more or less successfully.\n\n[1]\nhttps://www.postgresql.org/message-id/37A534BE-CBF7-467C-B096-0AAD25091A9F%40yandex-team.ru\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 4 Jan 2022 22:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: remove obsolete hot-standby testing infrastructure"
},
{
"msg_contents": "On 03.01.22 22:50, Tom Lane wrote:\n> The attached proposed patch removes some ancient infrastructure for\n> manually testing hot standby. I doubt anyone has used this in years,\n> because AFAICS there is nothing here that's not done better by the\n> src/test/recovery TAP tests. (Or if there is, we ought to migrate\n> it into the TAP tests.)\n\nI looked into this some time ago and concluded that this test contains a \nsignificant amount of testing that isn't obviously done anywhere else. \nI don't have the notes anymore, and surely some things have progressed \nsince, but I wouldn't just throw the old test suite away without \nactually checking.\n\n\n",
"msg_date": "Wed, 5 Jan 2022 12:27:33 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: remove obsolete hot-standby testing infrastructure"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 03.01.22 22:50, Tom Lane wrote:\n>> The attached proposed patch removes some ancient infrastructure for\n>> manually testing hot standby.\n\n> I looked into this some time ago and concluded that this test contains a \n> significant amount of testing that isn't obviously done anywhere else. \n> I don't have the notes anymore, and surely some things have progressed \n> since, but I wouldn't just throw the old test suite away without \n> actually checking.\n\nFair enough ... so I looked, and there's not much at all that\nI'm worried about.\n\nhs_standby_allowed:\nThis is basically checking that the standby can see data from\nthe primary, which we surely have covered. Although it does\nalso cover propagation of nextval, which AFAICS is not tested\nin src/test/recovery, so perhaps that's worth troubling over.\n\nThere are also some checks that particular commands are allowed\non the standby, which seem to me to be not too helpful;\nsee also comments on the next file.\n\nhs_standby_disallowed:\nInverse of the above: check that some commands are disallowed.\nWe check some of these in 001_stream_rep.pl, and given the current\ncode structure in utility.c (ClassifyUtilityCommandAsReadOnly etc),\nI do not see much point in adding more test cases of the same sort.\nThe only likely new bug in that area would be misclassification of\nsome new command, and no amount of testing of existing cases will\ncatch that.\n\nThere are also tests that particular functions are disallowed, which\nisn't something that goes through ClassifyUtilityCommandAsReadOnly.\nNonetheless, adding more test cases here wouldn't help catch future\noversights of that type, so I remain unexcited.\n\nhs_standby_functions:\nMostly also checking that things are disallowed. 
There's also\na test of pg_cancel_backend, which is cute but probably suffers\nfrom timing instability (i.e., delayed arrival of the signal\nmight change the output). Moreover, pg_cancel_backend is already\ncovered in the isolation tests, and I see no reason to think\nit'd operate differently on a standby.\n\nhs_standby_check:\nChecks pg_is_in_recovery(), which is checked far more thoroughly\nby pg_ctl/t/003_promote.pl.\n\nhs_primary_extremes:\nChecks that we can cope with deep subtransaction nesting.\nMaybe this is worth preserving, but I sort of doubt it ---\nthe standby doesn't even see the nesting does it?\nAlso checks that the standby can cope with 257 exclusive\nlocks at once, which corresponds to no interesting limit\nthat I know of.\n\n\nSo basically, I'd be inclined to add a couple of tests of\nsequence-update propagation to src/test/recovery and\ncall it good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Jan 2022 20:18:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: remove obsolete hot-standby testing infrastructure"
}
]
[
{
"msg_contents": "I'm trying to write a C-language function to be compiled into a\nshared module to be loaded by Postgres. In it, I have the OID of a function\nand I need to get information from the pg_proc table.\n\nSo far, I have:\n\nHeapTuple procTuple;\n> Form_pg_proc procStruct;\n\n\n> procTuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcoid));\n> if(!HeapTupleIsValid(procTuple))\n> ereport(ERROR, errmsg(\"cache lookup failed for function %u.\",\n> funcoid));\n>\n\n\nprocStruct = (Form_pg_proc) GETSTRUCT(procTuple);\n\n\nI can get fields like procStruct->prokind and procStruct->proretset.\n\nHowever, I get a compiler error when I try to access procStruct->proargmodes.\nI know that this is because this field is in the CATALOG_VARLEN block which\nmakes it invisible to the compiler.\n\nWhat is the proper way to get this field?\n\n -Ed\n\nI'm trying to write a C-language function to be compiled into a shared module to be loaded by Postgres. In it, I have the OID of a function and I need to get information from the pg_proc table. So far, I have:HeapTuple procTuple;Form_pg_proc procStruct;procTuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcoid));if(!HeapTupleIsValid(procTuple)) ereport(ERROR, errmsg(\"cache lookup failed for function %u.\", funcoid)); procStruct = (Form_pg_proc) GETSTRUCT(procTuple);I can get fields like procStruct->prokind and procStruct->proretset.However, I get a compiler error when I try to access procStruct->proargmodes. I know that this is because this field is in the CATALOG_VARLEN block which makes it invisible to the compiler. What is the proper way to get this field? -Ed",
"msg_date": "Mon, 3 Jan 2022 17:23:54 -0500",
"msg_from": "Ed Behn <ed@behn.us>",
"msg_from_op": true,
"msg_subject": "Accessing fields past CATALOG_VARLEN"
},
{
"msg_contents": "On 01/03/22 17:23, Ed Behn wrote:\n> However, I get a compiler error when I try to access procStruct->proargmodes.\n> I know that this is because this field is in the CATALOG_VARLEN block which\n> makes it invisible to the compiler.\n> \n> What is the proper way to get this field?\n\nYou can use SysCacheGetAttr with the attribute number. It knows all the\nmagic needed to find the right offset, possibly decompress, etc.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 3 Jan 2022 17:32:56 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Accessing fields past CATALOG_VARLEN"
},
{
"msg_contents": "Ed Behn <ed@behn.us> writes:\n> I can get fields like procStruct->prokind and procStruct->proretset.\n> However, I get a compiler error when I try to access procStruct->proargmodes.\n> I know that this is because this field is in the CATALOG_VARLEN block which\n> makes it invisible to the compiler.\n> What is the proper way to get this field?\n\nSysCacheGetAttr(). There are examples all over the tree, but\none that's specific to proargmodes (and also illustrates the\nbest practices for deciphering its value) is in\nparser/analyze.c's transformCallStmt().\n\nYou should also ask yourself if you really *need* to examine\nproargmodes for yourself, or if there's a utility function\nsomewhere that will compute what you need to know.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jan 2022 17:35:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Accessing fields past CATALOG_VARLEN"
}
]
[
{
"msg_contents": "Hello hackers,\n\nWhile testing the index-only scan fix, I've discovered that replacing\nthe index-only scan with the index scan changes contrib/btree_gist\noutput because index-only scan for btree_gist returns a string without\npadding.\nA simple demonstration (based on btree_gist/sql/char.sql):\nCREATE EXTENSION btree_gist;\n\nCREATE TABLE chartmp (a char(32));\nINSERT INTO chartmp VALUES('31b0');\n\nCREATE INDEX charidx ON chartmp USING gist ( a );\nSET enable_seqscan=off;\nEXPLAIN VERBOSE SELECT *, octet_length(a) FROM chartmp WHERE a BETWEEN\n'31a' AND '31c';\nSELECT *, octet_length(a) FROM chartmp WHERE a BETWEEN '31a' AND '31c';\n\nSET enable_indexonlyscan=off;\nEXPLAIN VERBOSE SELECT *, octet_length(a) FROM chartmp WHERE a BETWEEN\n'31a' AND '31c';\nSELECT *, octet_length(a) FROM chartmp WHERE a BETWEEN '31a' AND '31c';\n\n\n QUERY\nPLAN \n------------------------------------------------------------------------------\n Index Only Scan using charidx on chartmp (cost=0.12..8.15 rows=1\nwidth=136)\n Index Cond: ((a >= '31a'::bpchar) AND (a <= '31c'::bpchar))\n(2 rows)\n\n a | octet_length\n------+--------------\n 31b0 | 4\n(1 row)\n\n\n QUERY PLAN \n-------------------------------------------------------------------------\n Index Scan using charidx on chartmp (cost=0.12..8.15 rows=1 width=136)\n Index Cond: ((a >= '31a'::bpchar) AND (a <= '31c'::bpchar))\n(2 rows)\n\n a | octet_length\n----------------------------------+--------------\n 31b0 | 32\n(1 row)\n\n\nIt seems that loosing blank padding is incorrect (btree and btree_gin\npreserve padding with index-only scan) but it's recorded in\ncontrib/btree_gist/expected/char.out.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 4 Jan 2022 17:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> While testing the index-only scan fix, I've discovered that replacing\n> the index-only scan with the index scan changes contrib/btree_gist\n> output because index-only scan for btree_gist returns a string without\n> padding.\n\nUgh, yeah. This seems to be because gbt_bpchar_compress() strips\ntrailing spaces (using rtrim1) before storing the value. The\nidea evidently is to simplify gbt_bpchar_consistent, but it's not\nacceptable if the opclass is supposed to support index-only scan.\n\nI see two ways to fix this:\n\n* Disallow index-only scan, by removing the fetch function for this\nopclass. This'd require a module version bump, so people wouldn't\nget that fix automatically.\n\n* Change gbt_bpchar_compress to not trim spaces (it becomes just\nlike gbt_text_compress), and adapt gbt_bpchar_consistent to cope.\nThis does nothing for the problem immediately, unless you REINDEX\naffected indexes --- but over time an index's entries would get\nreplaced with untrimmed versions.\n\nI also wondered if we could make the fetch function reconstruct the\npadding, but it doesn't seem to have access to the necessary info.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Jan 2022 14:19:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "04.01.2022 22:19, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> While testing the index-only scan fix, I've discovered that replacing\n>> the index-only scan with the index scan changes contrib/btree_gist\n>> output because index-only scan for btree_gist returns a string without\n>> padding.\n> Ugh, yeah. This seems to be because gbt_bpchar_compress() strips\n> trailing spaces (using rtrim1) before storing the value. The\n> idea evidently is to simplify gbt_bpchar_consistent, but it's not\n> acceptable if the opclass is supposed to support index-only scan.\n>\n> I see two ways to fix this:\n>\n> * Disallow index-only scan, by removing the fetch function for this\n> opclass. This'd require a module version bump, so people wouldn't\n> get that fix automatically.\n>\n> * Change gbt_bpchar_compress to not trim spaces (it becomes just\n> like gbt_text_compress), and adapt gbt_bpchar_consistent to cope.\n> This does nothing for the problem immediately, unless you REINDEX\n> affected indexes --- but over time an index's entries would get\n> replaced with untrimmed versions.\nI think that the second way is preferable in the long run. It doesn't\nneed an explanation after years, why index-only scan is not supported\nfor that type. One-time mentioning the change and the need for REINDEX\nin release notes seems more future-oriented to me.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 5 Jan 2022 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "\nOn Wed, 05 Jan 2022 at 03:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> While testing the index-only scan fix, I've discovered that replacing\n>> the index-only scan with the index scan changes contrib/btree_gist\n>> output because index-only scan for btree_gist returns a string without\n>> padding.\n>\n> Ugh, yeah. This seems to be because gbt_bpchar_compress() strips\n> trailing spaces (using rtrim1) before storing the value. The\n> idea evidently is to simplify gbt_bpchar_consistent, but it's not\n> acceptable if the opclass is supposed to support index-only scan.\n>\n> I see two ways to fix this:\n>\n> * Disallow index-only scan, by removing the fetch function for this\n> opclass. This'd require a module version bump, so people wouldn't\n> get that fix automatically.\n>\n> * Change gbt_bpchar_compress to not trim spaces (it becomes just\n> like gbt_text_compress), and adapt gbt_bpchar_consistent to cope.\n> This does nothing for the problem immediately, unless you REINDEX\n> affected indexes --- but over time an index's entries would get\n> replaced with untrimmed versions.\n>\n> I also wondered if we could make the fetch function reconstruct the\n> padding, but it doesn't seem to have access to the necessary info.\n>\n\nIf we fix this In the second way, the range query has the same results\nin both seq scan and index only scan. However, it will incur other\nproblems. For the following query:\n\nSELECT *, octet_length(a) FROM chartmp WHERE a = '31b0';\n\nCurrently, we can get\n\n a | octet_length\n------+--------------\n 31b0 | 4\n\nAfter fixed, we cannot get any result. 
For the equality condition,\nwe must add the extra spaces to make it work.\n\nHere is a patch for POC testing.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\ndiff --git a/contrib/btree_gist/btree_text.c b/contrib/btree_gist/btree_text.c\nindex 8019d11281..5fd425047f 100644\n--- a/contrib/btree_gist/btree_text.c\n+++ b/contrib/btree_gist/btree_text.c\n@@ -121,16 +121,7 @@ gbt_bpchar_compress(PG_FUNCTION_ARGS)\n \t}\n \n \tif (entry->leafkey)\n-\t{\n-\n-\t\tDatum\t\td = DirectFunctionCall1(rtrim1, entry->key);\n-\t\tGISTENTRY\ttrim;\n-\n-\t\tgistentryinit(trim, d,\n-\t\t\t\t\t entry->rel, entry->page,\n-\t\t\t\t\t entry->offset, true);\n-\t\tretval = gbt_var_compress(&trim, &tinfo);\n-\t}\n+\t\tretval = gbt_var_compress(entry, &tinfo);\n \telse\n \t\tretval = entry;\n \n@@ -179,7 +170,6 @@ gbt_bpchar_consistent(PG_FUNCTION_ARGS)\n \tbool\t\tretval;\n \tGBT_VARKEY *key = (GBT_VARKEY *) DatumGetPointer(entry->key);\n \tGBT_VARKEY_R r = gbt_var_key_readable(key);\n-\tvoid\t *trim = (void *) DatumGetPointer(DirectFunctionCall1(rtrim1, PointerGetDatum(query)));\n \n \t/* All cases served by this function are exact */\n \t*recheck = false;\n@@ -189,7 +179,7 @@ gbt_bpchar_consistent(PG_FUNCTION_ARGS)\n \t\ttinfo.eml = pg_database_encoding_max_length();\n \t}\n \n-\tretval = gbt_var_consistent(&r, trim, strategy, PG_GET_COLLATION(),\n+\tretval = gbt_var_consistent(&r, query, strategy, PG_GET_COLLATION(),\n \t\t\t\t\t\t\t\tGIST_LEAF(entry), &tinfo, fcinfo->flinfo);\n \tPG_RETURN_BOOL(retval);\n }\n\n\n",
"msg_date": "Thu, 06 Jan 2022 00:24:40 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Here is a patch for POC testing.\n\nThis is certainly not right. You've made gbt_bpchar_consistent\nwork identically to gbt_text_consistent, but it needs to implement\na test equivalent to bpchareq, ie ignore trailing spaces in both\ninputs.\n\nThe minimum-effort fix would be to apply rtrim1 to both strings\nin gbt_bpchar_consistent, but I wonder if we can improve on that\nby pushing the ignore-trailing-spaces behavior further down.\nI didn't look yet at whether gbt_var_consistent can support\nany type-specific behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Jan 2022 11:34:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "\nOn Thu, 06 Jan 2022 at 00:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Here is a patch for POC testing.\n>\n> This is certainly not right. You've made gbt_bpchar_consistent\n> work identically to gbt_text_consistent, but it needs to implement\n> a test equivalent to bpchareq, ie ignore trailing spaces in both\n> inputs.\n>\n\nThanks for your explanation! bpchareq already ignores trailing spaces\nin both inputs. The problem is that bpchar in btree_gist does not call\nbpchareq; it always calls texteq. I tried the patch[1] and it works as\nexpected; however, I don't think it's a good way to fix this problem.\n\n> The minimum-effort fix would be to apply rtrim1 to both strings\n> in gbt_bpchar_consistent, but I wonder if we can improve on that\n> by pushing the ignore-trailing-spaces behavior further down.\n> I didn't look yet at whether gbt_var_consistent can support\n> any type-specific behavior.\n>\n\nAdding type-specific for gbt_var_consistent looks like more generally.\nFor example, for bpchar type, we should call bpchareq rather than texteq.\n\nDo I understand it right?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n[1]\ndiff --git a/contrib/btree_gist/btree_text.c b/contrib/btree_gist/btree_text.c\nindex 8019d11281..7f45ee6e3b 100644\n--- a/contrib/btree_gist/btree_text.c\n+++ b/contrib/btree_gist/btree_text.c\n@@ -121,16 +121,7 @@ gbt_bpchar_compress(PG_FUNCTION_ARGS)\n \t}\n \n \tif (entry->leafkey)\n-\t{\n-\n-\t\tDatum\t\td = DirectFunctionCall1(rtrim1, entry->key);\n-\t\tGISTENTRY\ttrim;\n-\n-\t\tgistentryinit(trim, d,\n-\t\t\t\t\t entry->rel, entry->page,\n-\t\t\t\t\t entry->offset, true);\n-\t\tretval = gbt_var_compress(&trim, &tinfo);\n-\t}\n+\t\tretval = gbt_var_compress(entry, &tinfo);\n \telse\n \t\tretval = entry;\n \n@@ -189,6 +180,11 @@ gbt_bpchar_consistent(PG_FUNCTION_ARGS)\n \t\ttinfo.eml = pg_database_encoding_max_length();\n \t}\n \n+\tr.lower = (bytea 
*) DatumGetPointer(DirectFunctionCall1(rtrim1,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tPointerGetDatum(r.lower)));\n+\tr.upper = (bytea *) DatumGetPointer(DirectFunctionCall1(rtrim1,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tPointerGetDatum(r.upper)));\n+\n \tretval = gbt_var_consistent(&r, trim, strategy, PG_GET_COLLATION(),\n \t\t\t\t\t\t\t\tGIST_LEAF(entry), &tinfo, fcinfo->flinfo);\n \tPG_RETURN_BOOL(retval);\n\n\n",
"msg_date": "Thu, 06 Jan 2022 18:50:45 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Thu, 06 Jan 2022 at 00:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The minimum-effort fix would be to apply rtrim1 to both strings\n>> in gbt_bpchar_consistent, but I wonder if we can improve on that\n>> by pushing the ignore-trailing-spaces behavior further down.\n>> I didn't look yet at whether gbt_var_consistent can support\n>> any type-specific behavior.\n\n> Adding type-specific for gbt_var_consistent looks like more generally.\n> For example, for bpchar type, we should call bpchareq rather than texteq.\n\nI looked at this and it does seem like it might work, as per attached\npatch. The one thing that is troubling me is that the opclass is set\nup to apply gbt_text_same, which is formally the Wrong Thing for bpchar,\nbecause the equality semantics shouldn't be quite the same. But we\ncould not fix that without a module version bump, which is annoying.\nI think that it might not be necessary to change it, because\n\n(1) There's no such thing as unique GIST indexes, so it should not\nmatter if the \"same\" function is a bit stricter than the datatype's\nnominal notion of equality. It's certainly okay for that to vary\nfrom the semantics applied by the consistent function --- GIST has\nno idea that the consistent function is allegedly testing equality.\n\n(2) If all the input values for a column have been coerced to the same\ntypmod, then it doesn't matter because two values that are equal after\nspace-stripping would be equal without space-stripping, too.\n\nHowever, (2) doesn't hold for an existing index that the user has failed\nto REINDEX, because then the index would contain some space-stripped\nvalues that same() will not say are equal to incoming new values.\nAgain, I think this doesn't matter much, but maybe I'm missing\nsomething. 
I've not really dug into what GIST uses the same()\nfunction for.\n\nIn any case, if we do need same() to implement the identical\nbehavior to bpchareq(), then the other solution isn't sufficient\neither.\n\nSo in short, it seems like we ought to do some compatibility testing\nand see if this code misbehaves at all with an index created by the\nold code. I don't particularly want to do that ... any volunteers?\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 06 Jan 2022 14:21:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "\nOn Fri, 07 Jan 2022 at 03:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I looked at this and it does seem like it might work, as per attached\n> patch. The one thing that is troubling me is that the opclass is set\n> up to apply gbt_text_same, which is formally the Wrong Thing for bpchar,\n> because the equality semantics shouldn't be quite the same. But we\n> could not fix that without a module version bump, which is annoying.\n> I think that it might not be necessary to change it, because\n>\n> (1) There's no such thing as unique GIST indexes, so it should not\n> matter if the \"same\" function is a bit stricter than the datatype's\n> nominal notion of equality. It's certainly okay for that to vary\n> from the semantics applied by the consistent function --- GIST has\n> no idea that the consistent function is allegedly testing equality.\n>\n> (2) If all the input values for a column have been coerced to the same\n> typmod, then it doesn't matter because two values that are equal after\n> space-stripping would be equal without space-stripping, too.\n>\n> However, (2) doesn't hold for an existing index that the user has failed\n> to REINDEX, because then the index would contain some space-stripped\n> values that same() will not say are equal to incoming new values.\n> Again, I think this doesn't matter much, but maybe I'm missing\n> something. I've not really dug into what GIST uses the same()\n> function for.\n>\n> In any case, if we do need same() to implement the identical\n> behavior to bpchareq(), then the other solution isn't sufficient\n> either.\n>\n> So in short, it seems like we ought to do some compatibility testing\n> and see if this code misbehaves at all with an index created by the\n> old code. I don't particularly want to do that ... any volunteers?\n>\n\nThanks for your patch, it looks good to me. I'm not sure how to test this.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 07 Jan 2022 14:26:31 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "Hello,\n07.01.2022 09:26, Japin Li wrote:\n> On Fri, 07 Jan 2022 at 03:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> In any case, if we do need same() to implement the identical\n> behavior to bpchareq(), then the other solution isn't sufficient\n> either.\n>\n> So in short, it seems like we ought to do some compatibility testing\n> and see if this code misbehaves at all with an index created by the\n> old code. I don't particularly want to do that ... any volunteers?\n>\n> Thanks for your patch, it looks good to me. I'm not sure how to test this.\nI will test it tomorrow.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 7 Jan 2022 12:00:30 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "07.01.2022 12:00, Alexander Lakhin wrote:\n> Hello,\n> 07.01.2022 09:26, Japin Li wrote:\n>> On Fri, 07 Jan 2022 at 03:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> In any case, if we do need same() to implement the identical\n>> behavior to bpchareq(), then the other solution isn't sufficient\n>> either.\n>>\n>> So in short, it seems like we ought to do some compatibility testing\n>> and see if this code misbehaves at all with an index created by the\n>> old code. I don't particularly want to do that ... any volunteers?\n>>\n>> Thanks for your patch, it looks good to me. I'm not sure how to test this.\n> I will test it tomorrow.\nI've made a simple test based on the regression test (see attachment)\nand can confirm that REINDEX after upgrade fixes the index contents.\n\nDifferences after upgrade but before REINDEX:\n--- /tmp/pgtest/char.out 2022-01-08 21:27:43.912274805 +0300\n+++ /tmp/pgtest/char.expected 2022-01-08 21:27:43.896274765 +0300\n@@ -40,8 +40,8 @@\n (2 rows)\n \n SELECT * FROM chartmp WHERE a BETWEEN '31a' AND '31c';\n- a \n-------\n- 31b0\n+ a \n+----------------------------------\n+ 31b0 \n (1 row)\n \nREINDEX INDEX charidx\nDifferences after upgrade and REINDEX:\nFiles /tmp/pgtest/char.out and /tmp/pgtest/char.expected are identical\n\n(Unfortunately for me) I found no anomalies related to gbt_text_same()\nwith an index created with the previous implementation. I've added\ndiagnostic logging that shows when gbt_text_same() returns 0 for keys\nthat are the equal but have different padding. So I've observed that\ngbt_text_same() returns incorrect result, but all the btree_gist tests\nstill pass. Moreover, unconditional \"*result = 0;\" in gbt_text_same()\ndoesn't affect the tests at all.\nI've found that gbt_text_same() is called by gistKeyIsEQ() from\nbackend/access/gist/gistutil.c, and made gistKeyIsEQ() return false any\ntime. 
And even with such change all check-world tests still pass (except\nfor isolation/predicate-gist that failed due to locking of pages split\ndifferently). So for now, I still don't know how to get incorrect query\nresults due to incorrect gistKeyIsEQ() behavior/excessive page splitting.\n\nBest regards,\nAlexander",
"msg_date": "Sat, 8 Jan 2022 22:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> (Unfortunately for me) I found no anomalies related to gbt_text_same()\n> with an index created with the previous implementation. I've added\n> diagnostic logging that shows when gbt_text_same() returns 0 for keys\n> that are the equal but have different padding. So I've observed that\n> gbt_text_same() returns incorrect result, but all the btree_gist tests\n> still pass. Moreover, unconditional \"*result = 0;\" in gbt_text_same()\n> doesn't affect the tests at all.\n> I've found that gbt_text_same() is called by gistKeyIsEQ() from\n> backend/access/gist/gistutil.c, and made gistKeyIsEQ() return false any\n> time. And even with such change all check-world tests still pass (except\n> for isolation/predicate-gist that failed due to locking of pages split\n> differently). So for now, I still don't know how to get incorrect query\n> results due to incorrect gistKeyIsEQ() behavior/excessive page splitting.\n\nYeah, if that's the only use-case then it's pretty easy to see that\nan overly strict equality test isn't going to hurt us much. At worst\nit'll cause the index to be a little bit inefficiently stored due to\nunnecessary node splits. Even then, that won't happen much in normal\nuse, since the discrepancy could only arise once in the lifespan of\nan index node (when it first sees a new-style entry that could have\nbeen considered equal to the old-style value).\n\nSo I think this solution will work, and I'll go ahead and push it.\nThanks for testing! (and for the original report ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Jan 2022 14:07:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Index-only scan for btree_gist turns bpchar to char"
}
]
[
{
"msg_contents": "On master, doc/src/sgml/biblio.sgml has a biblioentry for a pdf from ISO:\n\n\"Information technology — Database languages — SQL Technical Reports —\nPart 6: SQL support for JavaScript Object Notation (JSON)\"\n\nThat pdf was a 2017 edition but the url now points to .zip that no \nlonger exists.\n\nThe replacement is a ~200 euro pdf (2021). I'd be thankful if someone \nwould send the pdf to me; maybe I can update my JSON tests.\n\nAnd we should remove that entry from the bibliography (or have it point \nto the new page [1]).\n\n\nErik Rijkers\n\n\n[1] https://www.iso.org/standard/78937.html\n\n\n",
"msg_date": "Tue, 4 Jan 2022 18:10:07 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "biblio.sgml dead link"
},
{
"msg_contents": "On Tue, Jan 04, 2022 at 06:10:07PM +0100, Erik Rijkers wrote:\n> The replacement is a ~200 euro pdf (2021). I'd be thankful if someone would\n> send the pdf to me; maybe I can update my JSON tests.\n> \n> And we should remove that entry from the bibliography (or have it point to\n> the new page [1]).\n\nRemoving the entry seems a bit overdoing it to me, and updating to a\npaywall does not sound completely right to me either. Another thing\nthat we could do is to just remove the link, but keep its reference.\n\nAny thoughts from others?\n--\nMichael",
"msg_date": "Wed, 5 Jan 2022 10:26:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: biblio.sgml dead link"
},
{
"msg_contents": "> On 5 Jan 2022, at 02:26, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Jan 04, 2022 at 06:10:07PM +0100, Erik Rijkers wrote:\n>> The replacement is a ~200 euro pdf (2021). I'd be thankful if someone would\n>> send the pdf to me; maybe I can update my JSON tests.\n>> \n>> And we should remove that entry from the bibliography (or have it point to\n>> the new page [1]).\n> \n> Removing the entry seems a bit overdoing it to me, and updating to a\n> paywall does not sound completely right to me either. Another thing\n> that we could do is to just remove the link, but keep its reference.\n> \n> Any thoughts from others?\n\nWe definitely shouldn't remove it, it's referenced from functions-json.html and\nthat IMO adds value.\n\nI think we should remove the link, not because it costs money but because it\ncan be purchased from several places, and choosing one to \"favor\" seems to\ninvite criticism. Kind of how we don't link to an online store for buying the\nother books.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 5 Jan 2022 21:56:26 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: biblio.sgml dead link"
},
{
"msg_contents": "On Wed, Jan 05, 2022 at 09:56:26PM +0100, Daniel Gustafsson wrote:\n> I think we should remove the link, not because it costs money but because it\n> can be purchased from several places, and choosing one to \"favor\" seems to\n> invite criticism. Kind of how we don't link to an online store for buying the\n> other books.\n\nYeah, that's my feeling as well. So done this way across the board.\n--\nMichael",
"msg_date": "Thu, 6 Jan 2022 11:43:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: biblio.sgml dead link"
}
]
[
{
"msg_contents": "Hi,\n\nFor genam.c:\n\n+ UseDirtyCatalogSnapshot = dirtysnap;\n+\nDoes the old value of UseDirtyCatalogSnapshot need to be restored at the\nend of the func ?\n\n+systable_recheck_tuple(SysScanDesc sysscan, HeapTuple tup, bool dirtysnap)\n\nConsidering that parameter dirtysnap is a bool, I think it should be named\nisdirtysnap so that its meaning can be distinguished from:\n\n+ Snapshot dirtySnapshot;\n\n+ UseDirtyCatalogSnapshot = true;\n+\n+ dirtySnapshot = GetCatalogSnapshot(RelationGetRelid(*depRel));\n\nI tend to think that passing usedirtysnap (bool parameter)\nto GetCatalogSnapshot() would be more flexible than setting global variable.\n\nCheers",
"msg_date": "Tue, 4 Jan 2022 15:04:46 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch to avoid orphaned dependencies"
}
]
[
{
"msg_contents": "Hi,\n\nPostgres server emits a message at DEBUG1 level when it skips a\ncheckpoint. At times, developers might be surprised after figuring out\nfrom server logs that there were no checkpoints happening at all\nduring a certain period of time when DEBUG1 messages aren't captured.\nHow about emitting the message at LOG level if log_checkpoints is set?\nPatch attached.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Wed, 5 Jan 2022 10:24:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Emit \"checkpoint skipped because system is idle\" message at LOG level\n if log_checkpoints is set"
},
{
"msg_contents": "On Wed, Jan 5, 2022 at 10:24 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Postgres server emits a message at DEBUG1 level when it skips a\n> checkpoint. At times, developers might be surprised after figuring out\n> from server logs that there were no checkpoints happening at all\n> during a certain period of time when DEBUG1 messages aren't captured.\n> How about emitting the message at LOG level if log_checkpoints is set?\n> Patch attached.\n>\n> Thoughts?\n\n+1 to convert to LOG when log_checkpoints is set.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jan 2022 10:45:06 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Emit \"checkpoint skipped because system is idle\" message at LOG\n level if log_checkpoints is set"
},
{
"msg_contents": "On Wed, Jan 05, 2022 at 10:45:06AM +0530, Dilip Kumar wrote:\n> On Wed, Jan 5, 2022 at 10:24 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Postgres server emits a message at DEBUG1 level when it skips a\n> > checkpoint. At times, developers might be surprised after figuring out\n> > from server logs that there were no checkpoints happening at all\n> > during a certain period of time when DEBUG1 messages aren't captured.\n> > How about emitting the message at LOG level if log_checkpoints is set?\n> > Patch attached.\n> \n> +1 to convert to LOG when log_checkpoints is set.\n\nI think it would be odd to write logs of increased severity, for the case where\nwe did not do anything. I think it really is a debug log.\n\nI don't think the log level should be changed to avoid \"developer\" confusion,\nas you said (I'm not sure if you mean a postgres developer or an application\ndeveloper, though).\n\nIs there any evidence that this has caused user confusion in the last 4 years ?\n\n|commit 6ef2eba3f57f17960b7cd4958e18aa79e357de2f\n|Author: Andres Freund <andres@anarazel.de>\n|Date: Thu Dec 22 11:31:50 2016 -0800\n|\n| Skip checkpoints, archiving on idle systems.\n\nNote that logging a message may not be benign ; I think it could cause the\ndisks to spin up, that would othewise have been in power saving mode,\nespecially if you log to syslog, which can issue fsync. Also, part of the\nargument for enabling log_checkpoint by default was that a small, quiescent\ninstance would not write logs every 5 minutes.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 5 Jan 2022 17:18:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Emit \"checkpoint skipped because system is idle\" message at LOG\n level if log_checkpoints is set"
},
{
"msg_contents": "At Wed, 5 Jan 2022 17:18:06 -0600, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Wed, Jan 05, 2022 at 10:45:06AM +0530, Dilip Kumar wrote:\n> > On Wed, Jan 5, 2022 at 10:24 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Postgres server emits a message at DEBUG1 level when it skips a\n> > > checkpoint. At times, developers might be surprised after figuring out\n> > > from server logs that there were no checkpoints happening at all\n> > > during a certain period of time when DEBUG1 messages aren't captured.\n> > > How about emitting the message at LOG level if log_checkpoints is set?\n> > > Patch attached.\n> > \n> > +1 to convert to LOG when log_checkpoints is set.\n> \n> I think it would be odd to write logs of increased severity, for the case where\n> we did not do anything. I think it really is a debug log.\n> \n> I don't think the log level should be changed to avoid \"developer\" confusion,\n> as you said (I'm not sure if you mean a postgres developer or an application\n> developer, though).\n> \n> Is there any evidence that this has caused user confusion in the last 4 years ?\n> \n> |commit 6ef2eba3f57f17960b7cd4958e18aa79e357de2f\n> |Author: Andres Freund <andres@anarazel.de>\n> |Date: Thu Dec 22 11:31:50 2016 -0800\n> |\n> | Skip checkpoints, archiving on idle systems.\n> \n> Note that logging a message may not be benign ; I think it could cause the\n> disks to spin up, that would othewise have been in power saving mode,\n> especially if you log to syslog, which can issue fsync. Also, part of the\n> argument for enabling log_checkpoint by default was that a small, quiescent\n> instance would not write logs every 5 minutes.\n\nAgreed. -1 to just raising elevel of the message.\n\nIf someone keen to show some debug messages, it is viable for\narbitrary messages by lowering log_min_messages then inserting a\ncustom filter to emit_log_hook. 
It invites some overhead on\nirrelevant processes, but that overhead would be avoidable with a\n*bit* dirty hack in the filter,\n\nWe might want to discuss more convenient or cleaner way to get the\nsame result.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Jan 2022 16:34:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Emit \"checkpoint skipped because system is idle\" message at\n LOG level if log_checkpoints is set"
},
{
"msg_contents": "On Thu, Jan 06, 2022 at 04:34:38PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 5 Jan 2022 17:18:06 -0600, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> > \n> > |commit 6ef2eba3f57f17960b7cd4958e18aa79e357de2f\n> > |Author: Andres Freund <andres@anarazel.de>\n> > |Date: Thu Dec 22 11:31:50 2016 -0800\n> > |\n> > | Skip checkpoints, archiving on idle systems.\n> > \n> > Note that logging a message may not be benign ; I think it could cause the\n> > disks to spin up, that would othewise have been in power saving mode,\n> > especially if you log to syslog, which can issue fsync. Also, part of the\n> > argument for enabling log_checkpoint by default was that a small, quiescent\n> > instance would not write logs every 5 minutes.\n> \n> Agreed. -1 to just raising elevel of the message.\n\n-1 too.\n\n> If someone keen to show some debug messages, it is viable for\n> arbitrary messages by lowering log_min_messages then inserting a\n> custom filter to emit_log_hook. It invites some overhead on\n> irrelevant processes, but that overhead would be avoidable with a\n> *bit* dirty hack in the filter,\n> \n> We might want to discuss more convenient or cleaner way to get the\n> same result.\n\nWe could add a checkpoint_skipped counter to pg_stat_bgwriter for instance.\n\n\n",
"msg_date": "Thu, 6 Jan 2022 18:58:14 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Emit \"checkpoint skipped because system is idle\" message at LOG\n level if log_checkpoints is set"
},
{
"msg_contents": "On Wed, Jan 05, 2022 at 10:45:06AM +0530, Dilip Kumar wrote:\n> +1 to convert to LOG when log_checkpoints is set.\n\nOn Thu, Jan 06, 2022 at 04:34:38PM +0900, Kyotaro Horiguchi wrote:\n> Agreed. -1 to just raising elevel of the message.\n\nOn Thu, Jan 06, 2022 at 06:58:14PM +0800, Julien Rouhaud wrote:\n> -1 too.\n> \n> > If someone keen to show some debug messages, it is viable for\n> > arbitrary messages by lowering log_min_messages then inserting a\n> > custom filter to emit_log_hook. It invites some overhead on\n> > irrelevant processes, but that overhead would be avoidable with a\n> > *bit* dirty hack in the filter,\n> > \n> > We might want to discuss more convenient or cleaner way to get the\n> > same result.\n> \n> We could add a checkpoint_skipped counter to pg_stat_bgwriter for instance.\n\n+1 (cc Melanie)\n\nBharath: there's no agreement that this behavior change is desirable, so I\nsuggest to close the CF entry.\n\nActually, I suggest to not immediately create CF entries; instead, wait to see\nif there's any objections or discussion. FWIW, I try to wait a day before\ncreating a CF entry, since the scope/goal/desirability of a thread can change\ndramatically. This avoids burdening reviewers with the task of later\ndiscussing whether it's okay to close a CF entry.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 Jan 2022 11:26:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Emit \"checkpoint skipped because system is idle\" message at LOG\n level if log_checkpoints is set"
}
]
[
{
"msg_contents": "Hi,\n\nthis is more a cosmetic concern, but anyway: running initdb gives this output:\n\n...\ncreating configuration files ... ok\nrunning bootstrap script ... ok\nperforming post-bootstrap initialization ... ok\nsyncing data to disk ... ok\n\ninitdb: warning: enabling \"trust\" authentication for local connections\nYou can change this by editing pg_hba.conf or using the option -A, or\n--auth-local and --auth-host, the next time you run initdb.\n...\n\nShouldn't there be a \".\" after \"authentication for local connections\"? Probably it should be like this:\ninitdb: warning: Enabling \"trust\" authentication for local connections.\n\ninitdb's output a few lines earlier gives this, which all close with a \"dot\" and start with upper case:\n\n\"The database cluster will be initialized with locale \"en_US.UTF-8\".\nThe default database encoding has accordingly been set to \"UTF8\".\nThe default text search configuration will be set to \"english\".\n\nData page checksums are disabled.\"\n\n\nRegards\nDaniel\n\n\n\n\n",
"msg_date": "Wed, 5 Jan 2022 18:48:57 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Are we missing a dot in initdb's output?"
},
{
"msg_contents": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com> writes:\n> initdb: warning: enabling \"trust\" authentication for local connections\n> You can change this by editing pg_hba.conf or using the option -A, or\n> --auth-local and --auth-host, the next time you run initdb.\n\n> Shouldn't there be a \".\" after \"authentication for local connections\"?\n\nMeh, I think this is mimicking our coding style for primary vs. detail\nmessages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Jan 2022 13:56:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing a dot in initdb's output?"
}
]
[
{
"msg_contents": "Commit 6ce16088b caused me to look at pgoutput.c's handling of\ncache invalidations, and I was pretty appalled by what I found.\n\n* rel_sync_cache_relation_cb does the wrong thing when called for\na cache flush (i.e., relid == 0). Instead of invalidating all\nRelationSyncCache entries as it should, it will do nothing.\n\n* When rel_sync_cache_relation_cb does invalidate an entry,\nit immediately zaps the entry->map structure, even though that\nmight still be in use (as per the adjacent comment that carefully\nexplains why this isn't safe). I'm not sure if this could lead\nto a dangling-pointer core dump, but it sure seems like it could\nlead to failing to translate tuples that are about to be sent.\n\n* Similarly, rel_sync_cache_publication_cb is way too eager to\nreset the pubactions flags, which would likely lead to failing\nto transmit changes that we should transmit.\n\nThe attached patch fixes these things, but I'm still pretty\nunhappy with the general design of the data structures in\npgoutput.c, because there is this weird random mishmash of\nstatic variables along with a palloc'd PGOutputData struct.\nThis cannot work if there are ever two active LogicalDecodingContexts\nin the same process. I don't think serial use of LogicalDecodingContexts\n(ie, destroy one and then make another) works very well either,\nbecause pgoutput_shutdown is a mere fig leaf that ignores all the\njunk the module previously made (in CacheMemoryContext no less).\nSo I wonder whether either of those scenarios is possible/supported/\nexpected to be needed in future.\n\nAlso ... maybe I'm not looking in the right place, but I do not\nsee anything anywhere in logical decoding that is taking any lock\non the relation being processed. How can that be safe? 
In\nparticular, how do we know that the data collected by get_rel_sync_entry\nisn't already stale by the time we return from the function?\nSetting replicate_valid = true at the bottom of the function would\noverwrite any notification we might have gotten from a syscache callback\nwhile reading catalog data.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 05 Jan 2022 17:11:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bugs in pgoutput.c"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Commit 6ce16088b caused me to look at pgoutput.c's handling of\n> cache invalidations, and I was pretty appalled by what I found.\n>\n> * rel_sync_cache_relation_cb does the wrong thing when called for\n> a cache flush (i.e., relid == 0). Instead of invalidating all\n> RelationSyncCache entries as it should, it will do nothing.\n>\n> * When rel_sync_cache_relation_cb does invalidate an entry,\n> it immediately zaps the entry->map structure, even though that\n> might still be in use (as per the adjacent comment that carefully\n> explains why this isn't safe). I'm not sure if this could lead\n> to a dangling-pointer core dump, but it sure seems like it could\n> lead to failing to translate tuples that are about to be sent.\n>\n> * Similarly, rel_sync_cache_publication_cb is way too eager to\n> reset the pubactions flags, which would likely lead to failing\n> to transmit changes that we should transmit.\n>\n> The attached patch fixes these things, but I'm still pretty\n> unhappy with the general design of the data structures in\n> pgoutput.c, because there is this weird random mishmash of\n> static variables along with a palloc'd PGOutputData struct.\n> This cannot work if there are ever two active LogicalDecodingContexts\n> in the same process. I don't think serial use of LogicalDecodingContexts\n> (ie, destroy one and then make another) works very well either,\n> because pgoutput_shutdown is a mere fig leaf that ignores all the\n> junk the module previously made (in CacheMemoryContext no less).\n> So I wonder whether either of those scenarios is possible/supported/\n> expected to be needed in future.\n>\n> Also ... maybe I'm not looking in the right place, but I do not\n> see anything anywhere in logical decoding that is taking any lock\n> on the relation being processed. 
How can that be safe?\n>\n\nWe don't need to acquire a lock on relation while decoding changes\nfrom WAL because it uses a historic snapshot to build a relcache entry\nand all the later changes to the rel are absorbed while decoding WAL.\n\nIt is important to not acquire a lock on user-defined relations during\ndecoding otherwise it could lead to deadlock as explained in the email\n[1].\n\n* Would it be better if we move all the initialization done by patch\nin get_rel_sync_entry() to a separate function as I expect future\npatches might need to reset more things?\n\n*\n * logical decoding callback calls - but invalidation events can come in\n- * *during* a callback if we access the relcache in the callback. Because\n- * of that we must mark the cache entry as invalid but not remove it from\n- * the hash while it could still be referenced, then prune it at a later\n- * safe point.\n- *\n- * Getting invalidations for relations that aren't in the table is\n- * entirely normal, since there's no way to unregister for an invalidation\n- * event. So we don't care if it's found or not.\n+ * *during* a callback if we do any syscache or table access in the\n+ * callback.\n\nAs we don't take locks on tables, can invalidation events be accepted\nduring table access? I could be missing something but I see relation.c\naccepts invalidation messages only when lock mode is not 'NoLock'.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Ks%2Bp8wDbzhDr7yMYEWDbWFRJAd_uOY-moikc%2Bzr9ER%2Bg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 6 Jan 2022 09:46:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also ... maybe I'm not looking in the right place, but I do not\n>> see anything anywhere in logical decoding that is taking any lock\n>> on the relation being processed. How can that be safe?\n\n> We don't need to acquire a lock on relation while decoding changes\n> from WAL because it uses a historic snapshot to build a relcache entry\n> and all the later changes to the rel are absorbed while decoding WAL.\n\nThat might be okay for the system catalog entries, but I don't see\nhow it prevents some other session from dropping the table entirely,\nthereby causing the on-disk storage to go away. Is it guaranteed\nthat logical decoding will never try to fetch any on-disk data?\n(I can sort of believe that that might be true, but there are scary\ncorner cases for toasted data, such as an UPDATE that carries forward\na pre-existing toast datum.)\n\n> * Would it be better if we move all the initialization done by patch\n> in get_rel_sync_entry() to a separate function as I expect future\n> patches might need to reset more things?\n\nDon't see that it helps particularly.\n\n> + * *during* a callback if we do any syscache or table access in the\n> + * callback.\n\n> As we don't take locks on tables, can invalidation events be accepted\n> during table access? I could be missing something but I see relation.c\n> accepts invalidation messages only when lock mode is not 'NoLock'.\n\nThe core point here is that you're assuming that NO code path taken\nduring logical decoding would try to take a lock. 
I don't believe it,\nat least not unless you can point me to some debugging cross-check that\nguarantees it.\n\nGiven that we're interested in historic not current snapshots, I can\nbuy that it might be workable to manage syscache invalidations totally\ndifferently than the way it's done in normal processing, in which case\n(*if* it's done like that) maybe no invals would need to be recognized\nwhile an output plugin is executing. But (a) the comment here is\nentirely wrong if that's so, and (b) I don't see anything in inval.c\nthat makes it work differently.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jan 2022 14:58:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 2:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That might be okay for the system catalog entries, but I don't see\n> how it prevents some other session from dropping the table entirely,\n> thereby causing the on-disk storage to go away. Is it guaranteed\n> that logical decoding will never try to fetch any on-disk data?\n> (I can sort of believe that that might be true, but there are scary\n> corner cases for toasted data, such as an UPDATE that carries forward\n> a pre-existing toast datum.)\n\nIf I'm not mistaken, logical decoding is only allowed to read data\nfrom system catalog tables or tables that are flagged as being \"like\nsystem catalog tables for logical decoding purposes only\". See\nRelationIsAccessibleInLogicalDecoding and\nRelationIsUsedAsCatalogTable. As far as the actual table data is\nconcerned, it has to be reconstructed solely from the WAL.\n\nI am not sure what locking is required here and am not taking a\nposition on that ... but it's definitely not the case that a logical\ndecoding plugin can decide to just read from any old table it likes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jan 2022 15:34:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> > + * *during* a callback if we do any syscache or table access in the\n> > + * callback.\n>\n> > As we don't take locks on tables, can invalidation events be accepted\n> > during table access? I could be missing something but I see relation.c\n> > accepts invalidation messages only when lock mode is not 'NoLock'.\n>\n> The core point here is that you're assuming that NO code path taken\n> during logical decoding would try to take a lock. I don't believe it,\n> at least not unless you can point me to some debugging cross-check that\n> guarantees it.\n>\n\nAFAIK, currently, there is no such debugging cross-check in locking\nAPIs but we can add one to ensure that we don't acquire lock on\nuser-defined tables during logical decoding. As pointed by Robert, I\nalso don't think accessing user tables will work during logical\ndecoding.\n\n> Given that we're interested in historic not current snapshots, I can\n> buy that it might be workable to manage syscache invalidations totally\n> differently than the way it's done in normal processing, in which case\n> (*if* it's done like that) maybe no invals would need to be recognized\n> while an output plugin is executing. But (a) the comment here is\n> entirely wrong if that's so, and (b) I don't see anything in inval.c\n> that makes it work differently.\n>\n\nI think we need invalidations to work in output plugin to ensure that\nthe RelationSyncEntry has correct information.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 7 Jan 2022 09:07:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Commit 6ce16088b caused me to look at pgoutput.c's handling of\n> cache invalidations, and I was pretty appalled by what I found.\n>\n> * rel_sync_cache_relation_cb does the wrong thing when called for\n> a cache flush (i.e., relid == 0). Instead of invalidating all\n> RelationSyncCache entries as it should, it will do nothing.\n>\n> * When rel_sync_cache_relation_cb does invalidate an entry,\n> it immediately zaps the entry->map structure, even though that\n> might still be in use (as per the adjacent comment that carefully\n> explains why this isn't safe). I'm not sure if this could lead\n> to a dangling-pointer core dump, but it sure seems like it could\n> lead to failing to translate tuples that are about to be sent.\n>\n> * Similarly, rel_sync_cache_publication_cb is way too eager to\n> reset the pubactions flags, which would likely lead to failing\n> to transmit changes that we should transmit.\n>\n\nAre you planning to proceed with this patch? AFAICS, this is good to\ngo. Yesterday, while debugging/analyzing one cfbot failure[1] with one\nof my colleagues for row filter patch [2], we have seen the problem\ndue to the exact reason (second reason) you outlined here. After using\nyour patch and adapting the row filter patch atop it we see problem\ngot fixed.\n\n[1] - https://cirrus-ci.com/task/5450648090050560?logs=test_world#L3975\n[2] - https://commitfest.postgresql.org/36/2906/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 29 Jan 2022 08:32:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "On Sat, Jan 29, 2022 at 8:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jan 6, 2022 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Commit 6ce16088b caused me to look at pgoutput.c's handling of\n> > cache invalidations, and I was pretty appalled by what I found.\n> >\n> > * rel_sync_cache_relation_cb does the wrong thing when called for\n> > a cache flush (i.e., relid == 0). Instead of invalidating all\n> > RelationSyncCache entries as it should, it will do nothing.\n> >\n> > * When rel_sync_cache_relation_cb does invalidate an entry,\n> > it immediately zaps the entry->map structure, even though that\n> > might still be in use (as per the adjacent comment that carefully\n> > explains why this isn't safe). I'm not sure if this could lead\n> > to a dangling-pointer core dump, but it sure seems like it could\n> > lead to failing to translate tuples that are about to be sent.\n> >\n> > * Similarly, rel_sync_cache_publication_cb is way too eager to\n> > reset the pubactions flags, which would likely lead to failing\n> > to transmit changes that we should transmit.\n> >\n>\n> Are you planning to proceed with this patch?\n>\n\nTom, is it okay for you if I go ahead with this patch after some testing?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Feb 2022 08:15:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Tom, is it okay for you if I go ahead with this patch after some testing?\n\nI've been too busy to get back to it, so sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Feb 2022 21:48:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "On Thu, Feb 3, 2022 at 8:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Tom, is it okay for you if I go ahead with this patch after some testing?\n>\n> I've been too busy to get back to it, so sure.\n>\n\nThanks. I have tested the patch by generating an invalidation message\nfor table DDL before accessing the syscache in\nlogicalrep_write_tuple(). I see that it correctly invalidates the\nentry and rebuilds it for the next operation. I couldn't come up with\nsome automatic test for it so used the debugger to test it. I have\nmade a minor change in one of the comments. I am planning to push this\ntomorrow unless there are comments or suggestions.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 3 Feb 2022 17:24:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bugs in pgoutput.c"
},
{
"msg_contents": "On Thu, Feb 3, 2022 at 5:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 3, 2022 at 8:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > Tom, is it okay for you if I go ahead with this patch after some testing?\n> >\n> > I've been too busy to get back to it, so sure.\n> >\n>\n> Thanks. I have tested the patch by generating an invalidation message\n> for table DDL before accessing the syscache in\n> logicalrep_write_tuple(). I see that it correctly invalidates the\n> entry and rebuilds it for the next operation. I couldn't come up with\n> some automatic test for it so used the debugger to test it. I have\n> made a minor change in one of the comments. I am planning to push this\n> tomorrow unless there are comments or suggestions.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Feb 2022 17:50:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bugs in pgoutput.c"
}
] |
[
{
"msg_contents": "I saw one (and then went looking and found some more) enum with a\ntrailing comma.\n\nThese are quite rare in the PG src, so I doubt they are intentional.\n\nPSA a patch to remove the trailing commas for all that I found.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 6 Jan 2022 10:56:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove trailing comma from enums"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 12:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> I saw one (and then went looking and found some more) enum with a\n> trailing comma.\n>\n> These are quite rare in the PG src, so I doubt they are intentional.\n>\n> PSA a patch to remove the trailing commas for all that I found.\n\n-1. I don't see the problem with C99 trailing commas. They avoid\nnoisy diff lines when patches add/remove items.\n\n\n",
"msg_date": "Thu, 6 Jan 2022 14:10:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing comma from enums"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jan 6, 2022 at 12:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>> These are quite rare in the PG src, so I doubt they are intentional.\n>> PSA a patch to remove the trailing commas for all that I found.\n\n> -1. I don't see the problem with C99 trailing commas. They avoid\n> noisy diff lines when patches add/remove items.\n\nI think they're rare because up till very recently we catered to\npre-C99 compilers that wouldn't accept them. There's not much\npoint in insisting on that now, though.\n\nPersonally I'm less excited than Thomas about trailing commas\nbeing good for reducing diff noise, mainly because I think\nthat \"add new entries at the end\" is an anti-pattern, and\nif you put new items where they logically belong then the\nproblem is much rarer. But I'm not going to argue against\ncommitters who want to do it like that, either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Jan 2022 20:23:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing comma from enums"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 12:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Thu, Jan 6, 2022 at 12:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >> These are quite rare in the PG src, so I doubt they are intentional.\n> >> PSA a patch to remove the trailing commas for all that I found.\n>\n> > -1. I don't see the problem with C99 trailing commas. They avoid\n> > noisy diff lines when patches add/remove items.\n>\n> I think they're rare because up till very recently we catered to\n> pre-C99 compilers that wouldn't accept them. There's not much\n> point in insisting on that now, though.\n>\n> Personally I'm less excited than Thomas about trailing commas\n> being good for reducing diff noise, mainly because I think\n> that \"add new entries at the end\" is an anti-pattern, and\n> if you put new items where they logically belong then the\n> problem is much rarer. But I'm not going to argue against\n> committers who want to do it like that, either.\n\nFWIW, the background of this was that one of these examples overlapped\nwith a feature currently in development and it just caused a waste of\neveryone's time by firstly \"fixing\" (removing) the extra comma and\nthen getting multiple code reviews saying the change was unrelated to\nthat feature and so having to remove that fix again. So I felt\nremoving all such commas at HEAD not only makes all the enums\nconsistent, but it may prevent similar time-wasting for others in the\nfuture.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 6 Jan 2022 12:52:50 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove trailing comma from enums"
},
{
"msg_contents": "At Thu, 6 Jan 2022 12:52:50 +1100, Peter Smith <smithpb2250@gmail.com> wrote in \n> On Thu, Jan 6, 2022 at 12:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > On Thu, Jan 6, 2022 at 12:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >> These are quite rare in the PG src, so I doubt they are intentional.\n> > >> PSA a patch to remove the trailing commas for all that I found.\n> >\n> > > -1. I don't see the problem with C99 trailing commas. They avoid\n> > > noisy diff lines when patches add/remove items.\n> >\n> > I think they're rare because up till very recently we catered to\n> > pre-C99 compilers that wouldn't accept them. There's not much\n> > point in insisting on that now, though.\n> >\n> > Personally I'm less excited than Thomas about trailing commas\n> > being good for reducing diff noise, mainly because I think\n> > that \"add new entries at the end\" is an anti-pattern, and\n> > if you put new items where they logically belong then the\n> > problem is much rarer. But I'm not going to argue against\n> > committers who want to do it like that, either.\n> \n> FWIW, the background of this was that one of these examples overlapped\n> with a feature currently in development and it just caused a waste of\n> everyone's time by firstly \"fixing\" (removing) the extra comma and\n> then getting multiple code reviews saying the change was unrelated to\n> that feature and so having to remove that fix again. So I felt\n> removing all such commas at HEAD not only makes all the enums\n> consistent, but it may prevent similar time-wasting for others in the\n> future.\n\nI don't know where the above conversation took place, but it seems to\nme that the first patch is not significant for reviewing and the last\npatch seems to be just a waste of time even premising the first patch\nsurvives.\n\nI don't care whether the last item of an enum has a trailing comma or\nnot. 
(Or I like comma-less generally but I understand Thomas'\nopinion.) I think one may take either way if need to modify the lines\ninvolving such lines. But mildly object to make a change just to fix\nthem.\n\n# Also, I don't want to see a comma-battle break out at the end of\n# an enum though...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Jan 2022 17:20:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove trailing comma from enums"
}
] |
[
{
"msg_contents": "Hi all,\n\n$subject is taken care of by Bruce every year, but this has not been\ndone yet. That's one run of src/tools/copyright.pl on HEAD (as of the\nattached) combined with updates of ./doc/src/sgml/legal.sgml and\n./COPYRIGHT in the back-branches, so that's straight-forward.\n\nBruce, are you planning to refresh the branches?\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 6 Jan 2022 14:48:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Updating Copyright notices to 2022?"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 02:48:05PM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> $subject is taken care of by Bruce every year, but this has not been\n> done yet. That's one run of src/tools/copyright.pl on HEAD (as of the\n> attached) combined with updates of ./doc/src/sgml/legal.sgml and\n> ./COPYRIGHT in the back-branches, so that's straight-forward.\n> \n> Bruce, are you planning to refresh the branches?\n\nI like to be current on my email before I do it, and I have been\ncompleting a Debian upgrade this week so am behind. If you would like\nto do it, please go ahead. If not, I will try to do it before Sunday.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 7 Jan 2022 16:57:11 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Updating Copyright notices to 2022?"
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 04:57:11PM -0500, Bruce Momjian wrote:\n> I like to be current on my email before I do it, and I have been\n> completing a Debian upgrade this week so am behind. If you would like\n> to do it, please go ahead. If not, I will try to do it before Sunday.\n\nThanks for the update. There is no rush here, so waiting for a couple\nof days won't change anything. And historically, you are the one who\nhas always done the change :)\n--\nMichael",
"msg_date": "Sat, 8 Jan 2022 08:51:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Updating Copyright notices to 2022?"
},
{
"msg_contents": "On Sat, Jan 8, 2022 at 08:51:08AM +0900, Michael Paquier wrote:\n> On Fri, Jan 07, 2022 at 04:57:11PM -0500, Bruce Momjian wrote:\n> > I like to be current on my email before I do it, and I have been\n> > completing a Debian upgrade this week so am behind. If you would like\n> > to do it, please go ahead. If not, I will try to do it before Sunday.\n> \n> Thanks for the update. There is no rush here, so waiting for a couple\n> of days won't change anything. And historically, you are the one who\n> has always done the change :)\n\nDone, sorry for the delay.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 7 Jan 2022 19:06:06 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Updating Copyright notices to 2022?"
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 07:06:06PM -0500, Bruce Momjian wrote:\n> Done, sorry for the delay.\n\nNo problem. Thanks!\n--\nMichael",
"msg_date": "Sat, 8 Jan 2022 14:13:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Updating Copyright notices to 2022?"
}
] |
[
{
"msg_contents": "Dear all\n\nWhen ingesting mobility (IoT) data into MobilityDB\nhttps://mobilitydb.com/\nwe transform very wide (2K attributes) car mobility data of high frequence\n(every tenth of a second) from flat format (e.g. CSV) into MobilityDB\nformat in which there is a single record per trip and each of the signals\nis transformed into a temporal attribute (tbool, tint, tfloat, ttext,\ntgeompoint, tgeogpoint), which are temporal extensions of the corresponding\nPostgreSQL/PostGIS base types (bool, int, float, text, geometry,\ngeography). All temporal types are stored using extended format, e.g.,\n CREATE TYPE tfloat (\n internallength = variable,\n [...]\n storage = extended,\n alignment = double,\n [...]\n );\n\nGiven that each temporal value can be very wide (on average 30K\ntimestamped points/floats/text/... per trip) our first question is\n* Is extended the right storage for this ?\n\nOur second question is how all the 2K temporal attributes are stored, which\nmay be\n* on a single table space\n* in one table space per attribute\nwhich in other words, relates to the question row vs column storage.\n\nMany thanks for your insight\n\nEsteban",
"msg_date": "Thu, 6 Jan 2022 16:05:20 +0100",
"msg_from": "Esteban Zimanyi <esteban.zimanyi@ulb.be>",
"msg_from_op": true,
"msg_subject": "Storage for multiple variable-length attributes in a single row"
},
{
"msg_contents": "Dear all\n\nMay I kindly ask your insight about a question I posted 1 month ago and for\nwhich I never received any answer ?\n\nMany thanks\n\nOn Thu, Jan 6, 2022 at 4:05 PM Esteban Zimanyi <esteban.zimanyi@ulb.be>\nwrote:\n\n> Dear all\n>\n> When ingesting mobility (IoT) data into MobilityDB\n> https://mobilitydb.com/\n> we transform very wide (2K attributes) car mobility data of high frequence\n> (every tenth of a second) from flat format (e.g. CSV) into MobilityDB\n> format in which there is a single record per trip and each of the signals\n> is transformed into a temporal attribute (tbool, tint, tfloat, ttext,\n> tgeompoint, tgeogpoint), which are temporal extensions of the corresponding\n> PostgreSQL/PostGIS base types (bool, int, float, text, geometry,\n> geography). All temporal types are stored using extended format, e.g.,\n> CREATE TYPE tfloat (\n> internallength = variable,\n> [...]\n> storage = extended,\n> alignment = double,\n> [...]\n> );\n>\n> Given that each temporal value can be very wide (on average 30K\n> timestamped points/floats/text/... per trip) our first question is\n> * Is extended the right storage for this ?\n>\n> Our second question is how all the 2K temporal attributes are stored,\n> which may be\n> * on a single table space\n> * in one table space per attribute\n> which in other words, relates to the question row vs column storage.\n>\n> Many thanks for your insight\n>\n> Esteban\n>\n>\n\n",
"msg_date": "Mon, 7 Feb 2022 16:52:22 +0100",
"msg_from": "Esteban Zimanyi <esteban.zimanyi@ulb.be>",
"msg_from_op": true,
"msg_subject": "Re: Storage for multiple variable-length attributes in a single row"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 8:44 AM Esteban Zimanyi <esteban.zimanyi@ulb.be>\nwrote:\n\n> May I kindly ask your insight about a question I posted 1 month ago and\n> for which I never received any answer ?\n>\n\n-hackers really isn't the correct place for usage questions like this -\neven if you are creating a custom type (why you are doing this is left\nunstated, I have no idea what it means to be a temporally extended version\nof boolean, etc...)\n\nYou should read up on how TOAST works in PostgreSQL since that is what\nhandles large length data values.\n\nIIUC your setup correctly, you are claiming to have 2k or more columns.\nThis well exceeds the limit for PostgreSQL.\nA \"tablespace\" is a particular functionality provided by the server. You\nare using \"table space\" in a different sense and I'm unsure exactly what\nyou mean. I presume \"cell\". PostgreSQL has row-oriented storage (our\ninternals documentation goes over this).\n\nI think your mention of mobilitydb also complicates receiving a useful\nresponse as this list is for the core project. 
That you can exceed the\ncolumn count limit suggests that your environment is enough different than\ncore that you should be asking there.\n\nYou will need to await someone else to specifically answer the extended\nstorage question though - but I suspect you've provided insufficient\ndetails in that regard.\n\nDavid J.",
"msg_date": "Mon, 7 Feb 2022 09:04:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage for multiple variable-length attributes in a single row"
},
{
"msg_contents": "Many thanks for your prompt reply David. Allow me then to restate the\nquestions, hoping that this better fits this mailing list.\n\nMobilityDB is a time-series extension to PostgreSQL/PostGIS in which\ntime-varying attributes (e.g., gear, GPS location of a car) are\nsemantically grouped into \"units\" (e.g., a trip of a car) and are stored as\ntemporal functions, e.g., a set of couples (integer, timestamptz) for gear\n(a temporal integer) or a set of triples (lon, lat, timestamptz) for the\nGPS location (a temporal point). All temporal types are stored using\nextended format, e.g.,\n CREATE TYPE tint (\n internallength = variable,\n [...]\n storage = extended,\n alignment = double,\n [...]\n );\nWhen ingesting mobility (IoT) data into MobilityDB we receive very wide (2K\nattributes) of high frequency (every tenth of a second) from flat format\n(e.g. CSV) and we need to store it in PostgreSQL tables using MobilityDB\ntemporal types. In the above scenario, the values of these temporal types\ncan be very wide (on average 30K timestamped couples/triples per trip).\n\nAs suggested by David, this goes beyond the \"traditional\" usage of\nPostgreSQL. Therefore my questions are\n* What is the suggested strategy to splitting these 2K attributes into\nvertically partitioned tables where the tables are linked by the primary\nkey (e.g. trip number in the example above). Are there any limitations/best\npractices in the number/size of TOASTED attributes that a table should\ncontain.\n* In each partitioned table containing N TOASTED attributes, given the\nabove requirements, are there any limitations/best practices in storing\nthem using extended storage or an alternative one such as external.\n\nMany thanks for your insight\n\nEsteban",
"msg_date": "Mon, 7 Feb 2022 18:06:43 +0100",
"msg_from": "Esteban Zimanyi <estebanzimanyi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage for multiple variable-length attributes in a single row"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 9:58 AM Esteban Zimanyi <estebanzimanyi@gmail.com>\nwrote:\n\n>\n> As suggested by David, this goes beyond the \"traditional\" usage of\n> PostgreSQL. Therefore my questions are\n> * What is the suggested strategy to splitting these 2K attributes into\n> vertically partitioned tables where the tables are linked by the primary\n> key (e.g. trip number in the example above). Are there any limitations/best\n> practices in the number/size of TOASTED attributes that a table should\n> contain.\n> * In each partitioned table containing N TOASTED attributes, given the\n> above requirements, are there any limitations/best practices in storing\n> them using extended storage or an alternative one such as external.\n>\n>\nFrankly, the best practice is \"don't have that many columns\". Since you\ndo, I posit that you are just going to have to make decisions (possibly\nwith experimentation) on your own. Or maybe ask around on a MobilityDB\nforum what people using that tool and having these kinds of data structures\ndo. From a core PostgreSQL perspective you've already deviated from the\nmodel structures that it was designed with in mind.\nI'm really confused that you'd want the data value itself to contain a\ntimestamp that, on a per-row basis, should be the same timestamp that every\nother value on the row has. Extracting the timestamp to it own column and\nusing simpler and atomic data types is how core PostgreSQL and the\nrelational model normalization recommend dealing with this situation. Then\nyou just break up the attributes of a similar nature into their own tables\nbased upon their shared nature. In almost all cases relying on \"main\"\nstorage.\n\nDavid J.",
"msg_date": "Mon, 7 Feb 2022 10:10:53 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage for multiple variable-length attributes in a single row"
},
{
"msg_contents": "On Mon, Feb 07, 2022 at 10:10:53AM -0700, David G. Johnston wrote:\n> On Mon, Feb 7, 2022 at 9:58 AM Esteban Zimanyi <estebanzimanyi@gmail.com>\n> wrote:\n> \n> >\n> > As suggested by David, this goes beyond the \"traditional\" usage of\n> > PostgreSQL. Therefore my questions are\n> > * What is the suggested strategy to splitting these 2K attributes into\n> > vertically partitioned tables where the tables are linked by the primary\n> > key (e.g. trip number in the example above). Are there any limitations/best\n> > practices in the number/size of TOASTED attributes that a table should\n> > contain.\n> > * In each partitioned table containing N TOASTED attributes, given the\n> > above requirements, are there any limitations/best practices in storing\n> > them using extended storage or an alternative one such as external.\n> >\n> >\n> Frankly, the best practice is \"don't have that many columns\". Since you\n> do, I posit that you are just going to have to make decisions (possibly\n> with experimentation) on your own. Or maybe ask around on a MobilityDB\n> forum what people using that tool and having these kinds of data structures\n> do. From a core PostgreSQL perspective you've already deviated from the\n> model structures that it was designed with in mind.\n> I'm really confused that you'd want the data value itself to contain a\n> timestamp that, on a per-row basis, should be the same timestamp that every\n> other value on the row has. Extracting the timestamp to it own column and\n> using simpler and atomic data types is how core PostgreSQL and the\n> relational model normalization recommend dealing with this situation. Then\n> you just break up the attributes of a similar nature into their own tables\n> based upon their shared nature. 
In almost all cases relying on \"main\"\n> storage.\n\nActually looking at the original example:\n\n> CREATE TYPE tint (\n> internallength = variable,\n> [...]\n> storage = extended,\n> alignment = double,\n> [...]\n> );\n\nI'm wondering if it's just some miscommunication here. If the tint data type\nonly needs to hold a timestamp and an int, I don't see why it would be\nvarlena at all.\n\nSo if a single tint can internally hold thousands of (int, timestamptz), a bit\nlike pgpointcloud, then having it by default external (so both possibly\nout-of-line and compressed) seems like a good idea, as you can definitely hit\nthe 8k boundary, it should compress nicely and you also avoid some quite high\ntuple header overhead.\n\n\n",
"msg_date": "Tue, 8 Feb 2022 01:19:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage for multiple variable-length attributes in a single row"
},
{
"msg_contents": "Dear David\n\nThere are two approaches for storing temporal information in a relational\ndatabase, explored since the 1980s following the work of Richard Snodgrass\nhttp://www2.cs.arizona.edu/~rts/publications.html\ntuple-timestamping vs attribute-timestamping. The SQL standard used the\ntuple-timestamping approach, but in MobilityDB we decided to use the\nattribute-timestamping approach. As you rightly pointed out,\ntuple-timestamping follows the traditional relational normalization theory.\n\nThe main advantage of the attribute timestamping for mobility data is that\nwe need only to store the changes of values for a temporal attribute. In\nthe example of gear for a car, even if we receive high-frequency\nobservations, there will be very little gear changes for a trip, while\nthere will be much more position changes. Therefore on MobilityDB we only\nstore the change of values (e.g., no change of position will be stored\nduring a red light or traffic jam), which constitutes a huge lossless\ncompression with respect to the raw format storing every observation in a\nsingle row. We have experimented 450% lossless compression for real IoT\ndata.\n\nIn addition, MobilityDB does all temporal operations and allows to\ndetermine the value of any temporal attribute at any timestamp (e.g., using\nlinear interpolation between observations for speed or GPS position),\nindependently of the actual stored observations.\n\nI hope this clarifies things a little.",
"msg_date": "Mon, 7 Feb 2022 18:42:47 +0100",
"msg_from": "Esteban Zimanyi <estebanzimanyi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage for multiple variable-length attributes in a single row"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems like the two functions ReplicationSlotsComputeRequiredLSN and\nReplicationSlotsComputeLogicalRestartLSN more or less does the same\nthing which makes me optimize (saving 40 LOC) it as attached. I'm\npretty much okay if it gets rejected on the grounds that it creates a\nlot of diff with the older versions and the new API may not look\nnicer, still I want to give it a try.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 6 Jan 2022 22:13:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Deduplicate min restart_lsn calculation code"
},
{
"msg_contents": "On 2022-Jan-06, Bharath Rupireddy wrote:\n\n> Hi,\n> \n> It seems like the two functions ReplicationSlotsComputeRequiredLSN and\n> ReplicationSlotsComputeLogicalRestartLSN more or less does the same\n> thing which makes me optimize (saving 40 LOC) it as attached. I'm\n> pretty much okay if it gets rejected on the grounds that it creates a\n> lot of diff with the older versions and the new API may not look\n> nicer, still I want to give it a try.\n> \n> Thoughts?\n\nHmm, it seems sensible to me. But I would not have the second boolean\nargument in the new function, and instead have the caller save the\nreturn value in a local variable to do the XLogSetReplicationSlotMinimumLSN\nstep separately. Then the new function API is not so strange.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n",
"msg_date": "Thu, 6 Jan 2022 15:24:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Deduplicate min restart_lsn calculation code"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 11:54 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jan-06, Bharath Rupireddy wrote:\n>\n> > Hi,\n> >\n> > It seems like the two functions ReplicationSlotsComputeRequiredLSN and\n> > ReplicationSlotsComputeLogicalRestartLSN more or less does the same\n> > thing which makes me optimize (saving 40 LOC) it as attached. I'm\n> > pretty much okay if it gets rejected on the grounds that it creates a\n> > lot of diff with the older versions and the new API may not look\n> > nicer, still I want to give it a try.\n> >\n> > Thoughts?\n>\n> Hmm, it seems sensible to me. But I would not have the second boolean\n> argument in the new function, and instead have the caller save the\n> return value in a local variable to do the XLogSetReplicationSlotMinimumLSN\n> step separately. Then the new function API is not so strange.\n\nThanks for taking a look at it. Here's the v2 patch, please review.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Fri, 7 Jan 2022 09:07:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Deduplicate min restart_lsn calculation code"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile checking the PQping code I noticed that pg_ctl does not rely on PQping\nsince commit f13ea95f9e4 (v10) so the attached patch removes a comment from\ninternal_ping().\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 06 Jan 2022 21:33:07 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": true,
"msg_subject": "fix libpq comment"
},
{
"msg_contents": "On Thu, Jan 06, 2022 at 09:33:07PM -0300, Euler Taveira wrote:\n> While checking the PQping code I noticed that pg_ctl does not rely on PQping\n> since commit f13ea95f9e4 (v10) so the attached patch removes a comment from\n> internal_ping().\n\nLooking at the area, the rest looks fine. So, applied as per your\nsuggestion. Thanks, Euler!\n--\nMichael",
"msg_date": "Fri, 7 Jan 2022 16:11:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix libpq comment"
}
] |
[
{
"msg_contents": "Refactor tar method of walmethods.c to rely on the compression method\n\nSince d62bcc8, the directory method of walmethods.c uses the compression\nmethod to determine which code path to take. The tar method, used by\npg_basebackup --format=t, was inconsistent regarding that, as it relied\non the compression level to check if no compression or gzip should be\nused. This commit makes the code more consistent as a whole in this\nfile, making the tar logic use a compression method rather than\nassigning COMPRESSION_NONE that would be ignored.\n\nThe options of pg_basebackup are planned to be reworked but we are not\nsure yet of the shape they should have as this has some dependency with\nthe integration of the server-side compression for base backups, so this\nis left out for the moment. This change has as benefit to make easier\nthe future integration of new compression methods for the tar method of\nwalmethods.c, for the client-side compression.\n\nReviewed-by: Georgios Kokolatos\nDiscussion: https://postgr.es/m/Yb3GEgWwcu4wZDuA@paquier.xyz\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/000f3adfdc4336df83777eba86ce48f36cb6c6e9\n\nModified Files\n--------------\nsrc/bin/pg_basebackup/pg_basebackup.c | 3 +-\nsrc/bin/pg_basebackup/walmethods.c | 57 ++++++++++++++++++++++-------------\n2 files changed, 38 insertions(+), 22 deletions(-)",
"msg_date": "Fri, 07 Jan 2022 04:49:48 +0000",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pgsql: Refactor tar method of walmethods.c to rely on the compression\n m"
},
{
"msg_contents": "Re: Michael Paquier\n> Refactor tar method of walmethods.c to rely on the compression method\n\nHi,\n\nsince about this commit, pg_wal.tar is no longer compressed at all:\n\n$ pg_basebackup -D foo --format=tar\n$ ls -l foo/\n-rw------- 1 cbe cbe 137152 7. Jan 15:37 backup_manifest\n-rw------- 1 cbe cbe 23606272 7. Jan 15:37 base.tar\n-rw------- 1 cbe cbe 16778752 7. Jan 15:37 pg_wal.tar\n\n$ pg_basebackup -D foogz --format=tar --gzip\n$ ls -l foogz\n-rw------- 1 cbe cbe 137152 7. Jan 15:37 backup_manifest\n-rw------- 1 cbe cbe 3073257 7. Jan 15:37 base.tar.gz\n-rw------- 1 cbe cbe 16779264 7. Jan 15:37 pg_wal.tar <-- should be pg_wal.tar.gz\n\nChristoph\n\n\n",
"msg_date": "Fri, 7 Jan 2022 15:41:16 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Refactor tar method of walmethods.c to rely on the\n compression m"
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 03:41:16PM +0100, Christoph Berg wrote:\n> since about this commit, pg_wal.tar is no longer compressed at all:\n\nThanks. That's a thinko coming from the fact that\nZ_DEFAULT_COMPRESSION is -1, combination possible when specifying only\n--gzip. So, fixed.\n--\nMichael",
"msg_date": "Sat, 8 Jan 2022 09:16:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Refactor tar method of walmethods.c to rely on the\n compression m"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen running the plcheck in Windows we get the following warning, it is\nvisible in the cfbots [1]:\n\nUse of uninitialized value $1 in concatenation (.) or string at\nsrc/tools/msvc/vcregress.pl line 350.\n\nThis points to mangle_plpython3 subroutine. The attached patch addresses\nthe problem.\n\n[1] http://commitfest.cputube.org/\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 7 Jan 2022 13:20:52 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix vcregress plpython3 warning"
},
{
"msg_contents": "\nOn 1/7/22 07:20, Juan José Santamaría Flecha wrote:\n> Hi,\n>\n> When running the plcheck in Windows we get the following warning, it\n> is visible in the cfbots [1]:\n>\n> Use of uninitialized value $1 in concatenation (.) or string at\n> src/tools/msvc/vcregress.pl <http://vcregress.pl> line 350.\n>\n> This points to mangle_plpython3 subroutine. The attached patch\n> addresses the problem.\n>\n> [1] http://commitfest.cputube.org/\n\n\nYeah, this code is not a model of clarity though. I had to think through\nit and I write quite a bit of perl. I would probably write it something\nlike this:\n\n\ns/EXTENSION (.*?)plpython2?u/EXTENSION $1plpython3u/g ;\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 7 Jan 2022 08:30:38 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 2:30 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> Yeah, this code is not a model of clarity though. I had to think through\n> it and I write quite a bit of perl. I would probably write it something\n> like this:\n>\n>\n> s/EXTENSION (.*?)plpython2?u/EXTENSION $1plpython3u/g ;\n>\n> Yeah, I had to do some testing to figure it out. Based on what\nregress-python3-mangle.mk does, I think it tries to ignore cases such as:\n\nDROP EXTENSION IF EXISTS plpython2u CASCADE;\n\nWhich that expression would match. Maybe use a couple of lines as in the\nmake file?\n\ns/EXTENSION plpython2?u/EXTENSION plpython3u/g\ns/EXTENSION ([^ ]*)_plpython2?u/EXTENSION \\$1_plpython3u/g\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 7 Jan 2022 14:56:24 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 2:56 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\ncopy-paste\n\n> s/EXTENSION plpython2?u/EXTENSION plpython3u/g\n> s/EXTENSION ([^ ]*)_plpython2?u/EXTENSION $1_plpython3u/g\n>",
"msg_date": "Fri, 7 Jan 2022 14:58:01 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "\nOn 1/7/22 08:56, Juan José Santamaría Flecha wrote:\n>\n> On Fri, Jan 7, 2022 at 2:30 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> Yeah, this code is not a model of clarity though. I had to think\n> through\n> it and I write quite a bit of perl. I would probably write it\n> something\n> like this:\n>\n>\n> s/EXTENSION (.*?)plpython2?u/EXTENSION $1plpython3u/g ;\n>\n> Yeah, I had to do some testing to figure it out. Based on\n> what regress-python3-mangle.mk <http://regress-python3-mangle.mk>\n> does, I think it tries to ignore cases such as:\n>\n> DROP EXTENSION IF EXISTS plpython2u CASCADE;\n>\n> Which that expression would match. Maybe use a couple of lines as in\n> the make file?\n>\n> s/EXTENSION plpython2?u/EXTENSION plpython3u/g\n> s/EXTENSION ([^ ]*)_plpython2?u/EXTENSION \\$1_plpython3u/g\n>\n>\n\nIn that case, just this should work:\n\n\ns/EXTENSION (\\S*?)plpython2?u/EXTENSION $1plpython3u/g ;\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 7 Jan 2022 09:24:39 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 3:24 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> In that case, just this should work:\n>\n> s/EXTENSION (\\S*?)plpython2?u/EXTENSION $1plpython3u/g ;\n>\n> LGTM.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 7 Jan 2022 15:41:10 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 3:24 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> In that case, just this should work:\n>\n> s/EXTENSION (\\S*?)plpython2?u/EXTENSION $1plpython3u/g ;\n>\n> Please find attached a patch for so. I have also open an item in the\ncommitfest:\n\nhttps://commitfest.postgresql.org/37/3507/\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 10 Jan 2022 12:51:00 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 12:51 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n> Please find attached a patch for so.\n>\nThe patch.\n\n>\n>\nRegards,\n>\n> Juan José Santamaría Flecha\n>",
"msg_date": "Mon, 10 Jan 2022 12:53:40 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "\nOn 1/10/22 06:53, Juan José Santamaría Flecha wrote:\n>\n> On Mon, Jan 10, 2022 at 12:51 PM Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n>\n> Please find attached a patch for so. \n>\n> The patch.\n>\n> \n>\n>\n\nPushed, and backpatched.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 10 Jan 2022 10:14:10 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix vcregress plpython3 warning"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 4:14 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> Pushed, and backpatched.\n>\n> Great, thanks.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 10 Jan 2022 16:26:38 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix vcregress plpython3 warning"
}
] |
[
{
"msg_contents": "to Dear Hackers,\n\nI decided to we need basic e-mail sender in command line when coding \nsomethings and asking one-line questions.\n\nThanks, Ali Koca",
"msg_date": "Fri, 7 Jan 2022 15:48:24 +0300",
"msg_from": "Ali Koca <kinetixcicocuk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Python Plain Text Sender"
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 03:48:24PM +0300, Ali Koca wrote:\n> to Dear Hackers,\n> \n> I decided to we need basic e-mail sender in command line when coding\n> somethings and asking one-line questions.\n\nWhy ? Are there mail clients for which this is hard to do ?\n\nI don't think a custom MUA/mail client is something we should implement nor\nmaintain. Unless it does something better than other mail clients (and doesn't\ndo anything worse than most).\n\nI think this is mostly an issue of user awareness. Providing a script to do\nthis won't help users who don't realize that their MUA is sending only HTML\ngobbledygook.\n\nBTW, why did you send the same email again 5 hours later ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 7 Jan 2022 11:36:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Python Plain Text Sender"
}
] |
[
{
"msg_contents": "to Dear Hackers,\n\nI decided to we need basic e-mail sender in command line when coding \nsomethings and asking one-line questions.\n\nAli Koca",
"msg_date": "Fri, 7 Jan 2022 19:55:03 +0300",
"msg_from": "Ali Koca <kinetixcicocuk@gmail.com>",
"msg_from_op": true,
"msg_subject": "None"
}
] |
[
{
"msg_contents": "Hi,\nIn contrib/pgcrypto/pgcrypto.c :\n\n err = px_combo_init(c, (uint8 *) VARDATA_ANY(key), klen, NULL, 0);\n\nNote: NULL is passed as iv.\n\nWhen combo_init() is called,\n\n if (ivlen > ivs)\n memcpy(ivbuf, iv, ivs);\n else\n memcpy(ivbuf, iv, ivlen);\n\nIt seems we need to consider the case of null being passed as iv for\nmemcpy() because of this:\n\n/usr/include/string.h:44:28: note: nonnull attribute specified here\n\nWhat do you think of the following patch ?\n\nCheers",
"msg_date": "Fri, 7 Jan 2022 16:32:01 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 04:32:01PM -0800, Zhihong Yu wrote:\n> In contrib/pgcrypto/pgcrypto.c :\n> \n> err = px_combo_init(c, (uint8 *) VARDATA_ANY(key), klen, NULL, 0);\n> \n> Note: NULL is passed as iv.\n> \n> When combo_init() is called,\n> \n> if (ivlen > ivs)\n> memcpy(ivbuf, iv, ivs);\n> else\n> memcpy(ivbuf, iv, ivlen);\n> \n> It seems we need to consider the case of null being passed as iv for\n> memcpy() because of this:\n> \n> /usr/include/string.h:44:28: note: nonnull attribute specified here\n\nI agree it's time to fix cases like this, given\nhttps://postgr.es/m/flat/20200904023648.GB3426768@rfd.leadboat.com. However,\nit should be one patch fixing all (or at least many) of them.\n\n> --- a/contrib/pgcrypto/px.c\n> +++ b/contrib/pgcrypto/px.c\n> @@ -198,10 +198,13 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned klen,\n> \tif (ivs > 0)\n> \t{\n> \t\tivbuf = palloc0(ivs);\n> -\t\tif (ivlen > ivs)\n> -\t\t\tmemcpy(ivbuf, iv, ivs);\n> -\t\telse\n> -\t\t\tmemcpy(ivbuf, iv, ivlen);\n> +\t\tif (iv != NULL)\n> +\t\t{\n> +\t\t\tif (ivlen > ivs)\n> +\t\t\t\tmemcpy(ivbuf, iv, ivs);\n> +\t\t\telse\n> +\t\t\t\tmemcpy(ivbuf, iv, ivlen);\n> +\t\t}\n> \t}\n\nIf someone were to pass NULL iv with nonzero ivlen, that will silently\nmalfunction. I'd avoid that risk by writing this way:\n\n--- a/contrib/pgcrypto/px.c\n+++ b/contrib/pgcrypto/px.c\n@@ -202,3 +202,3 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned klen,\n \t\t\tmemcpy(ivbuf, iv, ivs);\n-\t\telse\n+\t\telse if (ivlen > 0)\n \t\t\tmemcpy(ivbuf, iv, ivlen);\n\nThat also gives the compiler an additional optimization strategy.\n\n\n",
"msg_date": "Sat, 8 Jan 2022 17:52:02 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sat, Jan 8, 2022 at 5:52 PM Noah Misch <noah@leadboat.com> wrote:\n\n> On Fri, Jan 07, 2022 at 04:32:01PM -0800, Zhihong Yu wrote:\n> > In contrib/pgcrypto/pgcrypto.c :\n> >\n> > err = px_combo_init(c, (uint8 *) VARDATA_ANY(key), klen, NULL, 0);\n> >\n> > Note: NULL is passed as iv.\n> >\n> > When combo_init() is called,\n> >\n> > if (ivlen > ivs)\n> > memcpy(ivbuf, iv, ivs);\n> > else\n> > memcpy(ivbuf, iv, ivlen);\n> >\n> > It seems we need to consider the case of null being passed as iv for\n> > memcpy() because of this:\n> >\n> > /usr/include/string.h:44:28: note: nonnull attribute specified here\n>\n> I agree it's time to fix cases like this, given\n> https://postgr.es/m/flat/20200904023648.GB3426768@rfd.leadboat.com.\n> However,\n> it should be one patch fixing all (or at least many) of them.\n\n\n> > --- a/contrib/pgcrypto/px.c\n> > +++ b/contrib/pgcrypto/px.c\n> > @@ -198,10 +198,13 @@ combo_init(PX_Combo *cx, const uint8 *key,\n> unsigned klen,\n> > if (ivs > 0)\n> > {\n> > ivbuf = palloc0(ivs);\n> > - if (ivlen > ivs)\n> > - memcpy(ivbuf, iv, ivs);\n> > - else\n> > - memcpy(ivbuf, iv, ivlen);\n> > + if (iv != NULL)\n> > + {\n> > + if (ivlen > ivs)\n> > + memcpy(ivbuf, iv, ivs);\n> > + else\n> > + memcpy(ivbuf, iv, ivlen);\n> > + }\n> > }\n>\n> If someone were to pass NULL iv with nonzero ivlen, that will silently\n>\nHi,\nIf iv is NULL, none of the memcpy() would be called (based on my patch).\nCan you elaborate your suggestion in more detail ?\n\nPatch v2 is attached, covering more files.\n\nSince the referenced email was old, line numbers have changed.\nIt would be nice if an up-to-date list is provided in case more places\nshould be changed.\n\nCheers\n\n\n> malfunction. 
I'd avoid that risk by writing this way:\n>\n> --- a/contrib/pgcrypto/px.c\n> +++ b/contrib/pgcrypto/px.c\n> @@ -202,3 +202,3 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned\n> klen,\n> memcpy(ivbuf, iv, ivs);\n> - else\n> + else if (ivlen > 0)\n> memcpy(ivbuf, iv, ivlen);\n>\n> That also gives the compiler an additional optimization strategy.\n>",
"msg_date": "Sat, 8 Jan 2022 18:52:14 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sat, Jan 08, 2022 at 06:52:14PM -0800, Zhihong Yu wrote:\n> On Sat, Jan 8, 2022 at 5:52 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Fri, Jan 07, 2022 at 04:32:01PM -0800, Zhihong Yu wrote:\n\n> > I agree it's time to fix cases like this, given\n> > https://postgr.es/m/flat/20200904023648.GB3426768@rfd.leadboat.com. However,\n> > it should be one patch fixing all (or at least many) of them.\n\n> > > --- a/contrib/pgcrypto/px.c\n> > > +++ b/contrib/pgcrypto/px.c\n> > > @@ -198,10 +198,13 @@ combo_init(PX_Combo *cx, const uint8 *key,\n> > unsigned klen,\n> > > if (ivs > 0)\n> > > {\n> > > ivbuf = palloc0(ivs);\n> > > - if (ivlen > ivs)\n> > > - memcpy(ivbuf, iv, ivs);\n> > > - else\n> > > - memcpy(ivbuf, iv, ivlen);\n> > > + if (iv != NULL)\n> > > + {\n> > > + if (ivlen > ivs)\n> > > + memcpy(ivbuf, iv, ivs);\n> > > + else\n> > > + memcpy(ivbuf, iv, ivlen);\n> > > + }\n> > > }\n> >\n> > If someone were to pass NULL iv with nonzero ivlen, that will silently\n> >\n> Hi,\n> If iv is NULL, none of the memcpy() would be called (based on my patch).\n> Can you elaborate your suggestion in more detail ?\n\nOn further thought, I would write it this way:\n\n--- a/contrib/pgcrypto/px.c\n+++ b/contrib/pgcrypto/px.c\n@@ -202,3 +202,3 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned klen,\n \t\t\tmemcpy(ivbuf, iv, ivs);\n-\t\telse\n+\t\telse if (ivlen != 0)\n \t\t\tmemcpy(ivbuf, iv, ivlen);\n\nThat helps in two ways. First, if someone passes iv==NULL and ivlen!=0, my\nversion will tend to crash, but yours will treat that like ivlen==0. Since\nthis would be a programming error, crashing is better. 
Second, a compiler can\nopt to omit the \"ivlen != 0\" test from the generated assembly, because the\ncompiler can know that memcpy(any_value_a, any_value_b, 0) is a no-op.\n\n> Since the referenced email was old, line numbers have changed.\n> It would be nice if an up-to-date list is provided in case more places\n> should be changed.\n\nTo check whether you've gotten them all, configure with CC='gcc\n-fsanitize=undefined -fsanitize-undefined-trap-on-error' and run check-world.\n\n\n",
"msg_date": "Sat, 8 Jan 2022 19:11:00 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sat, Jan 8, 2022 at 7:11 PM Noah Misch <noah@leadboat.com> wrote:\n\n> On Sat, Jan 08, 2022 at 06:52:14PM -0800, Zhihong Yu wrote:\n> > On Sat, Jan 8, 2022 at 5:52 PM Noah Misch <noah@leadboat.com> wrote:\n> > > On Fri, Jan 07, 2022 at 04:32:01PM -0800, Zhihong Yu wrote:\n>\n> > > I agree it's time to fix cases like this, given\n> > > https://postgr.es/m/flat/20200904023648.GB3426768@rfd.leadboat.com.\n> However,\n> > > it should be one patch fixing all (or at least many) of them.\n>\n> > > > --- a/contrib/pgcrypto/px.c\n> > > > +++ b/contrib/pgcrypto/px.c\n> > > > @@ -198,10 +198,13 @@ combo_init(PX_Combo *cx, const uint8 *key,\n> > > unsigned klen,\n> > > > if (ivs > 0)\n> > > > {\n> > > > ivbuf = palloc0(ivs);\n> > > > - if (ivlen > ivs)\n> > > > - memcpy(ivbuf, iv, ivs);\n> > > > - else\n> > > > - memcpy(ivbuf, iv, ivlen);\n> > > > + if (iv != NULL)\n> > > > + {\n> > > > + if (ivlen > ivs)\n> > > > + memcpy(ivbuf, iv, ivs);\n> > > > + else\n> > > > + memcpy(ivbuf, iv, ivlen);\n> > > > + }\n> > > > }\n> > >\n> > > If someone were to pass NULL iv with nonzero ivlen, that will silently\n> > >\n> > Hi,\n> > If iv is NULL, none of the memcpy() would be called (based on my patch).\n> > Can you elaborate your suggestion in more detail ?\n>\n> On further thought, I would write it this way:\n>\n> --- a/contrib/pgcrypto/px.c\n> +++ b/contrib/pgcrypto/px.c\n> @@ -202,3 +202,3 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned\n> klen,\n> memcpy(ivbuf, iv, ivs);\n> - else\n> + else if (ivlen != 0)\n> memcpy(ivbuf, iv, ivlen);\n>\n> That helps in two ways. First, if someone passes iv==NULL and ivlen!=0, my\n> version will tend to crash, but yours will treat that like ivlen==0. Since\n> this would be a programming error, crashing is better. 
Second, a compiler\n> can\n> opt to omit the \"ivlen != 0\" test from the generated assembly, because the\n> compiler can know that memcpy(any_value_a, any_value_b, 0) is a no-op.\n>\n\nHi,\nUpdated patch is attached.\n\n\n>\n> > Since the referenced email was old, line numbers have changed.\n> > It would be nice if an up-to-date list is provided in case more places\n> > should be changed.\n>\n> To check whether you've gotten them all, configure with CC='gcc\n> -fsanitize=undefined -fsanitize-undefined-trap-on-error' and run\n> check-world.\n>",
"msg_date": "Sat, 8 Jan 2022 19:31:04 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On further thought, I would write it this way:\n\n> -\t\telse\n> +\t\telse if (ivlen != 0)\n> \t\t\tmemcpy(ivbuf, iv, ivlen);\n\nFWIW, I liked the \"ivlen > 0\" formulation better. They should be\nequivalent, because ivlen is unsigned, but it just seems like \"> 0\"\nis more natural.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Jan 2022 02:32:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sat, Jan 8, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Noah Misch <noah@leadboat.com> writes:\n> > On further thought, I would write it this way:\n>\n> > - else\n> > + else if (ivlen != 0)\n> > memcpy(ivbuf, iv, ivlen);\n>\n> FWIW, I liked the \"ivlen > 0\" formulation better. They should be\n> equivalent, because ivlen is unsigned, but it just seems like \"> 0\"\n> is more natural.\n>\n> regards, tom lane\n>\n\nPatch v4 is attached.\n\nCheers",
"msg_date": "Sun, 9 Jan 2022 04:37:32 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sun, Jan 09, 2022 at 04:37:32AM -0800, Zhihong Yu wrote:\n> On Sat, Jan 8, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Noah Misch <noah@leadboat.com> writes:\n> > > On further thought, I would write it this way:\n> >\n> > > - else\n> > > + else if (ivlen != 0)\n> > > memcpy(ivbuf, iv, ivlen);\n> >\n> > FWIW, I liked the \"ivlen > 0\" formulation better. They should be\n> > equivalent, because ivlen is unsigned, but it just seems like \"> 0\"\n> > is more natural.\n\nIf I were considering the one code site in isolation, I'd pick \"ivlen > 0\".\nBut of the four sites identified so far, three have signed length variables.\nSince we're likely to get more examples of this pattern, some signed and some\nunsigned, I'd rather use a style that does the optimal thing whether or not\nthe variable is signed. What do you think?\n\n> Patch v4 is attached.\n\nDoes this pass the test procedure shown upthread?\n\n\n",
"msg_date": "Sun, 9 Jan 2022 08:48:54 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sun, Jan 09, 2022 at 04:37:32AM -0800, Zhihong Yu wrote:\n>> On Sat, Jan 8, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> FWIW, I liked the \"ivlen > 0\" formulation better. They should be\n>>> equivalent, because ivlen is unsigned, but it just seems like \"> 0\"\n>>> is more natural.\n\n> If I were considering the one code site in isolation, I'd pick \"ivlen > 0\".\n> But of the four sites identified so far, three have signed length variables.\n\nOh, hmm. Unless we want to start changing those to unsigned, I agree\na not-equal test is a safer convention.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Jan 2022 11:51:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sun, Jan 9, 2022 at 8:48 AM Noah Misch <noah@leadboat.com> wrote:\n\n> On Sun, Jan 09, 2022 at 04:37:32AM -0800, Zhihong Yu wrote:\n> > On Sat, Jan 8, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Noah Misch <noah@leadboat.com> writes:\n> > > > On further thought, I would write it this way:\n> > >\n> > > > - else\n> > > > + else if (ivlen != 0)\n> > > > memcpy(ivbuf, iv, ivlen);\n> > >\n> > > FWIW, I liked the \"ivlen > 0\" formulation better. They should be\n> > > equivalent, because ivlen is unsigned, but it just seems like \"> 0\"\n> > > is more natural.\n>\n> If I were considering the one code site in isolation, I'd pick \"ivlen > 0\".\n> But of the four sites identified so far, three have signed length\n> variables.\n> Since we're likely to get more examples of this pattern, some signed and\n> some\n> unsigned, I'd rather use a style that does the optimal thing whether or not\n> the variable is signed. What do you think?\n>\n> > Patch v4 is attached.\n>\n> Does this pass the test procedure shown upthread?\n>\nHi,\nI installed gcc 4.9.3\n\nWhen I ran:\n./configure CFLAGS='-fsanitize=undefined\n-fsanitize-undefined-trap-on-error'\n\nI saw:\n\nconfigure:3977: $? = 0\nconfigure:3966: gcc -V >&5\ngcc: error: unrecognized command line option '-V'\ngcc: fatal error: no input files\ncompilation terminated.\nconfigure:3977: $? = 1\nconfigure:3966: gcc -qversion >&5\ngcc: error: unrecognized command line option '-qversion'\ngcc: fatal error: no input files\ncompilation terminated.\nconfigure:3977: $? = 1\nconfigure:3997: checking whether the C compiler works\nconfigure:4019: gcc -fsanitize=undefined -fsanitize-undefined-trap-on-error\n conftest.c >&5\ngcc: error: unrecognized command line option\n'-fsanitize-undefined-trap-on-error'\nconfigure:4023: $? 
= 1\nconfigure:4061: result: no\n\nI wonder if a higher version gcc is needed.\n\nFYI",
"msg_date": "Sun, 9 Jan 2022 12:38:23 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sun, Jan 9, 2022 at 12:38 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Sun, Jan 9, 2022 at 8:48 AM Noah Misch <noah@leadboat.com> wrote:\n>\n>> On Sun, Jan 09, 2022 at 04:37:32AM -0800, Zhihong Yu wrote:\n>> > On Sat, Jan 8, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > > Noah Misch <noah@leadboat.com> writes:\n>> > > > On further thought, I would write it this way:\n>> > >\n>> > > > - else\n>> > > > + else if (ivlen != 0)\n>> > > > memcpy(ivbuf, iv, ivlen);\n>> > >\n>> > > FWIW, I liked the \"ivlen > 0\" formulation better. They should be\n>> > > equivalent, because ivlen is unsigned, but it just seems like \"> 0\"\n>> > > is more natural.\n>>\n>> If I were considering the one code site in isolation, I'd pick \"ivlen >\n>> 0\".\n>> But of the four sites identified so far, three have signed length\n>> variables.\n>> Since we're likely to get more examples of this pattern, some signed and\n>> some\n>> unsigned, I'd rather use a style that does the optimal thing whether or\n>> not\n>> the variable is signed. What do you think?\n>>\n>> > Patch v4 is attached.\n>>\n>> Does this pass the test procedure shown upthread?\n>>\n> Hi,\n> I installed gcc 4.9.3\n>\n> When I ran:\n> ./configure CFLAGS='-fsanitize=undefined\n> -fsanitize-undefined-trap-on-error'\n>\n> I saw:\n>\n> configure:3977: $? = 0\n> configure:3966: gcc -V >&5\n> gcc: error: unrecognized command line option '-V'\n> gcc: fatal error: no input files\n> compilation terminated.\n> configure:3977: $? = 1\n> configure:3966: gcc -qversion >&5\n> gcc: error: unrecognized command line option '-qversion'\n> gcc: fatal error: no input files\n> compilation terminated.\n> configure:3977: $? = 1\n> configure:3997: checking whether the C compiler works\n> configure:4019: gcc -fsanitize=undefined\n> -fsanitize-undefined-trap-on-error conftest.c >&5\n> gcc: error: unrecognized command line option\n> '-fsanitize-undefined-trap-on-error'\n> configure:4023: $? 
= 1\n> configure:4061: result: no\n>\n> I wonder if a higher version gcc is needed.\n>\n> FYI\n>\n\nAfter installing gcc-11, ./configure passed (with 0003-memcpy-null.patch).\nIn the output of `make check-world`, I don't see `runtime error`.\nThough there was a crash (maybe specific to my machine):\n\nCore was generated by\n`/nfusr/dev-server/zyu/postgres/tmp_install/usr/local/pgsql/bin/postgres\n--singl'.\nProgram terminated with signal SIGILL, Illegal instruction.\n#0 0x000000000050642d in write_item.cold ()\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-325.el7_9.x86_64 nss-pam-ldapd-0.8.13-25.el7.x86_64\nsssd-client-1.16.5-10.el7_9.10.x86_64\n(gdb) bt\n#0 0x000000000050642d in write_item.cold ()\n#1 0x0000000000ba9d1b in write_relcache_init_file ()\n#2 0x0000000000bb58f7 in RelationCacheInitializePhase3 ()\n#3 0x0000000000bd5cb5 in InitPostgres ()\n#4 0x0000000000a0a9ea in PostgresMain ()\n\nFYI",
"msg_date": "Sun, 9 Jan 2022 13:27:27 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sun, Jan 9, 2022 at 1:27 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Sun, Jan 9, 2022 at 12:38 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Sun, Jan 9, 2022 at 8:48 AM Noah Misch <noah@leadboat.com> wrote:\n>>\n>>> On Sun, Jan 09, 2022 at 04:37:32AM -0800, Zhihong Yu wrote:\n>>> > On Sat, Jan 8, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> > > Noah Misch <noah@leadboat.com> writes:\n>>> > > > On further thought, I would write it this way:\n>>> > >\n>>> > > > - else\n>>> > > > + else if (ivlen != 0)\n>>> > > > memcpy(ivbuf, iv, ivlen);\n>>> > >\n>>> > > FWIW, I liked the \"ivlen > 0\" formulation better. They should be\n>>> > > equivalent, because ivlen is unsigned, but it just seems like \"> 0\"\n>>> > > is more natural.\n>>>\n>>> If I were considering the one code site in isolation, I'd pick \"ivlen >\n>>> 0\".\n>>> But of the four sites identified so far, three have signed length\n>>> variables.\n>>> Since we're likely to get more examples of this pattern, some signed and\n>>> some\n>>> unsigned, I'd rather use a style that does the optimal thing whether or\n>>> not\n>>> the variable is signed. What do you think?\n>>>\n>>> > Patch v4 is attached.\n>>>\n>>> Does this pass the test procedure shown upthread?\n>>>\n>> Hi,\n>> I installed gcc 4.9.3\n>>\n>> When I ran:\n>> ./configure CFLAGS='-fsanitize=undefined\n>> -fsanitize-undefined-trap-on-error'\n>>\n>> I saw:\n>>\n>> configure:3977: $? = 0\n>> configure:3966: gcc -V >&5\n>> gcc: error: unrecognized command line option '-V'\n>> gcc: fatal error: no input files\n>> compilation terminated.\n>> configure:3977: $? = 1\n>> configure:3966: gcc -qversion >&5\n>> gcc: error: unrecognized command line option '-qversion'\n>> gcc: fatal error: no input files\n>> compilation terminated.\n>> configure:3977: $? 
= 1\n>> configure:3997: checking whether the C compiler works\n>> configure:4019: gcc -fsanitize=undefined\n>> -fsanitize-undefined-trap-on-error conftest.c >&5\n>> gcc: error: unrecognized command line option\n>> '-fsanitize-undefined-trap-on-error'\n>> configure:4023: $? = 1\n>> configure:4061: result: no\n>>\n>> I wonder if a higher version gcc is needed.\n>>\n>> FYI\n>>\n>\n> After installing gcc-11, ./configure passed (with 0003-memcpy-null.patch).\n> In the output of `make check-world`, I don't see `runtime error`.\n> Though there was a crash (maybe specific to my machine):\n>\n> Core was generated by\n> `/nfusr/dev-server/zyu/postgres/tmp_install/usr/local/pgsql/bin/postgres\n> --singl'.\n> Program terminated with signal SIGILL, Illegal instruction.\n> #0  0x000000000050642d in write_item.cold ()\n> Missing separate debuginfos, use: debuginfo-install\n> glibc-2.17-325.el7_9.x86_64 nss-pam-ldapd-0.8.13-25.el7.x86_64\n> sssd-client-1.16.5-10.el7_9.10.x86_64\n> (gdb) bt\n> #0  0x000000000050642d in write_item.cold ()\n> #1  0x0000000000ba9d1b in write_relcache_init_file ()\n> #2  0x0000000000bb58f7 in RelationCacheInitializePhase3 ()\n> #3  0x0000000000bd5cb5 in InitPostgres ()\n> #4  0x0000000000a0a9ea in PostgresMain ()\n>\n> FYI\n>\nHi,\nEarlier I was using devtoolset-11 which had an `Illegal instruction` error.\n\nI compiled / installed gcc-11 from source (which took whole afternoon).\n`make check-world` passed with patch v3.\nIn tmp_install/log/install.log, I saw:\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n-fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n-I../../src/include -D_GNU_SOURCE -c -o path.o path.c\nrm -f libpgport.a\n\nCheers",
"msg_date": "Sun, 9 Jan 2022 18:45:09 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sun, Jan 9, 2022 at 6:45 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Sun, Jan 9, 2022 at 1:27 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Sun, Jan 9, 2022 at 12:38 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>>>\n>>>\n>>> On Sun, Jan 9, 2022 at 8:48 AM Noah Misch <noah@leadboat.com> wrote:\n>>>\n>>>> On Sun, Jan 09, 2022 at 04:37:32AM -0800, Zhihong Yu wrote:\n>>>> > On Sat, Jan 8, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> > > Noah Misch <noah@leadboat.com> writes:\n>>>> > > > On further thought, I would write it this way:\n>>>> > >\n>>>> > > > - else\n>>>> > > > + else if (ivlen != 0)\n>>>> > > > memcpy(ivbuf, iv, ivlen);\n>>>> > >\n>>>> > > FWIW, I liked the \"ivlen > 0\" formulation better. They should be\n>>>> > > equivalent, because ivlen is unsigned, but it just seems like \"> 0\"\n>>>> > > is more natural.\n>>>>\n>>>> If I were considering the one code site in isolation, I'd pick \"ivlen >\n>>>> 0\".\n>>>> But of the four sites identified so far, three have signed length\n>>>> variables.\n>>>> Since we're likely to get more examples of this pattern, some signed\n>>>> and some\n>>>> unsigned, I'd rather use a style that does the optimal thing whether or\n>>>> not\n>>>> the variable is signed. What do you think?\n>>>>\n>>>> > Patch v4 is attached.\n>>>>\n>>>> Does this pass the test procedure shown upthread?\n>>>>\n>>> Hi,\n>>> I installed gcc 4.9.3\n>>>\n>>> When I ran:\n>>> ./configure CFLAGS='-fsanitize=undefined\n>>> -fsanitize-undefined-trap-on-error'\n>>>\n>>> I saw:\n>>>\n>>> configure:3977: $? = 0\n>>> configure:3966: gcc -V >&5\n>>> gcc: error: unrecognized command line option '-V'\n>>> gcc: fatal error: no input files\n>>> compilation terminated.\n>>> configure:3977: $? = 1\n>>> configure:3966: gcc -qversion >&5\n>>> gcc: error: unrecognized command line option '-qversion'\n>>> gcc: fatal error: no input files\n>>> compilation terminated.\n>>> configure:3977: $? 
= 1\n>>> configure:3997: checking whether the C compiler works\n>>> configure:4019: gcc -fsanitize=undefined\n>>> -fsanitize-undefined-trap-on-error conftest.c >&5\n>>> gcc: error: unrecognized command line option\n>>> '-fsanitize-undefined-trap-on-error'\n>>> configure:4023: $? = 1\n>>> configure:4061: result: no\n>>>\n>>> I wonder if a higher version gcc is needed.\n>>>\n>>> FYI\n>>>\n>>\n>> After installing gcc-11, ./configure passed (with 0003-memcpy-null.patch).\n>> In the output of `make check-world`, I don't see `runtime error`.\n>> Though there was a crash (maybe specific to my machine):\n>>\n>> Core was generated by\n>> `/nfusr/dev-server/zyu/postgres/tmp_install/usr/local/pgsql/bin/postgres\n>> --singl'.\n>> Program terminated with signal SIGILL, Illegal instruction.\n>> #0 0x000000000050642d in write_item.cold ()\n>> Missing separate debuginfos, use: debuginfo-install\n>> glibc-2.17-325.el7_9.x86_64 nss-pam-ldapd-0.8.13-25.el7.x86_64\n>> sssd-client-1.16.5-10.el7_9.10.x86_64\n>> (gdb) bt\n>> #0 0x000000000050642d in write_item.cold ()\n>> #1 0x0000000000ba9d1b in write_relcache_init_file ()\n>> #2 0x0000000000bb58f7 in RelationCacheInitializePhase3 ()\n>> #3 0x0000000000bd5cb5 in InitPostgres ()\n>> #4 0x0000000000a0a9ea in PostgresMain ()\n>>\n>> FYI\n>>\n> Hi,\n> Earlier I was using devtoolset-11 which had an `Illegal instruction` error.\n>\n> I compiled / installed gcc-11 from source (which took whole afternoon).\n> `make check-world` passed with patch v3.\n> In tmp_install/log/install.log, I saw:\n>\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n> -fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n> -I../../src/include -D_GNU_SOURCE -c -o path.o 
path.c\n> rm -f libpgport.a\n>\n\nHi, Noah:\nPatch v3 passes `make check-world`\n\nCan you take another look ?",
"msg_date": "Mon, 10 Jan 2022 15:34:27 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 03:34:27PM -0800, Zhihong Yu wrote:\n> On Sun, Jan 9, 2022 at 6:45 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> > -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> > -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> > -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n> > -fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n> > -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\n> \n> Patch v3 passes `make check-world`\n\nThe patch uses the \"LENGTH_VAR != 0\" style in px.c, but it uses \"POINTER_VAR\n!= NULL\" style in the other files. Please use \"LENGTH_VAR != 0\" style in each\nplace you're changing.\n\nAssuming the next version looks good, I'll likely back-patch it to v10. Would\nanyone like to argue for a back-patch all the way to 9.2?\n\n\n",
"msg_date": "Wed, 12 Jan 2022 18:49:33 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 6:49 PM Noah Misch <noah@leadboat.com> wrote:\n\n> On Mon, Jan 10, 2022 at 03:34:27PM -0800, Zhihong Yu wrote:\n> > On Sun, Jan 9, 2022 at 6:45 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > > gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> > > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> > > -Wmissing-format-attribute -Wimplicit-fallthrough=3\n> -Wcast-function-type\n> > > -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard\n> > > -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n> > > -fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n> > > -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\n> >\n> > Patch v3 passes `make check-world`\n>\n> The patch uses the \"LENGTH_VAR != 0\" style in px.c, but it uses\n> \"POINTER_VAR\n> != NULL\" style in the other files. Please use \"LENGTH_VAR != 0\" style in\n> each\n> place you're changing.\n>\n> Assuming the next version looks good, I'll likely back-patch it to v10.\n> Would\n> anyone like to argue for a back-patch all the way to 9.2?\n>\nHi,\nPlease take a look at patch v5.\n\nCheers",
"msg_date": "Wed, 12 Jan 2022 19:08:23 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 7:08 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Wed, Jan 12, 2022 at 6:49 PM Noah Misch <noah@leadboat.com> wrote:\n>\n>> On Mon, Jan 10, 2022 at 03:34:27PM -0800, Zhihong Yu wrote:\n>> > On Sun, Jan 9, 2022 at 6:45 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> > > gcc -Wall -Wmissing-prototypes -Wpointer-arith\n>> > > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n>> > > -Wmissing-format-attribute -Wimplicit-fallthrough=3\n>> -Wcast-function-type\n>> > > -Wformat-security -fno-strict-aliasing -fwrapv\n>> -fexcess-precision=standard\n>> > > -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n>> > > -fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n>> > > -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\n>> >\n>> > Patch v3 passes `make check-world`\n>>\n>> The patch uses the \"LENGTH_VAR != 0\" style in px.c, but it uses\n>> \"POINTER_VAR\n>> != NULL\" style in the other files. Please use \"LENGTH_VAR != 0\" style in\n>> each\n>> place you're changing.\n>>\n>> Assuming the next version looks good, I'll likely back-patch it to v10.\n>> Would\n>> anyone like to argue for a back-patch all the way to 9.2?\n>>\n> Hi,\n> Please take a look at patch v5.\n>\n> Cheers\n>\nNoah:\nDo you have any more review comments ?\n\nThanks\n\nOn Wed, Jan 12, 2022 at 7:08 PM Zhihong Yu <zyu@yugabyte.com> wrote:On Wed, Jan 12, 2022 at 6:49 PM Noah Misch <noah@leadboat.com> wrote:On Mon, Jan 10, 2022 at 03:34:27PM -0800, Zhihong Yu wrote:\n> On Sun, Jan 9, 2022 at 6:45 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> > -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> > -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> > -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n> > -fsanitize-undefined-trap-on-error 
-I../../src/port -DFRONTEND\n> > -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\n> \n> Patch v3 passes `make check-world`\n\nThe patch uses the \"LENGTH_VAR != 0\" style in px.c, but it uses \"POINTER_VAR\n!= NULL\" style in the other files. Please use \"LENGTH_VAR != 0\" style in each\nplace you're changing.\n\nAssuming the next version looks good, I'll likely back-patch it to v10. Would\nanyone like to argue for a back-patch all the way to 9.2?Hi,Please take a look at patch v5.Cheers Noah:Do you have any more review comments ?Thanks",
"msg_date": "Thu, 13 Jan 2022 15:03:11 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Sun, Jan 09, 2022 at 06:45:09PM -0800, Zhihong Yu wrote:\n> On Sun, Jan 9, 2022 at 1:27 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > After installing gcc-11, ./configure passed (with 0003-memcpy-null.patch).\n> > In the output of `make check-world`, I don't see `runtime error`.\n\nThat's expected. With -fsanitize-undefined-trap-on-error, the program will\ngenerate SIGILL when UBSan detects undefined behavior. To get \"runtime error\"\nmessages in the postmaster log, drop -fsanitize-undefined-trap-on-error. Both\nways of running the tests have uses. -fsanitize-undefined-trap-on-error is\nbetter when you think the code is clean, because a zero \"make check-world\"\nexit status confirms the code is clean. Once you know the code is unclean in\nsome way, -fsanitize-undefined-trap-on-error is better for getting details.\n\n> > Though there was a crash (maybe specific to my machine):\n> >\n> > Core was generated by\n> > `/nfusr/dev-server/zyu/postgres/tmp_install/usr/local/pgsql/bin/postgres\n> > --singl'.\n> > Program terminated with signal SIGILL, Illegal instruction.\n> > #0 0x000000000050642d in write_item.cold ()\n> > Missing separate debuginfos, use: debuginfo-install\n> > glibc-2.17-325.el7_9.x86_64 nss-pam-ldapd-0.8.13-25.el7.x86_64\n> > sssd-client-1.16.5-10.el7_9.10.x86_64\n> > (gdb) bt\n> > #0 0x000000000050642d in write_item.cold ()\n> > #1 0x0000000000ba9d1b in write_relcache_init_file ()\n> > #2 0x0000000000bb58f7 in RelationCacheInitializePhase3 ()\n> > #3 0x0000000000bd5cb5 in InitPostgres ()\n> > #4 0x0000000000a0a9ea in PostgresMain ()\n\nThat is UBSan detecting undefined behavior. A successful patch version will\nfix write_item(), among many other places that are currently making\ncheck-world fail. I get the same when testing your v5 under \"gcc (Debian\n11.2.0-13) 11.2.0\". 
I used the same host as buildfarm member thorntail, and I\nconfigured like this:\n\n ./configure -C --with-lz4 --prefix=$HOME/sw/nopath/pghead --enable-tap-tests --enable-debug --enable-depend --enable-cassert CC='ccache gcc-11 -fsanitize=undefined -fsanitize-undefined-trap-on-error' CFLAGS='-O2 -funwind-tables'\n\n> Earlier I was using devtoolset-11 which had an `Illegal instruction` error.\n> \n> I compiled / installed gcc-11 from source (which took whole afternoon).\n> `make check-world` passed with patch v3.\n> In tmp_install/log/install.log, I saw:\n> \n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n> -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n> -fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n> -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\n> rm -f libpgport.a\n\nPerhaps this self-compiled gcc-11 is defective, being unable to detect the\ninstances of undefined behavior that other builds detect. If so, use the\n\"devtoolset-11\" gcc instead. You're also building without optimization; that\nmight be the problem.\n\n\n",
"msg_date": "Thu, 13 Jan 2022 20:09:40 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 8:09 PM Noah Misch <noah@leadboat.com> wrote:\n\n> On Sun, Jan 09, 2022 at 06:45:09PM -0800, Zhihong Yu wrote:\n> > On Sun, Jan 9, 2022 at 1:27 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > > After installing gcc-11, ./configure passed (with\n> 0003-memcpy-null.patch).\n> > > In the output of `make check-world`, I don't see `runtime error`.\n>\n> That's expected. With -fsanitize-undefined-trap-on-error, the program will\n> generate SIGILL when UBSan detects undefined behavior. To get \"runtime\n> error\"\n> messages in the postmaster log, drop -fsanitize-undefined-trap-on-error.\n> Both\n> ways of running the tests have uses. -fsanitize-undefined-trap-on-error is\n> better when you think the code is clean, because a zero \"make check-world\"\n> exit status confirms the code is clean. Once you know the code is unclean\n> in\n> some way, -fsanitize-undefined-trap-on-error is better for getting details.\n>\n> > > Though there was a crash (maybe specific to my machine):\n> > >\n> > > Core was generated by\n> > >\n> `/nfusr/dev-server/zyu/postgres/tmp_install/usr/local/pgsql/bin/postgres\n> > > --singl'.\n> > > Program terminated with signal SIGILL, Illegal instruction.\n> > > #0 0x000000000050642d in write_item.cold ()\n> > > Missing separate debuginfos, use: debuginfo-install\n> > > glibc-2.17-325.el7_9.x86_64 nss-pam-ldapd-0.8.13-25.el7.x86_64\n> > > sssd-client-1.16.5-10.el7_9.10.x86_64\n> > > (gdb) bt\n> > > #0 0x000000000050642d in write_item.cold ()\n> > > #1 0x0000000000ba9d1b in write_relcache_init_file ()\n> > > #2 0x0000000000bb58f7 in RelationCacheInitializePhase3 ()\n> > > #3 0x0000000000bd5cb5 in InitPostgres ()\n> > > #4 0x0000000000a0a9ea in PostgresMain ()\n>\n> That is UBSan detecting undefined behavior. A successful patch version\n> will\n> fix write_item(), among many other places that are currently making\n> check-world fail. I get the same when testing your v5 under \"gcc (Debian\n> 11.2.0-13) 11.2.0\". 
I used the same host as buildfarm member thorntail,\n> and I\n> configured like this:\n>\n> ./configure -C --with-lz4 --prefix=$HOME/sw/nopath/pghead\n> --enable-tap-tests --enable-debug --enable-depend --enable-cassert\n> CC='ccache gcc-11 -fsanitize=undefined -fsanitize-undefined-trap-on-error'\n> CFLAGS='-O2 -funwind-tables'\n>\n> > Earlier I was using devtoolset-11 which had an `Illegal instruction`\n> error.\n> >\n> > I compiled / installed gcc-11 from source (which took whole afternoon).\n> > `make check-world` passed with patch v3.\n> > In tmp_install/log/install.log, I saw:\n> >\n> > gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> > -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> > -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard\n> > -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n> > -fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n> > -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\n> > rm -f libpgport.a\n>\n> Perhaps this self-compiled gcc-11 is defective, being unable to detect the\n> instances of undefined behavior that other builds detect. If so, use the\n> \"devtoolset-11\" gcc instead. You're also building without optimization;\n> that\n> might be the problem.\n>\n\nI tried both locally built gcc-11 and devtoolset-11 with configure command\ncopied from above.\n`make world` failed in both cases with:\n\nperforming post-bootstrap initialization ... 
sh: line 1: 24714 Illegal\ninstruction (core dumped)\n\".../postgres/tmp_install/.../postgres/bin/postgres\" --single -F -O -j -c\nsearch_path=pg_catalog -c exit_on_error=true -c log_checkpoints=false\ntemplate1 > /dev/null\nchild process exited with exit code 132\n\n#0  0x000000000050a8d6 in write_item (data=<optimized out>, len=<optimized\nout>, fp=<optimized out>) at relcache.c:6471\n#1  0x0000000000c33273 in write_relcache_init_file (shared=true) at\nrelcache.c:6368\n#2  0x0000000000c33c50 in RelationCacheInitializePhase3 () at\nrelcache.c:4220\n#3  0x0000000000c55825 in InitPostgres (in_dbname=<optimized out>,\ndboid=3105442800, username=<optimized out>, useroid=<optimized out>,\nout_dbname=0x0, override_allow_connections=<optimized out>) at\npostinit.c:1014\n\nFYI",
"msg_date": "Thu, 13 Jan 2022 21:09:30 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 9:09 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Thu, Jan 13, 2022 at 8:09 PM Noah Misch <noah@leadboat.com> wrote:\n>\n>> On Sun, Jan 09, 2022 at 06:45:09PM -0800, Zhihong Yu wrote:\n>> > On Sun, Jan 9, 2022 at 1:27 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> > > After installing gcc-11, ./configure passed (with\n>> 0003-memcpy-null.patch).\n>> > > In the output of `make check-world`, I don't see `runtime error`.\n>>\n>> That's expected. With -fsanitize-undefined-trap-on-error, the program\n>> will\n>> generate SIGILL when UBSan detects undefined behavior. To get \"runtime\n>> error\"\n>> messages in the postmaster log, drop -fsanitize-undefined-trap-on-error.\n>> Both\n>> ways of running the tests have uses. -fsanitize-undefined-trap-on-error\n>> is\n>> better when you think the code is clean, because a zero \"make check-world\"\n>> exit status confirms the code is clean. Once you know the code is\n>> unclean in\n>> some way, -fsanitize-undefined-trap-on-error is better for getting\n>> details.\n>>\n>> > > Though there was a crash (maybe specific to my machine):\n>> > >\n>> > > Core was generated by\n>> > >\n>> `/nfusr/dev-server/zyu/postgres/tmp_install/usr/local/pgsql/bin/postgres\n>> > > --singl'.\n>> > > Program terminated with signal SIGILL, Illegal instruction.\n>> > > #0 0x000000000050642d in write_item.cold ()\n>> > > Missing separate debuginfos, use: debuginfo-install\n>> > > glibc-2.17-325.el7_9.x86_64 nss-pam-ldapd-0.8.13-25.el7.x86_64\n>> > > sssd-client-1.16.5-10.el7_9.10.x86_64\n>> > > (gdb) bt\n>> > > #0 0x000000000050642d in write_item.cold ()\n>> > > #1 0x0000000000ba9d1b in write_relcache_init_file ()\n>> > > #2 0x0000000000bb58f7 in RelationCacheInitializePhase3 ()\n>> > > #3 0x0000000000bd5cb5 in InitPostgres ()\n>> > > #4 0x0000000000a0a9ea in PostgresMain ()\n>>\n>> That is UBSan detecting undefined behavior. 
A successful patch version\n>> will\n>> fix write_item(), among many other places that are currently making\n>> check-world fail. I get the same when testing your v5 under \"gcc (Debian\n>> 11.2.0-13) 11.2.0\". I used the same host as buildfarm member thorntail,\n>> and I\n>> configured like this:\n>>\n>> ./configure -C --with-lz4 --prefix=$HOME/sw/nopath/pghead\n>> --enable-tap-tests --enable-debug --enable-depend --enable-cassert\n>> CC='ccache gcc-11 -fsanitize=undefined -fsanitize-undefined-trap-on-error'\n>> CFLAGS='-O2 -funwind-tables'\n>>\n>> > Earlier I was using devtoolset-11 which had an `Illegal instruction`\n>> error.\n>> >\n>> > I compiled / installed gcc-11 from source (which took whole afternoon).\n>> > `make check-world` passed with patch v3.\n>> > In tmp_install/log/install.log, I saw:\n>> >\n>> > gcc -Wall -Wmissing-prototypes -Wpointer-arith\n>> > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n>> > -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n>> > -Wformat-security -fno-strict-aliasing -fwrapv\n>> -fexcess-precision=standard\n>> > -Wno-format-truncation -Wno-stringop-truncation -fsanitize=undefined\n>> > -fsanitize-undefined-trap-on-error -I../../src/port -DFRONTEND\n>> > -I../../src/include -D_GNU_SOURCE -c -o path.o path.c\n>> > rm -f libpgport.a\n>>\n>> Perhaps this self-compiled gcc-11 is defective, being unable to detect the\n>> instances of undefined behavior that other builds detect. If so, use the\n>> \"devtoolset-11\" gcc instead. You're also building without optimization;\n>> that\n>> might be the problem.\n>>\n>\n> I tried both locally built gcc-11 and devtoolset-11 with configure command\n> copied from above.\n> `make world` failed in both cases with:\n>\n> performing post-bootstrap initialization ... 
sh: line 1: 24714 Illegal\n> instruction (core dumped)\n> \".../postgres/tmp_install/.../postgres/bin/postgres\" --single -F -O -j -c\n> search_path=pg_catalog -c exit_on_error=true -c log_checkpoints=false\n> template1 > /dev/null\n> child process exited with exit code 132\n>\n> #0  0x000000000050a8d6 in write_item (data=<optimized out>, len=<optimized\n> out>, fp=<optimized out>) at relcache.c:6471\n> #1  0x0000000000c33273 in write_relcache_init_file (shared=true) at\n> relcache.c:6368\n> #2  0x0000000000c33c50 in RelationCacheInitializePhase3 () at\n> relcache.c:4220\n> #3  0x0000000000c55825 in InitPostgres (in_dbname=<optimized out>,\n> dboid=3105442800, username=<optimized out>, useroid=<optimized out>,\n> out_dbname=0x0, override_allow_connections=<optimized out>) at\n> postinit.c:1014\n>\n> FYI\n>\nHi,\nI forgot to mention that patch v5 was included during the experiment.\n\nCheers",
"msg_date": "Thu, 13 Jan 2022 21:12:31 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: null iv parameter passed to combo_init()"
}
] |
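The combo_init() thread above is about never passing a NULL source pointer to memcpy(), even with a zero length, since that is undefined behavior that UBSan traps on. A minimal sketch of the guard style Noah asked for; `init_iv` and its signature are invented for illustration, this is not the actual pgcrypto code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of the "LENGTH_VAR != 0" guard style from the thread (hypothetical
 * helper, not PostgreSQL source).  Passing NULL to memcpy() is undefined
 * behavior even when the length is zero, so the copy is skipped entirely
 * when there is no IV.  Testing the length instead of the pointer behaves
 * the same whether the length variable is signed or unsigned, which is why
 * that form was preferred over "ivlen > 0".
 */
static void
init_iv(unsigned char *ivbuf, size_t bufsize, const unsigned char *iv,
        size_t ivlen)
{
    memset(ivbuf, 0, bufsize);
    if (ivlen != 0)             /* never reach memcpy with a NULL source */
        memcpy(ivbuf, iv, ivlen);
}
```

With this shape, a caller that supplies no IV at all (`iv == NULL`, `ivlen == 0`) gets a zeroed buffer without ever invoking memcpy on the NULL pointer.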
[
{
"msg_contents": "Hi,\n\nI am using Postgres' full text search since some time now and overall it's working really well for me. However one issue I have is that ts_headline highlights partial matches of FOLLOWED BY (<->) expressions, e.g.\n\nSELECT ts_headline('some words and some more words', to_tsquery('some<->words'));\n\ngives\n\n<b>some</b> <b>words</b> and <b>some</b> more <b>words</b>\n\nwhile I expect\n\n<b>some words</b> and some more words\n\nI think the highlights of partial matches is in most cases not useful and confusing to end users, they may think the search did not recognize they requested the words to be consecutive. Google also does not highlight partial matches it seems.\n\nI suspect to implement this would require substantial changes to ts_headline since it seems to treat the lexemes independently, but it would be a big improvement and should be the default behaviour.\n\nBest regards,\nJake N\n\n\n",
"msg_date": "Sat, 8 Jan 2022 20:09:24 +0100 (CET)",
"msg_from": "Jake North <jknr@tuta.io>",
"msg_from_op": true,
"msg_subject": "[feature request] ts_headline should have an option to highlight\n only full matches of <-> expressions"
},
{
"msg_contents": "8 Jan 2022, 20:09 by jknr@tuta.io:\n\n> Hi,\n>\n> I am using Postgres' full text search since some time now and overall it's working really well for me. However one issue I have is that ts_headline highlights partial matches of FOLLOWED BY (<->) expressions, e.g.\n>\n> SELECT ts_headline('some words and some more words', to_tsquery('some<->words'));\n>\n> gives\n>\n> <b>some</b> <b>words</b> and <b>some</b> more <b>words</b>\n>\n> while I expect\n>\n> <b>some words</b> and some more words\n>\n> I think the highlights of partial matches is in most cases not useful and confusing to end users, they may think the search did not recognize they requested the words to be consecutive. Google also does not highlight partial matches it seems.\n>\n> I suspect to implement this would require substantial changes to ts_headline since it seems to treat the lexemes independently, but it would be a big improvement and should be the default behaviour.\n>\n> Best regards,\n> Jake N\n>\n\nPS I found this issue brought up here https://stackoverflow.com/questions/69512416/is-ts-headline-intended-to-highlight-non-matching-parts-of-the-query-which-it but the real issue was not recognised is seems\n\n\n",
"msg_date": "Sat, 8 Jan 2022 20:26:51 +0100 (CET)",
"msg_from": "Jake North <jknr@tuta.io>",
"msg_from_op": true,
"msg_subject": "Re: [feature request] ts_headline should have an option to\n highlight only full matches of <-> expressions"
}
] |
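The feature request above boils down to the difference between wrapping each matching lexeme independently and wrapping only full consecutive phrase matches. A hypothetical C sketch of the requested behaviour; it works on raw substrings, which is nothing like the real ts_headline machinery (that operates on lexemes), and all names here are invented:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical illustration only: wrap every occurrence of the *full*
 * phrase in <b>...</b> and leave stray occurrences of the individual
 * words untouched, which is what the poster expects for <-> queries.
 * Assumes `out` is large enough to hold the result.
 */
static void
highlight_phrase(const char *text, const char *phrase, char *out)
{
    size_t      plen = strlen(phrase);
    const char *p = text;

    out[0] = '\0';
    while (*p)
    {
        const char *hit = strstr(p, phrase);

        if (hit == NULL)
        {
            strcat(out, p);                     /* no further full matches */
            break;
        }
        strncat(out, p, (size_t) (hit - p));    /* copy text before the match */
        strcat(out, "<b>");
        strcat(out, phrase);
        strcat(out, "</b>");
        p = hit + plen;                         /* continue after the match */
    }
}
```

On the thread's example input this yields `<b>some words</b> and some more words`: the second, non-consecutive "some ... words" pair is left unhighlighted.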
[
{
"msg_contents": "On 5/1/2022 10:13, Tom Lane wrote:\n > I feel like we need to get away from the idea that there is just\n > one query hash, and somehow let different extensions attach\n > differently-calculated hashes to a query. I don't have any immediate\n > ideas about how to do that in a reasonably inexpensive way.\n\nNow, queryId field represents an query class (depending on an jumbling \nimplementation). It is used by extensions as the way for simple tracking \na query from a parse tree creation point to the end of its life along \nall hook calls, which an extension uses (remember about possible plan \ncaching).\n\nI know at least two different cases of using queryId:\n1) for monitoring purposes - pg_stat_statements is watching how often \nqueries of a class emerge in the database and collects a stat on each class.\n2) adaptive purposes - some extensions influence a planner decision \nduring the optimization stage and want to learn on a performance shift \nat the end of execution stage.\n\nDifferent purposes may require different jumbling implementations. But \nusers can want to use such extensions at the same time. So we should \nallow to attach many different query IDs to a query (maybe better to \ncall it as 'query label'?).\n\nThinking for a while I invented three different ways to implement it:\n1. queryId will be a trivial 64-bit counter. So, each extension can \ndiffer each query from any other, track it along all hooks, use an \njumbling code and store an queryId internally. Here only one big problem \nI see - increasing overhead in the case of many consumers of queryId \nfeature.\n\n2. Instead of simple queryId we can store a list of pairs (QueryId, \nfuncOid). An extension can register a callback for queryId generation \nand the core will form a list of queryIds right after an query tree \nrewriting. funcOid is needed to differ jumbling implementations. Here we \nshould invent an additional node type for an element of the list.\n\n3. 
Instead of a queryId we could add a multi-purpose 'private' list to the \nQuery struct. Any extension can add additional object(s) to this list \n(with a registered object type, of course). As an example, I can imagine a \nkind of convention for queryIds in such a case - store a String node with \nthe value '<extension name> - <Query ID>'.\nThis way we would need to implement a registered-callback mechanism too.\n\nI think the third way is the cheapest, most flexible, and simplest to implement.\n\nAny thoughts, comments, criticism?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Sun, 9 Jan 2022 01:02:23 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> On 5/1/2022 10:13, Tom Lane wrote:\n>>> I feel like we need to get away from the idea that there is just\n>>> one query hash, and somehow let different extensions attach\n>>> differently-calculated hashes to a query. I don't have any immediate\n>>> ideas about how to do that in a reasonably inexpensive way.\n\n> Thinking for a while I invented three different ways to implement it:\n> 1. queryId will be a trivial 64-bit counter.\n\nThis seems pretty useless. The whole point of the query hash, at\nleast for many use-cases, is to allow recognizing queries that are\nthe same or similar.\n\n> 2. Instead of simple queryId we can store a list of pairs (QueryId, \n> funcOid). An extension can register a callback for queryId generation \n> and the core will form a list of queryIds right after an query tree \n> rewriting. funcOid is needed to differ jumbling implementations. Here we \n> should invent an additional node type for an element of the list.\n\nI'm not sure that funcOid is a reasonable way to tag different hash\ncalculation methods, because it isn't necessarily stable across\ninstallations. For the same reason, it'd be hard for two extensions\nto collaborate on a common query-hash definition.\n\n> 3. Instead of queryId we could add a multi-purpose 'private' list in the \n> Query struct. Any extension can add to this list additional object(s) \n> (with registered object type, of course). As an example, i can imagine a \n> kind of convention for queryIds in such case - store a String node with \n> value: '<extension name> - <Query ID>'.\n\nAgain, this is presuming that every extension is totally independent\nand has no interest in what any other code is doing. 
But I don't\nthink we want to make every extension that wants a hash duplicate\nthe whole of queryjumble.c.\n\nThe idea I'd been vaguely thinking about is to allow attaching a list\nof query-hash nodes to a Query, where each node would contain a \"tag\"\nidentifying the specific hash calculation method, and also the value\nof the query's hash calculated according to that method. We could\nprobably get away with saying that all such hash values must be uint64.\nThe main difference from your function-OID idea, I think, is that\nI'm envisioning the tags as being small integers with well-known\nvalues, similarly to the way we manage stakind values in pg_statistic.\nIn this way, an extension that wants a hash that the core knows how\nto calculate doesn't need its own copy of the code, and similarly\none extension could publish a calculation method for use by other\nextensions.\n\nWe'd also need some mechanism for registering a function to be\nused to calculate the hash for any given tag value, of course.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Jan 2022 19:49:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "> On Sat, Jan 08, 2022 at 07:49:59PM -0500, Tom Lane wrote:\n>\n> The idea I'd been vaguely thinking about is to allow attaching a list\n> of query-hash nodes to a Query, where each node would contain a \"tag\"\n> identifying the specific hash calculation method, and also the value\n> of the query's hash calculated according to that method. We could\n> probably get away with saying that all such hash values must be uint64.\n> The main difference from your function-OID idea, I think, is that\n> I'm envisioning the tags as being small integers with well-known\n> values, similarly to the way we manage stakind values in pg_statistic.\n> In this way, an extension that wants a hash that the core knows how\n> to calculate doesn't need its own copy of the code, and similarly\n> one extension could publish a calculation method for use by other\n> extensions.\n\nAn extension that wants a slightly modified version of hash calculation\nimplementation from the core would still need to copy everything. The\ncore probably has to provide more than one (hash, method) pair to cover\nsome basic needs.\n\n\n",
"msg_date": "Sun, 9 Jan 2022 12:43:06 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On Sun, Jan 09, 2022 at 12:43:06PM +0100, Dmitry Dolgov wrote:\n>\n> An extension that wants a slightly modified version of hash calculation\n> implementation from the core would still need to copy everything. The\n> core probably has to provide more than one (hash, method) pair to cover\n> some basic needs.\n\nOr just GUC(s) to adapt the behavior. But in any case there isn't much that\ncan be done that won't result in a huge performance drop (like e.g. the wanted\nstability over logical replication or backup/restore).\n\n\n",
"msg_date": "Sun, 9 Jan 2022 20:04:44 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On Sat, Jan 08, 2022 at 07:49:59PM -0500, Tom Lane wrote:\n>\n> The idea I'd been vaguely thinking about is to allow attaching a list\n> of query-hash nodes to a Query, where each node would contain a \"tag\"\n> identifying the specific hash calculation method, and also the value\n> of the query's hash calculated according to that method.\n\nFor now the queryid mixes two different things: fingerprinting and query text\nnormalization. Should each calculation method be allowed to do a different\nnormalization too, and if yes where should be stored the state data needed for\nthat? If not, we would need some kind of primary hash for that purpose.\n\nLooking at Andrey's use case for wanting multiple hashes, I don't think that\nadaptive optimization needs a normalized query string. The only use would be\nto output some statistics, but this could be achieved by storing a list of\n\"primary queryid\" for each adaptive entry. That's probably also true for\nanything that's not monitoring intended. Also, all monitoring consumers should\nprobably agree on the same queryid, both fingerprint and normalized string, as\notherwise it's impossible to cross-reference metric data.\n\n\n",
"msg_date": "Sun, 9 Jan 2022 20:13:21 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On 1/9/22 5:13 PM, Julien Rouhaud wrote:\n> For now the queryid mixes two different things: fingerprinting and query text\n> normalization. Should each calculation method be allowed to do a different\n> normalization too, and if yes where should be stored the state data needed for\n> that? If not, we would need some kind of primary hash for that purpose.\n> \nDo you mean JumbleState?\nI think that when registering a queryId generator we should also store a pointer \n(void **args) to an additional data entry, as usual.\n\n> Looking at Andrey's use case for wanting multiple hashes, I don't think that\n> adaptive optimization needs a normalized query string. The only use would be\n> to output some statistics, but this could be achieved by storing a list of\n> \"primary queryid\" for each adaptive entry. That's probably also true for\n> anything that's not monitoring intended. Also, all monitoring consumers should\n> probably agree on the same queryid, both fingerprint and normalized string, as\n> otherwise it's impossible to cross-reference metric data.\n> \nI can add one more use case.\nOur extension for freezing query plans uses a query tree comparison \ntechnique to prove that the plan can be applied (so we don't need to \nexecute the planning procedure at all).\nTree equality checking is expensive, so we use a \ncheaper queryId comparison to identify possible candidates. So here, for \nbetter performance and query coverage, we need query tree \nnormalization - the queryId should be stable under modifications to the \nquery text which do not change its semantics.\nAs an example, a query plan with external parameters can be used to \nexecute a constant query if these constants correspond by position and type \nto the parameters. So the queryId calculation technique also returns \npointers to all constants and parameters found during the calculation.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Mon, 10 Jan 2022 09:10:59 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 09:10:59AM +0500, Andrey V. Lepikhov wrote:\n> On 1/9/22 5:13 PM, Julien Rouhaud wrote:\n> > For now the queryid mixes two different things: fingerprinting and query text\n> > normalization. Should each calculation method be allowed to do a different\n> > normalization too, and if yes where should be stored the state data needed for\n> > that? If not, we would need some kind of primary hash for that purpose.\n> > \n> Do You mean JumbleState?\n\nYes, or some other abstraction. For instance with the proposed patch to handle\ndifferently the IN clauses, you needs a to change JumbleState.\n\n> I think, registering queryId generator we should store also a pointer (void\n> **args) to an additional data entry, as usual.\n\nWell, yes but only if you need to produce different versions of the normalized\nquery text, and I'm not convinced that's really necessary.\n\n> > Looking at Andrey's use case for wanting multiple hashes, I don't think that\n> > adaptive optimization needs a normalized query string. The only use would be\n> > to output some statistics, but this could be achieved by storing a list of\n> > \"primary queryid\" for each adaptive entry. That's probably also true for\n> > anything that's not monitoring intended. Also, all monitoring consumers should\n> > probably agree on the same queryid, both fingerprint and normalized string, as\n> > otherwise it's impossible to cross-reference metric data.\n> > \n> I can add one more use case.\n> Our extension for freezing query plan uses query tree comparison technique\n> to prove, that the plan can be applied (and we don't need to execute\n> planning procedure at all).\n> The procedure of a tree equality checking is expensive and we use cheaper\n> queryId comparison to identify possible candidates. 
So here, for the better\n> performance and queries coverage, we need to use query tree normalization -\n> queryId should be stable to some modifications in a query text which do not\n> change semantics.\n> As an example, query plan with external parameters can be used to execute\n> constant query if these constants correspond by place and type to the\n> parameters. So, queryId calculation technique returns also pointers to all\n> constants and parameters found during the calculation.\n\nI'm also working on a similar extension, and yes you can't accept any\nfingerprinting approach for that. I don't know what the exact heuristics\nof your cheaper queryid calculation are, but is it reasonable to use it with\nsomething like pg_stat_statements? If yes, you don't really need a two-queryid\napproach for the sake of this single extension and therefore don't need to\nstore multiple jumble states or similar per statement. Especially since\nrequiring another one would mean a performance drop as soon as you want to use\nsomething as common as pg_stat_statements.\n\n\n",
"msg_date": "Mon, 10 Jan 2022 12:51:55 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On 1/10/22 9:51 AM, Julien Rouhaud wrote:\n> On Mon, Jan 10, 2022 at 09:10:59AM +0500, Andrey V. Lepikhov wrote:\n>> I can add one more use case.\n>> Our extension for freezing query plan uses query tree comparison technique\n>> to prove, that the plan can be applied (and we don't need to execute\n>> planning procedure at all).\n>> The procedure of a tree equality checking is expensive and we use cheaper\n>> queryId comparison to identify possible candidates. So here, for the better\n>> performance and queries coverage, we need to use query tree normalization -\n>> queryId should be stable to some modifications in a query text which do not\n>> change semantics.\n>> As an example, query plan with external parameters can be used to execute\n>> constant query if these constants correspond by place and type to the\n>> parameters. So, queryId calculation technique returns also pointers to all\n>> constants and parameters found during the calculation.\n> \n> I'm also working on a similar extension, and yes you can't accept any\n> fingerprinting approach for that. I don't know what are the exact heuristics\n> of your cheaper queryid calculation are, but is it reasonable to use it with\n> something like pg_stat_statements? If yes, you don't really need two queryid\n> approach for the sake of this single extension and therefore don't need to\n> store multiple jumble state or similar per statement. Especially since\n> requiring another one would mean a performance drop as soon as you want to use\n> something as common as pg_stat_statements.\n> \nI think, pg_stat_statements can live with an queryId generator of the \nsr_plan extension. But It replaces all constants with $XXX parameter at \nthe query string. 
In our extension the user defines which plan is optimal \nand which constants can be used as parameters in the plan.\nOne drawback I see here - creating or dropping my extension changes the \nbehavior of pg_stat_statements, which leads to distortion of the DB load \nprofile. Also, we have no guarantee that another extension will work \ncorrectly (or in an optimal way) with such a queryId.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Mon, 10 Jan 2022 12:37:34 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 12:37:34PM +0500, Andrey V. Lepikhov wrote:\n> I think, pg_stat_statements can live with an queryId generator of the\n> sr_plan extension. But It replaces all constants with $XXX parameter at the\n> query string. In our extension user defines which plan is optimal and which\n> constants can be used as parameters in the plan.\n\nI don't know the details of that extension, but I think that it should work as\nlong as you have the constants information in the jumble state, whatever the\nresulting normalized query string is, right?\n\n> One drawback I see here - creating or dropping of my extension changes\n> behavior of pg_stat_statements that leads to distortion of the DB load\n> profile. Also, we haven't guarantees, that another extension will work\n> correctly (or in optimal way) with such queryId.\n\nBut then, if generating 2 queryids is better for performance, does the\nextension really need an additional jumble state and/or normalized query\nstring?\n\n\n",
"msg_date": "Mon, 10 Jan 2022 16:56:58 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On 10/1/2022 13:56, Julien Rouhaud wrote:\n> On Mon, Jan 10, 2022 at 12:37:34PM +0500, Andrey V. Lepikhov wrote:\n>> I think, pg_stat_statements can live with an queryId generator of the\n>> sr_plan extension. But It replaces all constants with $XXX parameter at the\n>> query string. In our extension user defines which plan is optimal and which\n>> constants can be used as parameters in the plan.\n> \n> I don't know the details of that extension, but I think that it should work as\n> long as you have the constants information in the jumble state, whatever the\n> resulting normalized query string is right?\nYes. The same input query string doesn't prove that a frozen query plan \ncan be used, because the rewrite rules could have changed. So we use only the \nquery tree. Here we must have a custom jumbling implementation.\nThe queryId in this extension determines two aspects:\n1. How many input queries will be compared with the query tree template of \nthe frozen statement.\n2. As a result, the performance overhead of unsuccessful comparisons.\n> \n>> One drawback I see here - creating or dropping of my extension changes\n>> behavior of pg_stat_statements that leads to distortion of the DB load\n>> profile. Also, we haven't guarantees, that another extension will work\n>> correctly (or in optimal way) with such queryId.\n> \n> But then, if generating 2 queryid is a better for performance, does the\n> extension really need an additional jumble state and/or normalized query\n> string?\nAn additional JumbleState isn't necessary, but it would be better for \nperformance to collect pointers to all constant nodes during the process \nof hash generation.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Mon, 10 Jan 2022 15:24:46 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 03:24:46PM +0500, Andrey Lepikhov wrote:\n> On 10/1/2022 13:56, Julien Rouhaud wrote:\n> > \n> > I don't know the details of that extension, but I think that it should work as\n> > long as you have the constants information in the jumble state, whatever the\n> > resulting normalized query string is right?\n> Yes. the same input query string doesn't prove that frozen query plan can be\n> used, because rewrite rules could be changed. So we use only a query tree.\n\nYes, I'm fully aware of that. I wasn't asking about using the input query\nstring but the need for generating a dedicated normalized output query string\nthat would be potentially different from the one generated by\npg_stat_statements (or similar).\n\n> > But then, if generating 2 queryid is a better for performance, does the\n> > extension really need an additional jumble state and/or normalized query\n> > string?\n> Additional Jumble state isn't necessary, but it would be better for\n> performance to collect pointers to all constant nodes during a process of\n> hash generation.\n\nI agree, but it's even better for performance if this is collected only once.\nI still don't know if this extension (or any extension) would require something\ndifferent from a common jumble state that would serve for all purpose or if we\ncan work with a single jumble state and only handle multiple queryid.\n\n\n",
"msg_date": "Mon, 10 Jan 2022 18:39:57 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On 10/1/2022 15:39, Julien Rouhaud wrote:\n> On Mon, Jan 10, 2022 at 03:24:46PM +0500, Andrey Lepikhov wrote:\n>> On 10/1/2022 13:56, Julien Rouhaud wrote:\n>> Yes. the same input query string doesn't prove that frozen query plan can be\n>> used, because rewrite rules could be changed. So we use only a query tree.\n> \n> Yes, I'm fully aware of that. I wasn't asking about using the input query\n> string but the need for generating a dedicated normalized output query string\n> that would be potentially different from the one generated by\n> pg_stat_statements (or similar).\n> \nThanks, now I got it. I don't remember a single situation where we would \nneed to normalize a query string.\n>>> But then, if generating 2 queryid is a better for performance, does the\n>>> extension really need an additional jumble state and/or normalized query\n>>> string?\n>> Additional Jumble state isn't necessary, but it would be better for\n>> performance to collect pointers to all constant nodes during a process of\n>> hash generation.\n> \n> I agree, but it's even better for performance if this is collected only once.\n> I still don't know if this extension (or any extension) would require something\n> different from a common jumble state that would serve for all purpose or if we\n> can work with a single jumble state and only handle multiple queryid.\nI think JumbleState is highly monitoring-specific, maybe even \npg_stat_statements-specific (maybe you can suggest other examples). \nI have no idea how it could be used by an arbitrary extension.\nBut it is not so difficult to imagine an implementation (as part of \nTom's approach, described earlier) of such storage for each \ngeneration method.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Mon, 10 Jan 2022 22:55:36 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On 1/9/22 5:49 AM, Tom Lane wrote:\n> The idea I'd been vaguely thinking about is to allow attaching a list\n> of query-hash nodes to a Query, where each node would contain a \"tag\"\n> identifying the specific hash calculation method, and also the value\n> of the query's hash calculated according to that method. We could\n> probably get away with saying that all such hash values must be uint64.\n> The main difference from your function-OID idea, I think, is that\n> I'm envisioning the tags as being small integers with well-known\n> values, similarly to the way we manage stakind values in pg_statistic.\n> In this way, an extension that wants a hash that the core knows how\n> to calculate doesn't need its own copy of the code, and similarly\n> one extension could publish a calculation method for use by other\n> extensions.\n\nTo move forward, I have made a patch that implements this idea (see \nattachment). It is a POC, but it passes all regression tests.\nRegistration of a queryId generator is implemented by analogy with the \nextensible-methods machinery. Also, I switched queryId to the int64 type and \nrenamed it to 'label'.\n\nSome lessons learned:\n1. The single-queryId implementation is deeply tangled with the core code \n(the stats reporting machinery and parallel workers, for example).\n2. We need a custom queryId that is based on a generated queryId \n(according to the logic of pg_stat_statements).\n3. We should think about the safety of the de-registration procedure.\n4. We should reserve the position of the default in-core generator and think \nabout the logic of enabling/disabling it.\n5. We should add an EXPLAIN hook to allow an extension to print this \ncustom queryId.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Fri, 21 Jan 2022 11:33:22 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "> On Fri, Jan 21, 2022 at 11:33:22AM +0500, Andrey V. Lepikhov wrote:\n> On 1/9/22 5:49 AM, Tom Lane wrote:\n> > The idea I'd been vaguely thinking about is to allow attaching a list\n> > of query-hash nodes to a Query, where each node would contain a \"tag\"\n> > identifying the specific hash calculation method, and also the value\n> > of the query's hash calculated according to that method. We could\n> > probably get away with saying that all such hash values must be uint64.\n> > The main difference from your function-OID idea, I think, is that\n> > I'm envisioning the tags as being small integers with well-known\n> > values, similarly to the way we manage stakind values in pg_statistic.\n> > In this way, an extension that wants a hash that the core knows how\n> > to calculate doesn't need its own copy of the code, and similarly\n> > one extension could publish a calculation method for use by other\n> > extensions.\n>\n> To move forward, I have made a patch that implements this idea (see\n> attachment). It is a POC, but passes all regression tests.\n\nThanks. A couple of comments off the top of my head:\n\n> Registration of an queryId generator implemented by analogy with extensible\n> methods machinery.\n\nWhy not something more like what was suggested, with stakind-style slots in some data\nstructure? All of those generators have to be iterated anyway, so I'm not\nsure a hash table makes sense.\n\n> Also, I switched queryId to int64 type and renamed to\n> 'label'.\n\nA name with \"id\" in it would be better, I believe. A label could be thought\nof as \"the query belongs to a certain category\", while the purpose is\nidentification.\n\n> 2. We need a custom queryId, that is based on a generated queryId (according\n> to the logic of pg_stat_statements).\n\nCould you clarify?\n\n> 4. We should reserve position of default in-core generator\n\n From the discussion above I was under the impression that the core\ngenerator should be distinguished by a predefined kind.\n\n> 5. 
We should add an EXPLAIN hook, to allow an extension to print this custom\n> queryId.\n\nWhy? It would make sense if custom generation code will be generating\nsome complex structure, but the queryId itself is still a hash.\n\n\n",
"msg_date": "Fri, 28 Jan 2022 17:51:56 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jan 28, 2022 at 05:51:56PM +0100, Dmitry Dolgov wrote:\n> > On Fri, Jan 21, 2022 at 11:33:22AM +0500, Andrey V. Lepikhov wrote:\n> > On 1/9/22 5:49 AM, Tom Lane wrote:\n> > > The idea I'd been vaguely thinking about is to allow attaching a list\n> > > of query-hash nodes to a Query, where each node would contain a \"tag\"\n> > > identifying the specific hash calculation method, and also the value\n> > > of the query's hash calculated according to that method. We could\n> > > probably get away with saying that all such hash values must be uint64.\n> > > The main difference from your function-OID idea, I think, is that\n> > > I'm envisioning the tags as being small integers with well-known\n> > > values, similarly to the way we manage stakind values in pg_statistic.\n> > > In this way, an extension that wants a hash that the core knows how\n> > > to calculate doesn't need its own copy of the code, and similarly\n> > > one extension could publish a calculation method for use by other\n> > > extensions.\n> >\n> > To move forward, I have made a patch that implements this idea (see\n> > attachment). It is a POC, but passes all regression tests.\n> [...]\n> > 4. We should reserve position of default in-core generator\n> \n> From the discussion above I was under the impression that the core\n> generator should be distinguished by a predefined kind.\n\nI don't really like this approach. IIUC, this patch as-is is meant to break\npg_stat_statements extensibility. If kind == 0 is reserved for in-core queryid\nthen you can't use pg_stat_statement with a different queryid generator\nanymore. Unless you meant that kind == 0 is reserved for monitoring /\npg_stat_statement purpose and custom extension should register that specific\nkind too if that's what they are supposed to implement?\n\nThe patch also reserves kind == -1 for pg_stat_statements internal purpose, and\nI'm not really sure why that's needed. 
Are additional extensions that want to\nagree with pg_stat_statements on what the monitoring queryid is\nsupposed to do that too?\n\nI'm also unsure of how extensions are supposed to cooperate in general, as\nI feel that queryids should be implemented for some \"intent\" (like monitoring,\nplanning optimization...). That being said, I'm not sure if e.g. AQO's heuristics\nare too specific to its needs or if they could be shared with other extensions\nthat might be dealing with similar concerns.\n\n\n",
"msg_date": "Sat, 29 Jan 2022 15:51:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "> On Sat, Jan 29, 2022 at 03:51:33PM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> On Fri, Jan 28, 2022 at 05:51:56PM +0100, Dmitry Dolgov wrote:\n> > > On Fri, Jan 21, 2022 at 11:33:22AM +0500, Andrey V. Lepikhov wrote:\n> > > On 1/9/22 5:49 AM, Tom Lane wrote:\n> > > > The idea I'd been vaguely thinking about is to allow attaching a list\n> > > > of query-hash nodes to a Query, where each node would contain a \"tag\"\n> > > > identifying the specific hash calculation method, and also the value\n> > > > of the query's hash calculated according to that method. We could\n> > > > probably get away with saying that all such hash values must be uint64.\n> > > > The main difference from your function-OID idea, I think, is that\n> > > > I'm envisioning the tags as being small integers with well-known\n> > > > values, similarly to the way we manage stakind values in pg_statistic.\n> > > > In this way, an extension that wants a hash that the core knows how\n> > > > to calculate doesn't need its own copy of the code, and similarly\n> > > > one extension could publish a calculation method for use by other\n> > > > extensions.\n> > >\n> > > To move forward, I have made a patch that implements this idea (see\n> > > attachment). It is a POC, but passes all regression tests.\n> > [...]\n> > > 4. We should reserve position of default in-core generator\n> >\n> > From the discussion above I was under the impression that the core\n> > generator should be distinguished by a predefined kind.\n>\n> I don't really like this approach. IIUC, this patch as-is is meant to break\n> pg_stat_statements extensibility. If kind == 0 is reserved for in-core queryid\n> then you can't use pg_stat_statement with a different queryid generator\n> anymore. 
Unless you meant that kind == 0 is reserved for monitoring /\n> pg_stat_statement purpose and custom extension should register that specific\n> kind too if that's what they are supposed to implement?\n>\n> [...]\n>\n> I'm also unsure of how are extensions supposed to cooperate in general, as\n> I feel that queryids should be implemented for some \"intent\" (like monitoring,\n> planning optimization...). That being said I'm not sure if e.g. AQO heuristics\n> are too specific for its need or if it could be shared with other extension\n> that might be dealing with similar concerns.\n\nAssuming there are multiple providers and consumers of queryIds, every\nsuch consumer extension needs to know which type of queryId it wants to\nuse. E.g. in case of pg_stat_statements, it needs to be somehow\nconfigured to know which of those kinds to take, to preserve\nextensibility you're talking about. Does the answer make sense, or did\nyou mean something else?\n\nBtw, the approach in this thread still doesn't give a clue what to do\nwhen an extension needs to reuse some parts of core queryId generator,\nas in case with pg_stat_statements and \"IN\" condition merging.\n\n\n",
"msg_date": "Sat, 29 Jan 2022 18:12:05 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jan 29, 2022 at 06:12:05PM +0100, Dmitry Dolgov wrote:\n> > On Sat, Jan 29, 2022 at 03:51:33PM +0800, Julien Rouhaud wrote:\n> >\n> > I'm also unsure of how are extensions supposed to cooperate in general, as\n> > I feel that queryids should be implemented for some \"intent\" (like monitoring,\n> > planning optimization...). That being said I'm not sure if e.g. AQO heuristics\n> > are too specific for its need or if it could be shared with other extension\n> > that might be dealing with similar concerns.\n> \n> Assuming there are multiple providers and consumers of queryIds, every\n> such consumer extension needs to know which type of queryId it wants to\n> use. E.g. in case of pg_stat_statements, it needs to be somehow\n> configured to know which of those kinds to take, to preserve\n> extensibility you're talking about. Does the answer make sense, or did\n> you mean something else?\n\nI guess, but I don't think that the proposed approach does that.\n\nThe DBA should be able to configure a monitoring queryid provider, a planning\nqueryid provider... and the extensions should have a way to know which is\nwhich. And also I don't think that the DBA should be allowed to setup multiple\nmonitoring queryid providers, nor change them dynamically.\n\n> Btw, the approach in this thread still doesn't give a clue what to do\n> when an extension needs to reuse some parts of core queryId generator,\n> as in case with pg_stat_statements and \"IN\" condition merging.\n\nIndeed, as the query text normalization is not extensible.\n\n\n",
"msg_date": "Sun, 30 Jan 2022 01:48:20 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "> On Sun, Jan 30, 2022 at 01:48:20AM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> On Sat, Jan 29, 2022 at 06:12:05PM +0100, Dmitry Dolgov wrote:\n> > > On Sat, Jan 29, 2022 at 03:51:33PM +0800, Julien Rouhaud wrote:\n> > >\n> > > I'm also unsure of how are extensions supposed to cooperate in general, as\n> > > I feel that queryids should be implemented for some \"intent\" (like monitoring,\n> > > planning optimization...). That being said I'm not sure if e.g. AQO heuristics\n> > > are too specific for its need or if it could be shared with other extension\n> > > that might be dealing with similar concerns.\n> >\n> > Assuming there are multiple providers and consumers of queryIds, every\n> > such consumer extension needs to know which type of queryId it wants to\n> > use. E.g. in case of pg_stat_statements, it needs to be somehow\n> > configured to know which of those kinds to take, to preserve\n> > extensibility you're talking about. Does the answer make sense, or did\n> > you mean something else?\n>\n> I guess, but I don't think that the proposed approach does that.\n\nYes, it doesn't, I'm just channeling my understanding of the problem.\n\n\n",
"msg_date": "Sat, 29 Jan 2022 19:00:57 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On 1/28/22 9:51 PM, Dmitry Dolgov wrote:\n>> On Fri, Jan 21, 2022 at 11:33:22AM +0500, Andrey V. Lepikhov wrote:\n>> Registration of an queryId generator implemented by analogy with extensible\n>> methods machinery.\n> \n> Why not more like suggested with stakind and slots in some data\n> structure? All of those generators have to be iterated anyway, so not\n> sure if a hash table makes sense.\nMaybe. But it is not obvious. We don't really know, how many extensions \ncould set an queryId.\nFor example, adaptive planning extensions definitely wants to set an \nunique id (for example, simplistic counter) to trace specific \n{query,plan} across all executions (remember plancache too). And they \nwould register a personal generator for such purpose.\n> \n>> Also, I switched queryId to int64 type and renamed to\n>> 'label'.\n> \n> A name with \"id\" in it would be better I believe. Label could be think\n> of as \"the query belongs to a certain category\", while the purpose is\n> identification.\nI think, it is not a full true. Current jumbling generates not unique \nqueryId (i hope, intentionally) and pg_stat_statements uses queryId to \ngroup queries into classes.\nFor tracking specific query along execution path it performs additional \nefforts (to remember nesting query level, as an example).\nBTW, before [1], I tried to improve queryId, that can be stable for \npermutations of tables in 'FROM' section and so on. It would allow to \nreduce a number of pg_stat_statements entries (critical factor when you \nuse an ORM, like 1C for example).\nSo, i think queryId is an Id and a category too.\n> \n>> 2. We need a custom queryId, that is based on a generated queryId (according\n>> to the logic of pg_stat_statements).\n> \n> Could you clarify?\npg_stat_statements uses origin queryId and changes it for a reason \n(sometimes zeroed it, sometimes not). 
So you can't use this value in \nanother extension and be confident that you use original value, \ngenerated by JumbleQuery(). Custom queryId allows to solve this problem.\n> \n>> 4. We should reserve position of default in-core generator\n> \n> From the discussion above I was under the impression that the core\n> generator should be distinguished by a predefined kind.\nYes, but I think we should have a range of values, enough for use in \nthird party extensions.\n> \n>> 5. We should add an EXPLAIN hook, to allow an extension to print this custom\n>> queryId.\n> \n> Why? It would make sense if custom generation code will be generating\n> some complex structure, but the queryId itself is still a hash.\n> \nExtension can print not only queryId, but an explanation of a kind, \nmaybe additional logic.\nMoreover why an extension can't show some useful monitoring data, \ncollected during an query execution, in verbose mode?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/e50c1e8f-e5d6-5988-48fa-63dd992e9565%40postgrespro.ru\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Mon, 31 Jan 2022 14:59:17 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "> On Mon, Jan 31, 2022 at 02:59:17PM +0500, Andrey V. Lepikhov wrote:\n> On 1/28/22 9:51 PM, Dmitry Dolgov wrote:\n> > > On Fri, Jan 21, 2022 at 11:33:22AM +0500, Andrey V. Lepikhov wrote:\n> > > Registration of an queryId generator implemented by analogy with extensible\n> > > methods machinery.\n> >\n> > Why not more like suggested with stakind and slots in some data\n> > structure? All of those generators have to be iterated anyway, so not\n> > sure if a hash table makes sense.\n> Maybe. But it is not obvious. We don't really know, how many extensions\n> could set an queryId.\n> For example, adaptive planning extensions definitely wants to set an unique\n> id (for example, simplistic counter) to trace specific {query,plan} across\n> all executions (remember plancache too). And they would register a personal\n> generator for such purpose.\n\nI don't see how the number of extensions justify a hash table? I mean,\nall of the generators will still most likely be executed at once (not on\ndemand) and store the result somewhere.\n\nIn any way I would like to remind, that this functionality wants to be\non the pretty much hot path, which means strong evidences of no\nsignificant performance overhead are needed. And sure, there could be\nmultiple queryId consumers, but in the vast majority of cases only one\nqueryId will be generated.\n\n> > > 2. We need a custom queryId, that is based on a generated queryId (according\n> > > to the logic of pg_stat_statements).\n> >\n> > Could you clarify?\n> pg_stat_statements uses origin queryId and changes it for a reason\n> (sometimes zeroed it, sometimes not).\n\nIIRC it does that only for utility statements. I still fail to see the\nproblem, why would a custom extension not register a new generator if it\nneeds something different?\n\n> > > 5. We should add an EXPLAIN hook, to allow an extension to print this custom\n> > > queryId.\n> >\n> > Why? 
It would make sense if custom generation code will be generating\n> > some complex structure, but the queryId itself is still a hash.\n> >\n> Extension can print not only queryId, but an explanation of a kind, maybe\n> additional logic.\n> Moreover why an extension can't show some useful monitoring data, collected\n> during an query execution, in verbose mode?\n\nA good recommendation which pops up every now and then in hackers, is to\nconcentrate on what is immediately useful, leaving \"maybe useful\" things\nfor later. I have a strong feeling this is the case here.\n\n\n",
"msg_date": "Mon, 31 Jan 2022 17:28:31 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
},
{
"msg_contents": "On 29/1/2022 12:51, Julien Rouhaud wrote:\n>>> 4. We should reserve position of default in-core generator\n>>\n>> From the discussion above I was under the impression that the core\n>> generator should be distinguished by a predefined kind.\n> \n> I don't really like this approach. IIUC, this patch as-is is meant to break\n> pg_stat_statements extensibility. If kind == 0 is reserved for in-core queryid\n> then you can't use pg_stat_statement with a different queryid generator\n> anymore.\n\nYes, it is one more problem. Maybe if we want to make it extensible, we \ncould think about hooks in the pg_stat_statements too?\n\n> The patch also reserves kind == -1 for pg_stat_statements internal purpose, and\n> I'm not really sure why that's needed.\nMy idea - tags with positive numbers are reserved for generation \nresults, that is performance-critical.\nAs I see during the implementation, pg_stat_statements makes additional \nchanges on queryId (no matter which ones). Because our purpose is to \navoid interference in this place, I invented negative values, where \nextensions can store their queryIds, based on any generator or not. \nMaybe it is redundant - main idea here was to highlight the issue.\n> \n> I'm also unsure of how are extensions supposed to cooperate in general, as\n> I feel that queryids should be implemented for some \"intent\" (like monitoring,\n> planning optimization...). That being said I'm not sure if e.g. AQO heuristics\n> are too specific for its need or if it could be shared with other extension\n> that might be dealing with similar concerns.\nI think, it depends on a specific purpose of an extension.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Tue, 1 Feb 2022 00:03:51 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Multiple Query IDs for a rewritten parse tree"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\n\r\n\r\nSome time ago,the following patch clean up error handling in pg_basebackup's walmethods.c.\r\nhttps://github.com/postgres/postgres/commit/248c3a9\r\n\r\n\r\nThis patch keep the error state in the DirectoryMethodData struct,\r\nin most functions, the lasterrno is set correctly, but in function dir_existsfile(), \r\nthe lasterrno is not set when the file fails to open.\r\n\r\n\r\n\r\n\r\nIf this is a correction omission, I think this patch can fix this.\r\n\r\n\r\nCheers",
"msg_date": "Mon, 10 Jan 2022 00:19:28 +0800",
"msg_from": "\"=?ISO-8859-1?B?V2VpIFN1bg==?=\" <936739278@qq.com>",
"msg_from_op": true,
"msg_subject": "Add lasterrno setting for dir_existsfile()"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 12:19:28AM +0800, Wei Sun wrote:\n> Hi,\n> \n> Some time ago,the following patch clean up error handling in pg_basebackup's\n> walmethods.c.\n> https://github.com/postgres/postgres/commit/248c3a9\n> \n> This patch keep the error state in the DirectoryMethodData struct,\n> in most functions, the lasterrno is set correctly, but in function\n> dir_existsfile(), \n> the lasterrno is not set when the file fails to open.\n> \n> If this is a correction omission, I think this patch can fix this.\n> \n> Cheers\n\n> diff --git a/src/bin/pg_basebackup/walmethods.c b/src/bin/pg_basebackup/walmethods.c\n> index f74bd13..35cf5a8 100644\n> --- a/src/bin/pg_basebackup/walmethods.c\n> +++ b/src/bin/pg_basebackup/walmethods.c\n> @@ -580,7 +580,10 @@ dir_existsfile(const char *pathname)\n> \n> \tfd = open(tmppath, O_RDONLY | PG_BINARY, 0);\n> \tif (fd < 0)\n> +\t{\n> +\t\tdir_data->lasterrno = errno;\n> \t\treturn false;\n> +\t}\n> \tclose(fd);\n> \treturn true;\n> }\n\nLooking at this, the function is used to check if something exists, and\nreturn a boolean. I am not sure it is helpful to also return a ENOENT in\nthe lasterrno status field. It might be useful to set lasterrno if the\nopen fails and it is _not_ ENOENT.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 12 Aug 2022 18:22:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add lasterrno setting for dir_existsfile()"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 06:22:01PM -0400, Bruce Momjian wrote:\n> On Mon, Jan 10, 2022 at 12:19:28AM +0800, Wei Sun wrote:\n> > Hi,\n> > \n> > Some time ago,the following patch clean up error handling in pg_basebackup's\n> > walmethods.c.\n> > https://github.com/postgres/postgres/commit/248c3a9\n> > \n> > This patch keep the error state in the DirectoryMethodData struct,\n> > in most functions, the lasterrno is set correctly, but in function\n> > dir_existsfile(), \n> > the lasterrno is not set when the file fails to open.\n> > \n> > If this is a correction omission, I think this patch can fix this.\n> \n> Looking at this, the function is used to check if something exists, and\n> return a boolean. I am not sure it is helpful to also return a ENOENT in\n> the lasterrno status field. It might be useful to set lasterrno if the\n> open fails and it is _not_ ENOENT.\n\nThinking some more, how would you know to check lasterrno since exists\nand not exists are both valid outputs?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 12 Aug 2022 19:04:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add lasterrno setting for dir_existsfile()"
},
{
"msg_contents": "On Sat, Aug 13, 2022 at 4:34 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Aug 12, 2022 at 06:22:01PM -0400, Bruce Momjian wrote:\n> > On Mon, Jan 10, 2022 at 12:19:28AM +0800, Wei Sun wrote:\n> > > Hi,\n> > >\n> > > Some time ago,the following patch clean up error handling in pg_basebackup's\n> > > walmethods.c.\n> > > https://github.com/postgres/postgres/commit/248c3a9\n> > >\n> > > This patch keep the error state in the DirectoryMethodData struct,\n> > > in most functions, the lasterrno is set correctly, but in function\n> > > dir_existsfile(),\n> > > the lasterrno is not set when the file fails to open.\n> > >\n> > > If this is a correction omission, I think this patch can fix this.\n> >\n> > Looking at this, the function is used to check if something exists, and\n> > return a boolean. I am not sure it is helpful to also return a ENOENT in\n> > the lasterrno status field. It might be useful to set lasterrno if the\n> > open fails and it is _not_ ENOENT.\n>\n> Thinking some more, how would you know to check lasterrno since exists\n> and not exists are both valid outputs?\n\nI agree with Bruce here, ENOENT isn't a failure for open because it\nsays that file doesn't exist.\n\nIf we have the policy like every syscall failure must be captured in\nlasterrno and be reported by the callers accordingly, then the patch\n(of course, with the change that doesn't set lasterrno when errno is\nENOENT) proposed makes sense to me. Right now, the callers of\nexistsfile() aren't caring for the errno though. Every other open()\nsyscall failure in walmethods.c is captured in lasterrno.\n\nOtherwise, adding a comment in dir_existsfile() on why aren't\ncapturing lasterrno might help and avoid future discussions around\nthis.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Sat, 13 Aug 2022 14:46:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add lasterrno setting for dir_existsfile()"
},
{
"msg_contents": "On Sat, Aug 13, 2022 at 02:46:24PM +0530, Bharath Rupireddy wrote:\n> On Sat, Aug 13, 2022 at 4:34 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Fri, Aug 12, 2022 at 06:22:01PM -0400, Bruce Momjian wrote:\n> > > On Mon, Jan 10, 2022 at 12:19:28AM +0800, Wei Sun wrote:\n> > > > Hi,\n> > > >\n> > > > Some time ago,the following patch clean up error handling in pg_basebackup's\n> > > > walmethods.c.\n> > > > https://github.com/postgres/postgres/commit/248c3a9\n> > > >\n> > > > This patch keep the error state in the DirectoryMethodData struct,\n> > > > in most functions, the lasterrno is set correctly, but in function\n> > > > dir_existsfile(),\n> > > > the lasterrno is not set when the file fails to open.\n> > > >\n> > > > If this is a correction omission, I think this patch can fix this.\n> > >\n> > > Looking at this, the function is used to check if something exists, and\n> > > return a boolean. I am not sure it is helpful to also return a ENOENT in\n> > > the lasterrno status field. It might be useful to set lasterrno if the\n> > > open fails and it is _not_ ENOENT.\n> >\n> > Thinking some more, how would you know to check lasterrno since exists\n> > and not exists are both valid outputs?\n> \n> I agree with Bruce here, ENOENT isn't a failure for open because it\n> says that file doesn't exist.\n> \n> If we have the policy like every syscall failure must be captured in\n> lasterrno and be reported by the callers accordingly, then the patch\n> (of course, with the change that doesn't set lasterrno when errno is\n> ENOENT) proposed makes sense to me. Right now, the callers of\n> existsfile() aren't caring for the errno though. 
Every other open()\n> syscall failure in walmethods.c is captured in lasterrno.\n> \n> Otherwise, adding a comment in dir_existsfile() on why aren't\n> capturing lasterrno might help and avoid future discussions around\n> this.\n\nI have applied the attached patch to master to explain why we don't set\nlasterrno.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 31 Oct 2023 12:00:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add lasterrno setting for dir_existsfile()"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like there's a typo, attaching a tiny patch to fix it.\n\n *\n * When doing logical decoding - which relies on using cmin/cmax of catalog\n * tuples, via xl_heap_new_cid records - heap rewrites have to log enough\n- * information to allow the decoding backend to updates its internal mapping\n+ * information to allow the decoding backend to update its internal mapping\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 10 Jan 2022 09:42:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix a possible typo in rewriteheap.c code comments"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 9:42 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> It looks like there's a typo, attaching a tiny patch to fix it.\n>\n> *\n> * When doing logical decoding - which relies on using cmin/cmax of catalog\n> * tuples, via xl_heap_new_cid records - heap rewrites have to log enough\n> - * information to allow the decoding backend to updates its internal mapping\n> + * information to allow the decoding backend to update its internal mapping\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Jan 2022 10:22:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix a possible typo in rewriteheap.c code comments"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 10:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 10, 2022 at 9:42 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > It looks like there's a typo, attaching a tiny patch to fix it.\n> >\n> > *\n> > * When doing logical decoding - which relies on using cmin/cmax of catalog\n> > * tuples, via xl_heap_new_cid records - heap rewrites have to log enough\n> > - * information to allow the decoding backend to updates its internal mapping\n> > + * information to allow the decoding backend to update its internal mapping\n> >\n>\n> LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jan 2022 10:56:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix a possible typo in rewriteheap.c code comments"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been annoyed for quite a while that pg_upgrade tests lists all copied\nfiles - it's ~1500 lines or so in the make -C src/bin/pg_upgrade check case\n(obviously for a cluster with more objects it's basically unlimited). But\nwhenever I was looking into something about the issue, I didn't see the log\noutput, leading me to believe somebody else had fixed it (although I thought I\ncomplained about this before, but I couldn't find it in the archives).\n\nTurns out that it only happens when the output is not a tty. And I notice it\nwhenever I redirect the log output to a file, pipe, or such.\n\n\nThis actually might not be intended?\n\n\nWhile util.c:pg_log_v() [1] says\n\n\t/* PG_VERBOSE and PG_STATUS are only output in verbose mode */\n\nit actually prints out the status report unconditionally:\n\n...\n\t\tcase PG_STATUS:\n\t\t\tif (isatty(fileno(stdout)))\n\t\t\t\tprintf(\" %s%-*.*s\\r\",...);\n...\n\t\t\telse\n\t\t\t\tprintf(\" %s\\n\", message);\n\t\t\tbreak;\n\nthis isn't bad when stdout is a tty, because the \\r will hide the repeated\noutput. But when it's not, we just dump out the progress, regardless of\nverbosity.\n\nThis code appears to have been this way for a long time and I'm not quite sure\nwhat the intent really is.\n\nIt seems not unreasonable to log progress if a tty or if verbose is specified?\n\nI guess there's an argument to be made that outputting all that data\nunrequested isn't great in the tty case either. On a cluster with a large\nschema that could be quite a few MB of output - enough to slow things down\nover a low throughput / high latency link probably. On a test cluster with\n20k tables I had lying around, script -c \"pg_upgrade ....\" (which simulates a\ntty) results in a ~4MB typescript.\n\n\nA very minor thing is that we do the isatty() thing in every PG_STATUS logging\ninvocation. 
Each time that triggers a\nioctl(1, TCGETS, {B38400 opost isig icanon echo ...}) = 0\nwhich isn't a huge cost compared to actually copying a file, but it's not\nentirely free either.\n\nGreetings,\n\nAndres Freund\n\n\n[1] called from\n transfer_all_new_tablespaces() ->\n parallel_transfer_all_new_dbs() ->\n transfer_all_new_dbs() ->\n transfer_single_new_db() ->\n transfer_relfile() ->\n\t\t/* Copying files might take some time, so give feedback. */\n\t\tpg_log(PG_STATUS, \"status: %s\", old_file);\n\n\n",
"msg_date": "Sun, 9 Jan 2022 20:28:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-09 20:28:40 -0800, Andres Freund wrote:\n> Turns out that it only happens when the output is not a tty. And I notice it\n> whenever I redirect the log output to a file, pipe, or such.\n\nAh. More precisely, it happens when doing\n make -s -Otarget -j32 check-world,\nbut *not* when\n make -s -Otarget -j32 -C src/bin/pg_upgrade check\n\nbecause when there's only one target the subprocess ends up with the\nFDs make is invoked with, but when there's concurrency -Otarget causes\nstdin/out to be temporary files. Leading to the endless output \"Copying user\nrelation files\" output when doing check-world...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Jan 2022 20:36:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-09 20:28:40 -0800, Andres Freund wrote:\n>> Turns out that it only happens when the output is not a tty. And I notice it\n>> whenever I redirect the log output to a file, pipe, or such.\n\n> Ah. More precisely, it happens when doing\n> make -s -Otarget -j32 check-world,\n> but *not* when\n> make -s -Otarget -j32 -C src/bin/pg_upgrade check\n\nFun! That seems to me to be a strong argument for not letting\nthe behavior vary depending on isatty().\n\nI think I'd vote for just nuking that output altogether.\nIt seems of very dubious value.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jan 2022 01:14:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-10 01:14:32 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> Fun! That seems to me to be a strong argument for not letting\n> the behavior vary depending on isatty().\n\nYea.\n\n\n> I think I'd vote for just nuking that output altogether.\n> It seems of very dubious value.\n\nIt seems worthwhile in some form - on large cluster in copy mode, the \"Copying\nuser relation files\" step can take *quite* a while, and even link/clone mode\naren't fast. But perhaps what'd be really needed is something counting up\nactual progress in percentage of files and/or space...\n\nI think just coupling it to verbose mode makes the most sense, for now?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Jan 2022 22:39:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-10 01:14:32 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>> Fun! That seems to me to be a strong argument for not letting\n>> the behavior vary depending on isatty().\n\n> Yea.\n\n> I think just coupling it to verbose mode makes the most sense, for now?\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jan 2022 09:42:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "On Sun, Jan 9, 2022 at 10:39:58PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-01-10 01:14:32 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > Fun! That seems to me to be a strong argument for not letting\n> > the behavior vary depending on isatty().\n> \n> Yea.\n> \n> \n> > I think I'd vote for just nuking that output altogether.\n> > It seems of very dubious value.\n> \n> It seems worthwhile in some form - on large cluster in copy mode, the \"Copying\n> user relation files\" step can take *quite* a while, and even link/clone mode\n> aren't fast. But perhaps what'd be really needed is something counting up\n> actual progress in percentage of files and/or space...\n> \n> I think just coupling it to verbose mode makes the most sense, for now?\n\nAll of this logging is from the stage where I was excited pg_upgrade\nworked, and I wanted to give clear output if it failed in some way ---\nprinting the file names seems like an easy solution. I agree at this\npoint that logging should be reduced, and if they want more logging, the\nverbose option is the right way to get it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 10 Jan 2022 10:42:00 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 4:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Sun, Jan 9, 2022 at 10:39:58PM -0800, Andres Freund wrote:\n> > On 2022-01-10 01:14:32 -0500, Tom Lane wrote:\n> > > I think I'd vote for just nuking that output altogether.\n> > > It seems of very dubious value.\n> >\n> > It seems worthwhile in some form - on large cluster in copy mode, the \"Copying\n> > user relation files\" step can take *quite* a while, and even link/clone mode\n> > aren't fast. But perhaps what'd be really needed is something counting up\n> > actual progress in percentage of files and/or space...\n> >\n> > I think just coupling it to verbose mode makes the most sense, for now?\n>\n> All of this logging is from the stage where I was excited pg_upgrade\n> worked, and I wanted to give clear output if it failed in some way ---\n> printing the file names seems like an easy solution. I agree at this\n> point that logging should be reduced, and if they want more logging, the\n> verbose option is the right way to get it.\n\n+1\n\n\n",
"msg_date": "Wed, 16 Feb 2022 17:09:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-16 17:09:34 +1300, Thomas Munro wrote:\n> On Tue, Jan 11, 2022 at 4:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > On Sun, Jan 9, 2022 at 10:39:58PM -0800, Andres Freund wrote:\n> > > On 2022-01-10 01:14:32 -0500, Tom Lane wrote:\n> > > > I think I'd vote for just nuking that output altogether.\n> > > > It seems of very dubious value.\n> > >\n> > > It seems worthwhile in some form - on large cluster in copy mode, the \"Copying\n> > > user relation files\" step can take *quite* a while, and even link/clone mode\n> > > aren't fast. But perhaps what'd be really needed is something counting up\n> > > actual progress in percentage of files and/or space...\n> > >\n> > > I think just coupling it to verbose mode makes the most sense, for now?\n> >\n> > All of this logging is from the stage where I was excited pg_upgrade\n> > worked, and I wanted to give clear output if it failed in some way ---\n> > printing the file names seems like an easy solution. I agree at this\n> > point that logging should be reduced, and if they want more logging, the\n> > verbose option is the right way to get it.\n> \n> +1\n\nI got a bit stuck on how to best resolve this. I felt bad about removing all\ninteractive progress, because a pg_upgrade can take a while after all. But\nit's also not easy to come up with something good, without a substantially\nbigger effort than I want to invest.\n\nAfter all, I just want to be able to read check-world output. Nearly half of\nwhich is pg_upgrade test output right now.\n\nThe attached is my attempt at coming up with something halfway sane without\nrewriting pg_upgrade logging entirely. I think it mostly ends up with at least\nas sane output as the current code. I needed to add a separate\nprep_status_progress() function to make that work.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 18 Feb 2022 17:20:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "+ * If outputting to a tty / or , append newline. pg_log_v() will put the \n+ * individual progress items onto the next line. \n+ */ \n+ if (log_opts.isatty || log_opts.verbose) \n\nI guess the comment should say \"or in verbose mode\".\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 18 Feb 2022 19:46:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-18 19:46:26 -0600, Justin Pryzby wrote:\n> + * If outputting to a tty / or , append newline. pg_log_v() will put the\n> + * individual progress items onto the next line.\n> + */\n> + if (log_opts.isatty || log_opts.verbose)\n>\n> I guess the comment should say \"or in verbose mode\".\n\nIndeed. I think I got caught in a back-and-forth between different\nformulations.\n\nBarring that, anybody against committing this?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Feb 2022 17:07:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "> On 21 Feb 2022, at 02:07, Andres Freund <andres@anarazel.de> wrote:\n\n> Barring that, anybody against committing this?\n\nLGTM. The above mentioned comment was the only thing I found as well.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 21 Feb 2022 15:29:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
},
{
"msg_contents": "On 2022-02-21 15:29:09 +0100, Daniel Gustafsson wrote:\n> > On 21 Feb 2022, at 02:07, Andres Freund <andres@anarazel.de> wrote:\n> \n> > Baring that, anybody against committing this?\n> \n> LGTM. The above mentioned comment was the only thing I found as well.\n\nThanks for the review Justin and Daniel. Pushed.\n\n\n",
"msg_date": "Mon, 21 Feb 2022 08:35:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade verbosity when redirecting output to log file"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nDue to DST and also changes in local laws, there could be gaps in\nlocal time [1]. For instance, 1 second after \"2011-03-27 01:59:59 MSK\"\ngoes \"2011-03-27 03:00:00 MSK\":\n\n```\nselect (timestamptz '2011-03-27 01:59:59 MSK') at time zone 'MSK';\n timezone\n---------------------\n 2011-03-27 01:59:59\n(1 row)\n\nselect ((timestamptz '2011-03-27 01:59:59 MSK') + interval '1 second')\nat time zone 'MSK';\n timezone\n---------------------\n 2011-03-27 03:00:00\n(1 row)\n```\n\nThis makes '2011-03-27 02:00:00 MSK' an impossible timestamptz. I was\ncurious how `timezone(zone, timestamp)` aka `timestamp at time zone`\nhandles such dates and discovered that it seems to round impossible\ndates to the nearest possible one:\n\n```\nset time zone 'Europe/Moscow';\n\nselect (timestamp '2011-03-27 01:00:00') at time zone 'MSK';\n timezone\n------------------------\n 2011-03-27 01:00:00+03\n(1 row)\n\nselect (timestamp '2011-03-27 02:00:00') at time zone 'MSK';\n timezone\n------------------------\n 2011-03-27 01:00:00+03\n(1 row)\n```\n\nI don't know what the SQL standard says about it, but personally, I\nfind this behavior very convenient. Although it doesn't seem to be\ndocumented [2].\n\nSo I have two questions:\n\n1. Should this behavior be documented in the 9.9.4. AT TIME ZONE\nsection or maybe it's documented elsewhere and I just missed it?\n2. Is it possible to detect an impossible timestamptz's for users who\nwants stricter semantics? If there is a way I think it's worth\ndocumenting as well.\n\n[1]: https://en.wikipedia.org/wiki/Moscow_Time#Past_usage\n[2]: https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-ZONECONVERT\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 10 Jan 2022 15:04:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Undocumented behavior of timezone(zone,\n timestamp) for impossible timestamptz's"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Due to DST and also changes in local laws, there could be gaps in\n> local time [1].\n\nYup.\n\n> 1. Should this behavior be documented in the 9.9.4. AT TIME ZONE\n> section or maybe it's documented elsewhere and I just missed it?\n\nhttps://www.postgresql.org/docs/current/datetime-invalid-input.html\n\n> 2. Is it possible to detect an impossible timestamptz's for users who\n> wants stricter semantics? If there is a way I think it's worth\n> documenting as well.\n\nMaybe convert back and see if you get an identical result?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jan 2022 10:15:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented behavior of timezone(zone,\n timestamp) for impossible timestamptz's"
},
{
"msg_contents": "I wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n>> 1. Should this behavior be documented in the 9.9.4. AT TIME ZONE\n>> section or maybe it's documented elsewhere and I just missed it?\n\n> https://www.postgresql.org/docs/current/datetime-invalid-input.html\n\n... and reading that again, I realize that I screwed up the\nfall-back example :-(. 2:30 is not ambiguous; I should have\ndemonstrated the behavior for, say, 1:30. Will fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jan 2022 10:29:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented behavior of timezone(zone,\n timestamp) for impossible timestamptz's"
}
] |
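Tom Lane's suggestion in the thread above — detect a timestamp that falls into a DST gap by converting back and checking for an identical result — can be sketched outside of SQL too. A minimal Python illustration using the stdlib `zoneinfo` (the function name and the round-trip-via-UTC framing are mine, not PostgreSQL code; requires the system tz database):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def is_impossible_local_time(naive_dt: datetime, tz_name: str) -> bool:
    """Detect a local time that falls into a DST (or law-change) gap:
    attach the zone, convert to UTC and back, and compare. A time inside
    a gap does not survive the round trip unchanged."""
    tz = ZoneInfo(tz_name)
    aware = naive_dt.replace(tzinfo=tz)  # fold=0: pre-transition offset
    round_tripped = aware.astimezone(timezone.utc).astimezone(tz)
    return round_tripped.replace(tzinfo=None) != naive_dt

# 2011-03-27 02:00 never existed in Moscow: clocks jumped from
# 01:59:59 MSK straight to 03:00:00 MSK, as shown in the thread.
gap = is_impossible_local_time(datetime(2011, 3, 27, 2, 0, 0), "Europe/Moscow")
ok = is_impossible_local_time(datetime(2011, 3, 27, 1, 0, 0), "Europe/Moscow")
```

This is the same check one would do in SQL by casting to `timestamptz` and back and comparing with the original value.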
[
{
"msg_contents": "I think this patch is necessary:\n\ndiff --git a/src/interfaces/ecpg/preproc/pgc.l b/src/interfaces/ecpg/preproc/pgc.l\nindex 07fee80a9c..3529b2ea86 100644\n--- a/src/interfaces/ecpg/preproc/pgc.l\n+++ b/src/interfaces/ecpg/preproc/pgc.l\n@@ -753,7 +753,7 @@ cppline\t\t\t{space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\\/\\*[^*/]*\\*+\n \t\t\t\t}\n <xui>{dquote}\t{\n \t\t\t\t\tBEGIN(state_before_str_start);\n-\t\t\t\t\tif (literallen == 2) /* \"U&\" */\n+\t\t\t\t\tif (literallen == 0)\n \t\t\t\t\t\tmmerror(PARSE_ERROR, ET_ERROR, \"zero-length delimited identifier\");\n \t\t\t\t\t/* The backend will truncate the identifier here. We do not as it does not change the result. */\n \t\t\t\t\tbase_yylval.str = psprintf(\"U&\\\"%s\\\"\", literalbuf);\n\nThe old code doesn't make sense. The literallen is the length of the\ndata in literalbuf, which clearly doesn't include the \"U&\" as the\ncomment suggests.\n\nA test case is to preprocess a file like this (ecpg test.pgc):\n\nexec sql select u&\"\n\nwhich currently does *not* give the above error, but it should.\n\n\n",
"msg_date": "Mon, 10 Jan 2022 14:14:37 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "small bug in ecpg unicode identifier error handling"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I think this patch is necessary:\n> -\t\t\t\t\tif (literallen == 2) /* \"U&\" */\n> +\t\t\t\t\tif (literallen == 0)\n\nSeems sensible, and matches the corresponding code in scan.l.\n+1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jan 2022 10:05:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: small bug in ecpg unicode identifier error handling"
},
{
"msg_contents": "On 10.01.22 14:14, Peter Eisentraut wrote:\n> I think this patch is necessary:\n> \n> diff --git a/src/interfaces/ecpg/preproc/pgc.l \n> b/src/interfaces/ecpg/preproc/pgc.l\n> index 07fee80a9c..3529b2ea86 100644\n> --- a/src/interfaces/ecpg/preproc/pgc.l\n> +++ b/src/interfaces/ecpg/preproc/pgc.l\n> @@ -753,7 +753,7 @@ cppline \n> {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\\/\\*[^*/]*\\*+\n> }\n> <xui>{dquote} {\n> BEGIN(state_before_str_start);\n> - if (literallen == 2) /* \"U&\" */\n> + if (literallen == 0)\n> mmerror(PARSE_ERROR, ET_ERROR, \"zero-length \n> delimited identifier\");\n> /* The backend will truncate the identifier here. \n> We do not as it does not change the result. */\n> base_yylval.str = psprintf(\"U&\\\"%s\\\"\", literalbuf);\n> \n> The old code doesn't make sense. The literallen is the length of the\n> data in literalbuf, which clearly doesn't include the \"U&\" as the\n> comment suggests.\n> \n> A test case is to preprocess a file like this (ecpg test.pgc):\n> \n> exec sql select u&\"\n> \n> which currently does *not* give the above error, but it should.\n\nCommitted.\n\nFor the record, the correct test case was actually\n\nexec sql select u&\"\";\n\n\n",
"msg_date": "Wed, 12 Jan 2022 11:00:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: small bug in ecpg unicode identifier error handling"
}
] |
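The committed fix above hinges on what the scanner's buffer actually contains: by the time the closing quote of `U&"..."` is seen, `literalbuf` holds only the identifier body, so the zero-length check must compare against 0, not against the length of the `U&` prefix. A rough Python sketch of that invariant (a hypothetical helper, not the flex rule itself):

```python
def finish_unicode_identifier(literalbuf: str) -> str:
    """literalbuf holds only the body scanned between the quotes of
    U&"...", so (as in the committed fix) an empty buffer -- not a
    buffer of length 2 -- signals a zero-length delimited identifier."""
    if len(literalbuf) == 0:
        raise ValueError("zero-length delimited identifier")
    # Reassemble the token the way pgc.l does: U&"<body>"
    return 'U&"%s"' % literalbuf
```

With the old `== 2` comparison, the corrected test case `exec sql select u&"";` (empty body, length 0) would sail through without the error, which is exactly the bug reported.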
[
{
"msg_contents": "Hello,\n\nA colleague of mine was surprised to discover the following statements raised \nan error:\n\npostgres=# CREATE TYPE abc_enum AS ENUM ('a', 'b', 'c');\nCREATE TYPE\npostgres=# CREATE DOMAIN abc_domain AS abc_enum; \nCREATE DOMAIN\npostgres=# SELECT 'a'::abc_domain = 'a'::abc_domain; \nERROR: operator does not exist: abc_domain = abc_domain\nLINE 1: SELECT 'a'::abc_domain = 'a'::abc_domain;\n ^\nHINT: No operator matches the given name and argument types. You might need \nto add explicit type casts.\n\nThis has been already discussed a long time ago, and the idea was rejected at \nthe time since there was no demand for it:\n\nhttps://www.postgresql.org/message-id/flat/BANLkTi%3DaGxDbGPSF043V2K-C2vF2YzGz9w%40mail.gmail.com#da4826d2cbbaca20e3440aadb3093158\n\nGiven that we implemented that behaviour for domains over ranges and \nmultiranges, I don't see the harm in doing the same for domains over enums.\n\nWhat do you think ?\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 10 Jan 2022 17:01:48 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": true,
"msg_subject": "Matching domains-over-enums to anyenum types"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> This has been already discussed a long time ago, and the idea was rejected at \n> the time since there was no demand for it:\n> https://www.postgresql.org/message-id/flat/BANLkTi%3DaGxDbGPSF043V2K-C2vF2YzGz9w%40mail.gmail.com#da4826d2cbbaca20e3440aadb3093158\n\nI see that one of the considerations in that thread was the lack\nof arrays over domains. We've since fixed that, so probably it'd\nbe reasonable to take a fresh look, but I'm not sure that the\nconclusion would be the same as what I proposed then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jan 2022 12:32:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Matching domains-over-enums to anyenum types"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI am trying to migrate my postgres to linux, as we are moving away from\nwindows.\nI am trying both dump/restore and logical decoding, but people are not\nhappy with performance.\nIs there a way/tooling I can use around WAL shipping/physical replication\nhere ?\n\n\nthanks\nRajesh\n\nHi Hackers,I am trying to migrate my postgres to linux, as we are moving away from windows.I am trying both dump/restore and logical decoding, but people are not happy with performance.Is there a way/tooling I can use around WAL shipping/physical replication here ?thanksRajesh",
"msg_date": "Tue, 11 Jan 2022 00:26:03 +0530",
"msg_from": "rajesh singarapu <rajesh.rs0541@gmail.com>",
"msg_from_op": true,
"msg_subject": "Postgres Replication from windows to linux"
},
{
"msg_contents": "On 10.01.22 19:56, rajesh singarapu wrote:\n> I am trying to migrate my postgres to linux, as we are moving away from \n> windows.\n> I am trying both dump/restore and logical decoding, but people are not \n> happy with performance.\n> Is there a way/tooling I can use around WAL shipping/physical \n> replication here ?\n\nCross-platform physical replication is always risky. It might work, but \nthere is no easy way to find out whether everything is ok afterwards.\n\nAside from the issue of possible storage format differences, I would \nalso be worried about locale differences affecting text sort order.\n\n\n",
"msg_date": "Wed, 12 Jan 2022 14:38:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Replication from windows to linux"
}
] |
[
{
"msg_contents": "While trying to make sense of some recent buildfarm failures,\nI happened to notice that the default query issued by\nthe TAP sub wait_for_catchup looks like\n\nSELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = '<whatever>';\n\nISTM there are two things wrong with this:\n\n1. Since pg_current_wal_lsn() is re-evaluated each time, we're\neffectively setting a moving target for the standby to reach.\nAdmittedly we're not going to be issuing any new DML while\nwaiting in wait_for_catchup, but background activity such as\nautovacuum could be creating new WAL. Thus, the test is likely\nto wait longer than it needs to. In the worst case, we'd never\ncatch up until the primary server has been totally quiescent\nfor awhile.\n\n2. Aside from being slower than necessary, this also makes the\ntest squishy and formally incorrect, because the standby might\nget the opportunity to replay more WAL than the test intends.\n\nSo I think we need to fix it to capture the target WAL position\nat the start, as I've done in the attached patch. In principle\nthis might make things a bit slower because of the extra\ntransaction required, but I don't notice any above-the-noise\ndifference on my own workstation.\n\nAnother thing that is bothering me a bit is that a number of the\ncallers use $node->lsn('insert') as the target. This also seems\nrather dubious, because that could be ahead of what's been written\nout. These callers are just taking it on faith that something will\neventually cause that extra WAL to get written out (and become\navailable to the standby). Again, that seems to make the test\nslower than it need be, with a worst-case scenario being that it\neventually times out. Admittedly this is unlikely to be a big\nproblem unless some background op issues an abortive transaction\nat just the wrong time. 
Nonetheless, I wonder if we shouldn't\nstandardize on \"thou shalt use the write position\", because I\ndon't think the other alternatives have anything to recommend them.\nI've not addressed that below, though I did tweak the comment about\nthat parameter.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 10 Jan 2022 14:31:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Isn't wait_for_catchup slightly broken?"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 02:31:38PM -0500, Tom Lane wrote:\n> \n> So I think we need to fix it to capture the target WAL position\n> at the start, as I've done in the attached patch.\n\n+1, it looks sensible to me.\n\n> In principle\n> this might make things a bit slower because of the extra\n> transaction required, but I don't notice any above-the-noise\n> difference on my own workstation.\n\nI'm wondering if the environments where this extra transaction could make\na noticeable difference are also environments where doing that extra\ntransaction can save some iteration(s), which would be at least as costly.\n\n\n",
"msg_date": "Tue, 11 Jan 2022 14:25:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Isn't wait_for_catchup slightly broken?"
},
{
"msg_contents": "I wrote:\n> Another thing that is bothering me a bit is that a number of the\n> callers use $node->lsn('insert') as the target. This also seems\n> rather dubious, because that could be ahead of what's been written\n> out. These callers are just taking it on faith that something will\n> eventually cause that extra WAL to get written out (and become\n> available to the standby). Again, that seems to make the test\n> slower than it need be, with a worst-case scenario being that it\n> eventually times out. Admittedly this is unlikely to be a big\n> problem unless some background op issues an abortive transaction\n> at just the wrong time. Nonetheless, I wonder if we shouldn't\n> standardize on \"thou shalt use the write position\", because I\n> don't think the other alternatives have anything to recommend them.\n\nHere's a version that makes sure that callers specify a write position not\nan insert position. I also simplified the callers wherever it turned\nout that they could just use the default parameters.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 15 Jan 2022 17:58:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Isn't wait_for_catchup slightly broken?"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jan 15, 2022 at 05:58:02PM -0500, Tom Lane wrote:\n> \n> Here's a version that makes sure that callers specify a write position not\n> an insert position. I also simplified the callers wherever it turned\n> out that they could just use the default parameters.\n\nLGTM, and passes make check-world on my machine.\n\n\n",
"msg_date": "Sun, 16 Jan 2022 17:39:38 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Isn't wait_for_catchup slightly broken?"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Jan 15, 2022 at 05:58:02PM -0500, Tom Lane wrote:\n>> Here's a version that makes sure that callers specify a write position not\n>> an insert position. I also simplified the callers wherever it turned\n>> out that they could just use the default parameters.\n\n> LGTM, and passes make check-world on my machine.\n\nPushed, thanks for reviewing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jan 2022 13:30:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Isn't wait_for_catchup slightly broken?"
}
] |
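The core of Tom's fix — capture the target LSN once instead of re-evaluating `pg_current_wal_lsn()` on every poll, which chases a moving target whenever background activity (e.g. autovacuum) writes new WAL — generalizes to any catch-up wait. A schematic Python version (the callables stand in for queries against the primary and the standby's `pg_stat_replication` state; this is not the actual Perl TAP code):

```python
import time

def wait_for_catchup(get_primary_lsn, get_standby_replay_lsn,
                     timeout=30.0, poll_interval=0.1):
    """Wait until the standby has replayed past a target captured *once*,
    up front. Capturing the goal a single time avoids both problems from
    the thread: waiting longer than necessary, and letting the standby
    replay more WAL than the test intends."""
    target = get_primary_lsn()          # fixed goal, not re-evaluated
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_standby_replay_lsn() >= target:   # LSNs compare ordinally
            return True
        time.sleep(poll_interval)
    return False
```

The broken pattern is the same loop with `get_primary_lsn()` inside the `while` body; on a never-quiescent primary that version may never return.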
[
{
"msg_contents": "Hi all,\n\nThis is a follow-up of the work done in b69aba7 for cryptohashes, but\nthis time for HMAC. The main issue here is related to SCRAM, where we\nhave a lot of code paths that have no idea about what kind of failure\nis happening when an error happens, and this exists since v10 where\nSCRAM has been introduced, for some of them, frontend and backend\nincluded. \\password is one example.\n\nThe set of errors improved here would only trigger in scenarios that\nare unlikely going to happen, like an OOM or an internal OpenSSL\nerror. It would be possible to create a HMAC from a MD5, which would\ncause an error when compiling with OpenSSL and FIPS enabled, but the\nonly callers of the pg_hmac_* routines involve SHA-256 in core through\nSCRAM, so I don't see much a point in backpatching any of the things\nproposed here.\n\nThe attached patch creates a new routine call pg_hmac_error() that one\ncan use to grab details about the error that happened, in the same\nfashion as what has been done for cryptohashes. The logic is not that\ncomplicated, but note that the fallback HMAC implementation relies\nitself on cryptohashes, so there are cases where we need to look at\nthe error from pg_cryptohash_error() and store it in the HMAC private\ncontext.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 11 Jan 2022 13:56:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "Hi,\n\nOn 11.01.2022 07:56, Michael Paquier wrote:\n > Thoughts?\n\nA few comments after a quick glance...\n\n+ * Returns a static string providing errors about an error that happened\n\n\"errors about an error\" looks odd.\n\n\n+static const char *\n+SSLerrmessage(unsigned long ecode)\n+{\n+\tif (ecode == 0)\n+\t\treturn NULL;\n+\n+\t/*\n+\t * This may return NULL, but we would fall back to a default error path if\n+\t * that were the case.\n+\t */\n+\treturn ERR_reason_error_string(ecode);\n+}\n\nWe already have SSLerrmessage elsewhere and it's documented to never \nreturn NULL. I find that confusing.\n\nIf I have two distinct pg_hmac_ctx's, are their errreason's idependent \nfrom one another or do they really point to the same static buffer?\n\n\nRegards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Tue, 11 Jan 2022 10:50:50 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 10:50:50AM +0300, Sergey Shinderuk wrote:\n> A few comments after a quick glance...\n\nThanks!\n\n> + * Returns a static string providing errors about an error that happened\n> \n> \"errors about an error\" looks odd.\n\nSure, that could be reworded. What about \"providing details about an\nerror\"?\n\n> We already have SSLerrmessage elsewhere and it's documented to never return\n> NULL. I find that confusing.\n\nThis name is chosen on purpose. There could be some refactoring done\nwith those things.\n\n> If I have two distinct pg_hmac_ctx's, are their errreason's idependent from\n> one another or do they really point to the same static buffer?\n\nEach errreason could be different, as each computation could fail for\na different reason. If they fail for the same reason, they would\npoint to the same error context strings.\n--\nMichael",
"msg_date": "Tue, 11 Jan 2022 16:57:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "On 11.01.2022 10:57, Michael Paquier wrote:\n> On Tue, Jan 11, 2022 at 10:50:50AM +0300, Sergey Shinderuk wrote:\n>> + * Returns a static string providing errors about an error that happened\n>>\n>> \"errors about an error\" looks odd.\n> \n> Sure, that could be reworded. What about \"providing details about an\n> error\"?\n\nYeah, that's better. I thought \"providing errors about an error\" was a \ntypo, but now I see the same comment was committed in b69aba745. Is it \njust me? :)\n\nThanks,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Tue, 11 Jan 2022 11:08:59 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 11:08:59AM +0300, Sergey Shinderuk wrote:\n> Yeah, that's better. I thought \"providing errors about an error\" was a\n> typo, but now I see the same comment was committed in b69aba745. Is it just\n> me? :)\n\nIt is not only you :) I have applied a fix to fix the comments on\nHEAD and REL_14_STABLE.\n\nAttached is a rebased patch for the HMAC portions, with a couple of\nfixes I noticed while going through this stuff again (mostly around\nSASLprep and pg_fe_scram_build_secret), and a fix for a conflict\ncoming from 9cb5518. psql's \\password is wrong to assume that the\nonly error that can happen for scran-sha-256 is an OOM, but we'll get\nthere.\n--\nMichael",
"msg_date": "Wed, 12 Jan 2022 12:56:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 12:56:17PM +0900, Michael Paquier wrote:\n> Attached is a rebased patch for the HMAC portions, with a couple of\n> fixes I noticed while going through this stuff again (mostly around\n> SASLprep and pg_fe_scram_build_secret), and a fix for a conflict\n> coming from 9cb5518. psql's \\password is wrong to assume that the\n> only error that can happen for scran-sha-256 is an OOM, but we'll get\n> there.\n\nWith an attachment, that's even better. (Thanks, Daniel.)\n--\nMichael",
"msg_date": "Wed, 12 Jan 2022 20:32:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "On 12.01.2022 14:32, Michael Paquier wrote:\n> On Wed, Jan 12, 2022 at 12:56:17PM +0900, Michael Paquier wrote:\n>> Attached is a rebased patch for the HMAC portions, with a couple of\n>> fixes I noticed while going through this stuff again (mostly around\n>> SASLprep and pg_fe_scram_build_secret), and a fix for a conflict\n>> coming from 9cb5518. psql's \\password is wrong to assume that the\n>> only error that can happen for scran-sha-256 is an OOM, but we'll get\n>> there.\n> \n> With an attachment, that's even better. (Thanks, Daniel.)\nGave it a thorough read. Looks good, except for errstr not set in a \ncouple of places (see the diff attached).\n\nDidn't test it.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/",
"msg_date": "Thu, 13 Jan 2022 02:01:24 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 02:01:24AM +0300, Sergey Shinderuk wrote:\n> Gave it a thorough read. Looks good, except for errstr not set in a couple\n> of places (see the diff attached).\n\nThanks for the review. The comments about pg_hmac_ctx->data were\nwrong from the beginning, coming, I guess, from one of the earlier\npatch versions where this was discussed. So I have applied that\nindependently.\n\nI have also spent a good amount of time on that to close the loop and\nmake sure that no code paths are missing an error context, adjusted a\ncouple of comments to explain more the role of *errstr in all the\nSCRAM routines, and finally applied it on HEAD.\n--\nMichael",
"msg_date": "Thu, 13 Jan 2022 16:24:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
},
{
"msg_contents": "On 13.01.2022 10:24, Michael Paquier wrote:\n> Thanks for the review. The comments about pg_hmac_ctx->data were\n> wrong from the beginning, coming, I guess, from one of the earlier\n> patch versions where this was discussed. So I have applied that\n> independently.\n> \n> I have also spent a good amount of time on that to close the loop and\n> make sure that no code paths are missing an error context, adjusted a\n> couple of comments to explain more the role of *errstr in all the\n> SCRAM routines, and finally applied it on HEAD.\nThanks!\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Thu, 13 Jan 2022 11:26:05 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Improve error handling of HMAC computations and SCRAM"
}
] |
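The shape of the error handling added in the thread above — an HMAC context that records *why* a computation failed, so callers such as SCRAM can report details instead of assuming OOM — can be illustrated in miniature. A hypothetical Python sketch (the store-reason-then-query pattern mirrors `pg_hmac_error()`; the class and method names here are invented for illustration):

```python
import hashlib
import hmac

class HMACContext:
    """Toy analogue of pg_hmac_ctx: operations return None on failure,
    and the caller fetches a human-readable reason afterwards."""
    def __init__(self):
        self._error = None

    def error(self):
        """Return details about the last error, or None (cf. pg_hmac_error)."""
        return self._error

    def compute(self, key: bytes, data: bytes, algo: str = "sha256"):
        try:
            digest_fn = getattr(hashlib, algo)  # e.g. hashlib.sha256
        except AttributeError:
            self._error = "unsupported algorithm: %s" % algo
            return None
        return hmac.new(key, data, digest_fn).digest()
```

The point, as in the patch, is that the failure detail lives in the context the caller already holds, so no extra plumbing is needed to surface it at the error-report site.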
[
{
"msg_contents": "Please see attached a small patch to document why some text-processing\nfunctions are marked as leakproof, while some others are not.\n\nThis is more or less a verbatim copy of Tom's comment in email thread at\n[1].\n\nI could not find an appropriate spot to place these comments, so I placed\nthem on bttextcmp() function, The only other place that I could see we can\nplace these comments is in the file src/backend/optimizer/README, because\nthere is some consideration given to leakproof functions in optimizer docs.\nBut these comments seem quite out of place in optimizer docs.\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/673096.1630006990%40sss.pgh.pa.us#cd378cba4b990fda070c6fa4b51a069c\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/",
"msg_date": "Mon, 10 Jan 2022 23:07:22 -0800",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Patch: Code comments: why some text-handling functions are leakproof"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 2:07 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> Please see attached a small patch to document why some text-processing functions are marked as leakproof, while some others are not.\n>\n> This is more or less a verbatim copy of Tom's comment in email thread at [1].\n>\n> I could not find an appropriate spot to place these comments, so I placed them on bttextcmp() function, The only other place that I could see we can place these comments is in the file src/backend/optimizer/README, because there is some consideration given to leakproof functions in optimizer docs. But these comments seem quite out of place in optimizer docs.\n\nIt doesn't seem particularly likely that someone who is thinking about\nchanging this in the future would notice the comment in the place\nwhere you propose to put it, nor that they would read the optimizer\nREADME.\n\nFurthermore, I don't know that everyone agrees with Tom about this. I\ndo agree that it's more important to mark relational operators\nleakproof than other things, and I also agree that conservatism is\nwarranted. But that does not mean that someone could not make a\ncompelling argument for marking other functions leakproof.\n\nI think we will be better off leaving this alone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Jan 2022 15:26:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Code comments: why some text-handling functions are\n leakproof"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 11, 2022 at 2:07 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>> This is more or less a verbatim copy of Tom's comment in email thread at [1].\n>> \n>> I could not find an appropriate spot to place these comments, so I placed them on bttextcmp() function, The only other place that I could see we can place these comments is in the file src/backend/optimizer/README, because there is some consideration given to leakproof functions in optimizer docs. But these comments seem quite out of place in optimizer docs.\n\n> It doesn't seem particularly likely that someone who is thinking about\n> changing this in the future would notice the comment in the place\n> where you propose to put it, nor that they would read the optimizer\n> README.\n\nAgreed. I think if we wanted to make an upgrade in the way function\nleakproofness is documented, we ought to add a <sect1> about it in\nxfunc.sgml, adjacent to the one about function volatility categories.\nThis could perhaps consolidate some of the existing documentation mentions\nof leakproofness, as well as adding text similar to what Gurjeet suggests.\n\n> Furthermore, I don't know that everyone agrees with Tom about this. I\n> do agree that it's more important to mark relational operators\n> leakproof than other things, and I also agree that conservatism is\n> warranted. But that does not mean that someone could not make a\n> compelling argument for marking other functions leakproof.\n\nISTM the proposed text does a reasonable job of explaining why\nwe made the decisions currently embedded in pg_proc.proleakproof.\nIf we make some other decisions in future, updating the rationale\nin the docs would be an appropriate part of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Feb 2022 17:02:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Code comments: why some text-handling functions are\n leakproof"
},
{
"msg_contents": "I'm going to mark this returned with feedback.\n\nIf you have a chance to update the patch moving the documentation to\nxfunc.sgml the way Tom describes make sure to create a new commitfest\nentry. I would suggest submitting the patch as a followup on this\nthread so when it's added to the commitfest it links to this whole\ndiscussion.\n\n\nOn Mon, 28 Feb 2022 at 17:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Jan 11, 2022 at 2:07 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> >> This is more or less a verbatim copy of Tom's comment in email thread at [1].\n> >>\n> >> I could not find an appropriate spot to place these comments, so I placed them on bttextcmp() function, The only other place that I could see we can place these comments is in the file src/backend/optimizer/README, because there is some consideration given to leakproof functions in optimizer docs. But these comments seem quite out of place in optimizer docs.\n>\n> > It doesn't seem particularly likely that someone who is thinking about\n> > changing this in the future would notice the comment in the place\n> > where you propose to put it, nor that they would read the optimizer\n> > README.\n>\n> Agreed. I think if we wanted to make an upgrade in the way function\n> leakproofness is documented, we ought to add a <sect1> about it in\n> xfunc.sgml, adjacent to the one about function volatility categories.\n> This could perhaps consolidate some of the existing documentation mentions\n> of leakproofness, as well as adding text similar to what Gurjeet suggests.\n>\n> > Furthermore, I don't know that everyone agrees with Tom about this. I\n> > do agree that it's more important to mark relational operators\n> > leakproof than other things, and I also agree that conservatism is\n> > warranted. 
But that does not mean that someone could not make a\n> > compelling argument for marking other functions leakproof.\n>\n> ISTM the proposed text does a reasonable job of explaining why\n> we made the decisions currently embedded in pg_proc.proleakproof.\n> If we make some other decisions in future, updating the rationale\n> in the docs would be an appropriate part of that.\n>\n> regards, tom lane\n>\n>\n\n\n--\ngreg\n\n\n",
"msg_date": "Mon, 28 Mar 2022 14:55:01 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Code comments: why some text-handling functions are\n leakproof"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have a very basic question related to replication slots. Why should\nthe master/primary server maintain the replication slot info like lsn\ncorresponding to each standby server etc. Instead, why can't each\nstandby server send the lsn that it needs, and master/primary server\nmaintain the minimum lsn across all of the standby servers so that the\ninformation could be used for cleanup/removal of WAL segments?\n\nThe minimum lsn could as well be streamed to all of the standby\nservers while streaming the WAL records, so that the cleanup on the\nstandby server as well happens as per the minimum lsn. Also, even if\nthe primary server crashes, any standby server becoming the master is\nwell aware of the minimum lsn and the WAL records required for all of\nthe remaining standby servers are intact.\n\nThanks,\nRKN\n\n\n",
"msg_date": "Tue, 11 Jan 2022 16:48:59 +0530",
"msg_from": "RKN Sai Krishna <rknsaiforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Query regarding replication slots"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 04:48:59PM +0530, RKN Sai Krishna wrote:\n> Hi All,\n> \n> I have a very basic question related to replication slots. Why should\n> the master/primary server maintain the replication slot info like lsn\n> corresponding to each standby server etc. Instead, why can't each\n> standby server send the lsn that it needs, and master/primary server\n> maintain the minimum lsn across all of the standby servers so that the\n> information could be used for cleanup/removal of WAL segments?\n\nBecause the information is needed even if the standby servers are not\navailable, and if you only have a global xmin then you can't do much when\nremoving one of the slots.\n\n\n",
"msg_date": "Tue, 11 Jan 2022 19:37:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query regarding replication slots"
}
] |
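The trade-off discussed in this thread — per-slot restart LSNs on the primary versus a single global minimum — can be illustrated with a small standalone sketch (hypothetical code, not PostgreSQL's actual slot bookkeeping). Keeping each slot's LSN lets the primary recompute how much WAL is still needed after any one slot is dropped; a lone global minimum cannot answer that question, which is Julien's point above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: one restart LSN per replication slot, kept on the
 * primary. WAL older than the minimum across all live slots can be removed. */
#define MAX_SLOTS 8
#define INVALID_LSN UINT64_MAX

static uint64_t slot_lsn[MAX_SLOTS];

/* Oldest LSN any live slot still needs; INVALID_LSN if no slot exists. */
static uint64_t
min_required_lsn(void)
{
    uint64_t    min = INVALID_LSN;

    for (int i = 0; i < MAX_SLOTS; i++)
        if (slot_lsn[i] != INVALID_LSN && slot_lsn[i] < min)
            min = slot_lsn[i];
    return min;
}

static void
drop_slot(int i)
{
    /* dropping a slot frees the WAL only it was holding back */
    slot_lsn[i] = INVALID_LSN;
}
```

If the primary only stored `min_required_lsn()`'s result instead of the per-slot array, `drop_slot()` could not advance the cleanup horizon.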
[
{
"msg_contents": "Hi\n\nAttached a patch to improve the tab completion for foreigh table.\n\nAlso modified some DOC description of ALTER TABLE at [1] in according with CREATE INDEX at [2].\n\nIn [1], we use \"ALTER INDEX ATTACH PARTITION\"\nIn [2], we use \"ALTER INDEX ... ATTACH PARTITION\"\n\nI think the format in [2] is better.\n\n[1] https://www.postgresql.org/docs/devel/sql-altertable.html\n[2] https://www.postgresql.org/docs/devel/sql-createindex.html\n\nRegards,\nTang",
"msg_date": "Tue, 11 Jan 2022 12:43:21 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[PATCH]Add tab completion for foreigh table"
},
{
"msg_contents": "\n\nOn 2022/01/11 21:43, tanghy.fnst@fujitsu.com wrote:\n> Hi\n> \n> Attached a patch to improve the tab completion for foreigh table.\n\nThanks!\n\nIsn't it better to tab-complete not only \"PARTITION OF\" but also \"(\" for CREATE FOREIGN TABLE?\n\n\n> Also modified some DOC description of ALTER TABLE at [1] in according with CREATE INDEX at [2].\n> \n> In [1], we use \"ALTER INDEX ATTACH PARTITION\"\n> In [2], we use \"ALTER INDEX ... ATTACH PARTITION\"\n> \n> I think the format in [2] is better.\n\nAgreed.\n\nIMO it's better to make the docs changes in separate patch because they are not directly related to the improvement of tab-completion.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 13 Jan 2022 12:38:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH]Add tab completion for foreigh table"
},
{
"msg_contents": "On Thursday, January 13, 2022 12:38 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> Isn't it better to tab-complete not only \"PARTITION OF\" but also \"(\" for CREATE\r\n> FOREIGN TABLE?\r\n\r\nThanks for your review. Left bracket completion added.\r\n\r\n> IMO it's better to make the docs changes in separate patch because they are not\r\n> directly related to the improvement of tab-completion.\r\n\r\nAgreed. The former one patch was divided into two. \r\n0001 patch, added tab completion for foreign table.\r\n0002 patch, modified some doc description.\r\n\r\nRegards,\r\nTang",
"msg_date": "Thu, 13 Jan 2022 06:57:42 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH]Add tab completion for foreigh table"
}
] |
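The completion rule agreed on in this thread — after "CREATE FOREIGN TABLE <name>", offer both "PARTITION OF" and "(" — can be sketched as a tiny standalone rule-matcher (a hypothetical simplification; psql's real tab-complete.c machinery is considerably more general):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of one completion rule: given the three keywords
 * preceding the object name, return the candidate completions, or NULL
 * if this rule does not apply. */
static const char *const ft_completions[] = {"PARTITION OF", "(", NULL};

static const char *const *
complete_foreign_table(const char *w1, const char *w2, const char *w3)
{
    if (strcmp(w1, "CREATE") == 0 &&
        strcmp(w2, "FOREIGN") == 0 &&
        strcmp(w3, "TABLE") == 0)
        return ft_completions;
    return NULL;
}
```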
[
{
"msg_contents": "I have created a patch to enable the Boyer-More-Horspool search\nalgorithm (B-M-H) for LIKE queries.\n\nB-M-H needs to initialize the skip table and keep it during SQL execution.\nIn this patch, flinfo->fn_extra is used to keep the skip table.\n\nThe conditions under which B-M-H can be used are as follows.\n\n(1) B-M-H in LIKE search supports only single-byte character sets and UTF8.\nMultibyte character sets does not support, because it may contain another\ncharacters in the byte sequence. For UTF-8, it works fine, because in\nUTF-8 the byte sequence of one character cannot contain another character.\n\n(2) The pattern string should be stable parameter, because B-M-H needs to\nkeep\nthe skip table generated from the pattern string during the execution of\nthe query.\n\n(3) The pattern string should be at least 4 characters.\nFor example, '%AB%' can use B-M-H.\n\n(4) The first and last character of the pattern string should be '%'.\n\n(5) Characters other than the first and last of the pattern string\nshould not be '%', '_'. However, escaped characters such as\n'\\%', '\\_' are available.\n\nAlso, this patch changes the collation validity check in functions\n(such as textlike) to be performed at the first execution of the query,\ninstead of each function execution.\nI have measured the performance with the following query.\n\n---------- ---------- ---------- ---------- ---------- ---------- ----------\nSET client_min_messages TO notice;\n\n\\timing\n\nDO $$\nDECLARE\n cnt integer := 0;\n total integer := 0;\nBEGIN\n FOR i IN 1..500 LOOP\n select count(*) into cnt\n from pg_catalog.pg_description d\n where d.description like '%greater%';\n\n total := total + cnt;\n END LOOP;\n\n RAISE NOTICE 'TOTAL: %', total;\nEND\n$$\n;\n---------- ---------- ---------- ---------- ---------- ---------- ----------\n\nResult\nWithout patch: 257.504ms\nWith patch: 191.638ms\n\nRegards,\nAtsushi Ogawa",
"msg_date": "Tue, 11 Jan 2022 22:55:16 +0900",
"msg_from": "Atsushi Ogawa <atsushi.ogawa@gmail.com>",
"msg_from_op": true,
"msg_subject": "Boyer-More-Horspool searching LIKE queries"
},
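The Horspool variant discussed in this message builds a 256-entry bad-character skip table from the literal part of the pattern and compares right-to-left at each alignment. A minimal standalone sketch of the algorithm (single-byte text assumed; this is an illustration, not the patch's actual code, which caches the table in flinfo->fn_extra):

```c
#include <assert.h>
#include <string.h>

/* Build the Boyer-Moore-Horspool bad-character skip table:
 * skip[c] = how far the window may shift when c is the last byte
 * of the current alignment. */
static void
bmh_init(const unsigned char *pat, int m, int skip[256])
{
    for (int i = 0; i < 256; i++)
        skip[i] = m;
    for (int i = 0; i < m - 1; i++)
        skip[pat[i]] = m - 1 - i;
}

/* Return the offset of the first occurrence of pat in text, or -1. */
static int
bmh_search(const unsigned char *text, int n,
           const unsigned char *pat, int m, const int skip[256])
{
    int         i = m - 1;      /* index of the window's last byte */

    while (i < n)
    {
        int         t = i,
                    p = m - 1;

        /* compare right-to-left */
        while (p >= 0 && text[t] == pat[p])
        {
            t--;
            p--;
        }
        if (p < 0)
            return t + 1;       /* full match */
        i += skip[text[i]];     /* shift based on last aligned byte */
    }
    return -1;
}
```

On every mismatch the window shifts by `skip[text[i]]`, which is what makes long literal needles cheap compared with the byte-at-a-time recursion in MatchText().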
{
"msg_contents": "On 11/01/2022 15:55, Atsushi Ogawa wrote:\n> I have created a patch to enable the Boyer-More-Horspool search\n> algorithm (B-M-H) for LIKE queries.\n\nCool!\n\n> The conditions under which B-M-H can be used are as follows.\n> \n> (1) B-M-H in LIKE search supports only single-byte character sets and UTF8.\n> Multibyte character sets does not support, because it may contain another\n> characters in the byte sequence. For UTF-8, it works fine, because in\n> UTF-8 the byte sequence of one character cannot contain another character.\n\nYou can make it work with any encoding, if you check for that case after \nyou find a match. See how text_position() does it.\n\n> (3) The pattern string should be at least 4 characters.\n> For example, '%AB%' can use B-M-H.\n\nTo be precise, the patch checks that the pattern string is at least 4 \n*bytes* long. A pattern like E'%\\U0001F418%' would benefit too.\n\nIf I'm reading the code correctly, it doesn't account for escapes \ncorrectly. It will use B-M-H for a pattern like '%\\\\%', even though \nthat's just searching for a single backslash and won't benefit from B-M-H.\n\n> (4) The first and last character of the pattern string should be '%'.\n\nI wonder if we can do better than that. If you have a pattern like \n'%foo%bar', its pretty obvious (to a human) that you can quickly check \nif the string ends in 'bar', and then check if it also contains the \nsubstring 'foo'. 
Is there some way to generalize that?\n\nLooking at MatchText() in like.c, there is this piece of code:\n\n> \t\telse if (*p == '%')\n> \t\t{\n> \t\t\tchar\t\tfirstpat;\n> \n> \t\t\t/*\n> \t\t\t * % processing is essentially a search for a text position at\n> \t\t\t * which the remainder of the text matches the remainder of the\n> \t\t\t * pattern, using a recursive call to check each potential match.\n> \t\t\t *\n> \t\t\t * If there are wildcards immediately following the %, we can skip\n> \t\t\t * over them first, using the idea that any sequence of N _'s and\n> \t\t\t * one or more %'s is equivalent to N _'s and one % (ie, it will\n> \t\t\t * match any sequence of at least N text characters). In this way\n> \t\t\t * we will always run the recursive search loop using a pattern\n> \t\t\t * fragment that begins with a literal character-to-match, thereby\n> \t\t\t * not recursing more than we have to.\n> \t\t\t */\n> \t\t\tNextByte(p, plen);\n> \n> \t\t\twhile (plen > 0)\n> \t\t\t{\n> \t\t\t\tif (*p == '%')\n> \t\t\t\t\tNextByte(p, plen);\n> \t\t\t\telse if (*p == '_')\n> \t\t\t\t{\n> \t\t\t\t\t/* If not enough text left to match the pattern, ABORT */\n> \t\t\t\t\tif (tlen <= 0)\n> \t\t\t\t\t\treturn LIKE_ABORT;\n> \t\t\t\t\tNextChar(t, tlen);\n> \t\t\t\t\tNextByte(p, plen);\n> \t\t\t\t}\n> \t\t\t\telse\n> \t\t\t\t\tbreak;\t\t/* Reached a non-wildcard pattern char */\n> \t\t\t}\n\nCould we use B-M-H to replace that piece of code?\n\nHow does the performance compare with regular expressions? Would it be \npossible to use this for trivial regular expressions too? Or could we \nspeed up the regexp engine to take advantage of B-M-H, and use it for \nLIKE? Or something like that?\n\n> I have measured the performance with the following query.\n\nSetting up the B-M-H table adds some initialization overhead, so this \nwould be a loss for cases where the LIKE is executed only once, and/or \nthe haystack strings are very small. 
That's probably OK, the overhead is \nprobably small, and those cases are probably not performance-critical. \nBut would be nice to measure that too.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 11 Jan 2022 22:17:18 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Boyer-More-Horspool searching LIKE queries"
},
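Heikki's '%foo%bar' idea above — check the fixed suffix first, then search for the inner literal only in the remaining prefix — can be sketched as follows (a hypothetical illustration of the decomposition, not code from the patch; the inner search is shown with a plain scan where the patch would use B-M-H):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Does needle occur within the first n bytes of s? (stand-in for B-M-H) */
static bool
contains_within(const char *s, size_t n, const char *needle)
{
    size_t      m = strlen(needle);

    if (m > n)
        return false;
    for (size_t i = 0; i + m <= n; i++)
        if (memcmp(s + i, needle, m) == 0)
            return true;
    return false;
}

/* Match a pattern of the shape '%mid%suffix' (no other wildcards):
 * the text must end with suffix, and mid must occur entirely before
 * that suffix (the middle '%' may match the empty string). */
static bool
match_mid_suffix(const char *text, const char *mid, const char *suffix)
{
    size_t      tlen = strlen(text),
                slen = strlen(suffix);

    if (tlen < slen || memcmp(text + tlen - slen, suffix, slen) != 0)
        return false;
    return contains_within(text, tlen - slen, mid);
}
```

The 'foo%bar%' and 'foo%bar%baz' cases from the follow-up reply decompose the same way: anchor the literal prefix and/or suffix with direct comparisons and run the substring search only on what remains.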
{
"msg_contents": "Thanks for the comments.\n\n> > The conditions under which B-M-H can be used are as follows.\n> >\n> > (1) B-M-H in LIKE search supports only single-byte character sets and\nUTF8.\n> > Multibyte character sets does not support, because it may contain\nanother\n> > characters in the byte sequence. For UTF-8, it works fine, because in\n> > UTF-8 the byte sequence of one character cannot contain another\ncharacter.\n>\n> You can make it work with any encoding, if you check for that case after\n> you find a match. See how text_position() does it.\n\nI saw the text_position(). I would like to do the same.\n\n> > (3) The pattern string should be at least 4 characters.\n> > For example, '%AB%' can use B-M-H.\n>\n> To be precise, the patch checks that the pattern string is at least 4\n> *bytes* long. A pattern like E'%\\U0001F418%' would benefit too.\n\n*bytes* is precise. I will revise the comment of code.\n\n> If I'm reading the code correctly, it doesn't account for escapes\n> correctly. It will use B-M-H for a pattern like '%\\\\%', even though\n> that's just searching for a single backslash and won't benefit from B-M-H.\n\nYou are correct. I will fix it.\n\n> > (4) The first and last character of the pattern string should be '%'.\n>\n> I wonder if we can do better than that. If you have a pattern like\n> '%foo%bar', its pretty obvious (to a human) that you can quickly check\n> if the string ends in 'bar', and then check if it also contains the\n> substring 'foo'. 
Is there some way to generalize that?\n\nI think the following optimizations are possible.\n\n(1)%foo%bar\n Check if the string ends with 'bar' and search for 'foo' by B-M-H.\n\n(2)foo%bar%\n Check if the string starts with 'foo' and search for 'bar' by B-M-H.\n\n(3)foo%bar%baz\n Check if the string starts with 'foo' and string ends with 'baz' and\n search for 'bar' by B-M-H.\n\n> Looking at MatchText() in like.c, there is this piece of code:\n>\n> > else if (*p == '%')\n> > {\n> > char firstpat;\n> >\n> > /*\n> > * % processing is essentially a search for a\ntext position at\n> > * which the remainder of the text matches the\nremainder of the\n> > * pattern, using a recursive call to check each\npotential match.\n> > *\n> > * If there are wildcards immediately following\nthe %, we can skip\n> > * over them first, using the idea that any\nsequence of N _'s and\n> > * one or more %'s is equivalent to N _'s and one\n% (ie, it will\n> > * match any sequence of at least N text\ncharacters). In this way\n> > * we will always run the recursive search loop\nusing a pattern\n> > * fragment that begins with a literal\ncharacter-to-match, thereby\n> > * not recursing more than we have to.\n> > */\n> > NextByte(p, plen);\n> >\n> > while (plen > 0)\n> > {\n> > if (*p == '%')\n> > NextByte(p, plen);\n> > else if (*p == '_')\n> > {\n> > /* If not enough text left to\nmatch the pattern, ABORT */\n> > if (tlen <= 0)\n> > return LIKE_ABORT;\n> > NextChar(t, tlen);\n> > NextByte(p, plen);\n> > }\n> > else\n> > break; /* Reached a\nnon-wildcard pattern char */\n> > }\n>\n> Could we use B-M-H to replace that piece of code?\n\nFor example, in a pattern such as %foo%bar%, it is possible to first search\nfor 'foo' by B-M-H, and then search for 'bar' by B-M-H. It would be nice if\nsuch a\nprocess could be generalized to handle various LIKE search patterns.\n\n> How does the performance compare with regular expressions? 
Would it be\n> possible to use this for trivial regular expressions too? Or could we\n> speed up the regexp engine to take advantage of B-M-H, and use it for\n> LIKE? Or something like that?\n\nI think regular expressions in postgresql is slower than LIKE.\nI compared it with the following two SQLs.\n\n(1)LIKE: execution time is about 0.8msec\nselect count(*) from pg_catalog.pg_description d where d.description like\n'%greater%';\n\n(2)regular expression: execution time is about 3.1 msec\nselect count(*) from pg_catalog.pg_description d where d.description ~\n'greater';\n\nFor trivial regular expressions, it may be better to use LIKE.\n\n> > I have measured the performance with the following query.\n>\n> Setting up the B-M-H table adds some initialization overhead, so this\n> would be a loss for cases where the LIKE is executed only once, and/or\n> the haystack strings are very small. That's probably OK, the overhead is\n> probably small, and those cases are probably not performance-critical.\n> But would be nice to measure that too.\n\nI tried to measure the case where LIKE is executed only once and\nthe haystack string are very small.\n\n---------------------------------------------------------------\nSET client_min_messages TO notice;\n\n\\timing\n\nDO $$\nDECLARE\n cnt integer := 0;\n total integer := 0;\nBEGIN\n FOR i IN 1..10000 LOOP\n select count(*) into cnt from pg_class where oid = 2662 and relname\nlike '%cl%';\n total := total + cnt;\n END LOOP;\n\n RAISE NOTICE 'TOTAL: %', total;\nEND\n$$\n;\n---------------------------------------------------------------\n\nwithout patch: 74.499msec\nwith patch 77.321msec\n\nIn this case, the patched version will be a few percent slower, but I think\nthe overhead is small.\n\nRegards,\nAtsushi Ogawa\n\n\n\n2022年1月12日(水) 5:17 Heikki Linnakangas <hlinnaka@iki.fi>:\n\n> On 11/01/2022 15:55, Atsushi Ogawa wrote:\n> > I have created a patch to enable the Boyer-More-Horspool search\n> > algorithm (B-M-H) for LIKE 
queries.\n>\n> Cool!\n>\n> > The conditions under which B-M-H can be used are as follows.\n> >\n> > (1) B-M-H in LIKE search supports only single-byte character sets and\n> UTF8.\n> > Multibyte character sets does not support, because it may contain another\n> > characters in the byte sequence. For UTF-8, it works fine, because in\n> > UTF-8 the byte sequence of one character cannot contain another\n> character.\n>\n> You can make it work with any encoding, if you check for that case after\n> you find a match. See how text_position() does it.\n>\n> > (3) The pattern string should be at least 4 characters.\n> > For example, '%AB%' can use B-M-H.\n>\n> To be precise, the patch checks that the pattern string is at least 4\n> *bytes* long. A pattern like E'%\\U0001F418%' would benefit too.\n>\n> If I'm reading the code correctly, it doesn't account for escapes\n> correctly. It will use B-M-H for a pattern like '%\\\\%', even though\n> that's just searching for a single backslash and won't benefit from B-M-H.\n>\n> > (4) The first and last character of the pattern string should be '%'.\n>\n> I wonder if we can do better than that. If you have a pattern like\n> '%foo%bar', its pretty obvious (to a human) that you can quickly check\n> if the string ends in 'bar', and then check if it also contains the\n> substring 'foo'. 
Is there some way to generalize that?\n>\n> Looking at MatchText() in like.c, there is this piece of code:\n>\n> > else if (*p == '%')\n> > {\n> > char firstpat;\n> >\n> > /*\n> > * % processing is essentially a search for a text\n> position at\n> > * which the remainder of the text matches the\n> remainder of the\n> > * pattern, using a recursive call to check each\n> potential match.\n> > *\n> > * If there are wildcards immediately following\n> the %, we can skip\n> > * over them first, using the idea that any\n> sequence of N _'s and\n> > * one or more %'s is equivalent to N _'s and one\n> % (ie, it will\n> > * match any sequence of at least N text\n> characters). In this way\n> > * we will always run the recursive search loop\n> using a pattern\n> > * fragment that begins with a literal\n> character-to-match, thereby\n> > * not recursing more than we have to.\n> > */\n> > NextByte(p, plen);\n> >\n> > while (plen > 0)\n> > {\n> > if (*p == '%')\n> > NextByte(p, plen);\n> > else if (*p == '_')\n> > {\n> > /* If not enough text left to\n> match the pattern, ABORT */\n> > if (tlen <= 0)\n> > return LIKE_ABORT;\n> > NextChar(t, tlen);\n> > NextByte(p, plen);\n> > }\n> > else\n> > break; /* Reached a\n> non-wildcard pattern char */\n> > }\n>\n> Could we use B-M-H to replace that piece of code?\n>\n> How does the performance compare with regular expressions? Would it be\n> possible to use this for trivial regular expressions too? Or could we\n> speed up the regexp engine to take advantage of B-M-H, and use it for\n> LIKE? Or something like that?\n>\n> > I have measured the performance with the following query.\n>\n> Setting up the B-M-H table adds some initialization overhead, so this\n> would be a loss for cases where the LIKE is executed only once, and/or\n> the haystack strings are very small. 
That's probably OK, the overhead is\n> probably small, and those cases are probably not performance-critical.\n> But would be nice to measure that too.\n>\n> - Heikki\n>",
"msg_date": "Fri, 14 Jan 2022 23:40:44 +0900",
"msg_from": "Atsushi Ogawa <atsushi.ogawa@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Boyer-More-Horspool searching LIKE queries"
}
] |
[
{
"msg_contents": "I'm working on a table access method that stores indexes in a structure\nthat looks like an LSM tree. Changes get written to small segment files,\nwhich then get merged into larger segment files.\n\nIt's really tough to manage these files using existing fork/buffer/page\nfiles, because when you delete a large segment it leaves a lot of empty\nspace. It's a lot easier to write the segments into separate files on disk\nand then delete them as needed.\n\nI could do that, but then I lose the advantages of having data in native\nPostgres files, including support for buffering and locking.\n\nIt's important to have the segments stored contiguously on disk. I've\nbenchmarked it; it makes a huge performance difference.\n\nQuestions:\n\n1. Are there any other disadvantages to storing data in my own files on\ndisk, instead of in files managed by Postgres?\n\n2. Is it possible to increase the number of forks? I could store each level\nof the LSM tree in its own fork very efficiently. Forks could get truncated\nas needed. A dozen forks would handle it nicely.\n\nI'm working on a table access method that stores indexes in a structure that looks like an LSM tree. Changes get written to small segment files, which then get merged into larger segment files.It's really tough to manage these files using existing fork/buffer/page files, because when you delete a large segment it leaves a lot of empty space. It's a lot easier to write the segments into separate files on disk and then delete them as needed.I could do that, but then I lose the advantages of having data in native Postgres files, including support for buffering and locking.It's important to have the segments stored contiguously on disk. I've benchmarked it; it makes a huge performance difference.Questions:1. Are there any other disadvantages to storing data in my own files on disk, instead of in files managed by Postgres?2. Is it possible to increase the number of forks? 
I could store each level of the LSM tree in its own fork very efficiently. Forks could get truncated as needed. A dozen forks would handle it nicely.",
"msg_date": "Tue, 11 Jan 2022 12:39:06 -0600",
"msg_from": "Chris Cleveland <ccleve+github@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "More data files / forks"
},
{
"msg_contents": "\nOn 1/11/22 19:39, Chris Cleveland wrote:\n> I'm working on a table access method that stores indexes in a structure\n> that looks like an LSM tree. Changes get written to small segment files,\n> which then get merged into larger segment files.\n> \n> It's really tough to manage these files using existing fork/buffer/page\n> files, because when you delete a large segment it leaves a lot of empty\n> space. It's a lot easier to write the segments into separate files on\n> disk and then delete them as needed.\n> \n\nAnd is that empty space actually a problem? You can reuse that for new\ndata, no? It's a bit like empty space in regular data files - we could\ntry keeping it much lower, but it'd be harmful in practice.\n\n> I could do that, but then I lose the advantages of having data in native\n> Postgres files, including support for buffering and locking.\n> \n> It's important to have the segments stored contiguously on disk. I've\n> benchmarked it; it makes a huge performance difference.\n> \n\nYeah, I'm sure it's beneficial for sequential scans, readahead, etc. But\nyou can get most of that benefit by smart allocation strategy - instead\nof working with individual pages, allocate larger chunks of pages. So\ninstead of grabbing pages one by one, \"reserve\" them in e.g. 1MB chunks,\nor something.\n\nNot sure how exactly you do the book-keeping, ofc. I wonder if BRIN\nmight serve as an inspiration, as it maintains revmap and actual index\ntuples in the same fork. Not the same thing, but perhaps similar?\n\nThe other thing that comes to mind is logtape.c, which works with\nmultiple \"logical tapes\" stored in a single file - a bit like the\nsegments you're talking about. But maybe the assumptions about segments\nbeing written/read exactly once is too limiting for your use case.\n\n> Questions:\n> \n> 1. 
Are there any other disadvantages to storing data in my own files on\n> disk, instead of in files managed by Postgres?\n> \n\nWell, you simply don't get many of the built-in benefits you mentioned,\nvarious tools may not expect that, and so on.\n\n> 2. Is it possible to increase the number of forks? I could store each\n> level of the LSM tree in its own fork very efficiently. Forks could get\n> truncated as needed. A dozen forks would handle it nicely.\n> \n\nYou're right the number of forks is fixed, and it's one of the places\nthat's not extensible. I don't recall any proposals to change that,\nthough, and even if we decided to do that, I doubt we'd allow the number\nof forks to be entirely dynamic.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 12 Jan 2022 01:28:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: More data files / forks"
}
] |
[
{
        "msg_contents": "Hi,\n\nThe January commitfest should have started almost two weeks ago, but given that\nnothing happened until now I think that it's safe to assume that either\neveryone forgot or no one wanted to volunteer.\n\nI'm therefore volunteering to manage this commitfest, although since it's\nalready quite late it's probably going to be a bit chaotic and a best effort,\nbut it's better than nothing.\n\nAs of today, there's a total of 292 patches for this commitfest and 240 still\nactive patches, 15 of them being there since 10 or more commitfests.\n\nStatus summary:\n\n- Needs review: 190.\n- Waiting on Author: 23.\n- Ready for Committer: 27.\n- Committed: 43.\n- Returned with Feedback: 1.\n- Withdrawn: 7.\n- Rejected: 1.\n\nNote that I don't have admin permissions on the cf app, so I'd be glad if\nsomeone could grant it!\n\n\n",
"msg_date": "Wed, 12 Jan 2022 13:41:42 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "2022-01 Commitfest"
},
{
        "msg_contents": "On Wed, Jan 12, 2022 at 11:11 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> The January commitfest should have started almost two weeks ago, but given that\n> nothing happened until now I think that it's safe to assume that either\n> everyone forgot or no one wanted to volunteer.\n>\n> I'm therefore volunteering to manage this commitfest,\n>\n\nThanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 12 Jan 2022 15:44:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "On Wed, Jan 12, 2022 at 6:42 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Note that I don't have admin permissions on the cf app, so I'd be glad if\n> someone could grant it!\n\nGranted!\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 12 Jan 2022 16:16:36 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "On Wed, Jan 12, 2022 at 04:16:36PM +0100, Magnus Hagander wrote:\n> On Wed, Jan 12, 2022 at 6:42 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Note that I don't have admin permissions on the cf app, so I'd be glad if\n> > someone could grant it!\n> \n> Granted!\n\nThanks Magnus!\n\n\n",
"msg_date": "Wed, 12 Jan 2022 23:21:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 01:41:42PM +0800, Julien Rouhaud wrote:\n> The January commitfest should have started almost two weeks ago, but given that\n> nothing happened until now I think that it's safe to assume that either\n> everyone forgot or no one wanted to volunteer.\n\nThanks, Julien!\n--\nMichael",
"msg_date": "Thu, 13 Jan 2022 16:30:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "> On Wed, Jan 12, 2022 at 01:41:42PM +0800, Julien Rouhaud wrote:\n> Hi,\n>\n> The January commitfest should have started almost two weeks ago, but given that\n> nothing happened until now I think that it's safe to assume that either\n> everyone forgot or no one wanted to volunteer.\n>\n> I'm therefore volunteering to manage this commitfest, although since it's\n> already quite late it's probably going to be a bit chaotic and a best effort,\n> but it's better than nothing.\n\nMuch appreciated, thanks!\n\n\n",
"msg_date": "Thu, 13 Jan 2022 16:04:31 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "Hi,\n\nThis is the beginning of the 3rd week of this commit fest.\n\nSince my last email 5 days ago, 6 patches were committed and a few patches\nclosed. There are still overall 229 active patches, most of them\nunsurprisingly waiting for review.\n\nThe cfbot is doing a great job at early problem detection, including on less\ncommon platforms. I'd like to remind all hackers that the latest branch now\nhas everything included to easily test your own patchset on a private github\nrepository the same way that the cfbot will. That's a 5-minute configuration,\nyou will find all the details at\nhttps://github.com/postgres/postgres/blob/master/src/tools/ci/README. Thanks\nagain to everyone involved in that feature, I'm personally a big fan already!\n\n\nStatus summary:\n\n- Needs review: 157.\n- Waiting on Author: 47.\n- Ready for Committer: 25.\n- Committed: 49.\n- Returned with Feedback: 2.\n- Withdrawn: 8.\n- Rejected: 4.\n\n\n",
"msg_date": "Mon, 17 Jan 2022 23:38:43 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "Hi,\n\nThis is the 4th week of this commitfest.\n\nSince last week, 5 entries were committed. There are still overall 223 active\npatches, the vast majority needing review. If you signed up to review patches,\nyou still have a whole week to help patches make progress and get\ncommitted!\n\nStatus summary:\n- Needs review: 142.\n- Waiting on Author: 57.\n- Ready for Committer: 24.\n- Committed: 54.\n- Moved to next CF: 1.\n- Returned with Feedback: 2.\n- Rejected: 4.\n- Withdrawn: 8.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 11:14:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "Hi,\n\nIt's now at least Feb. 1st anywhere on earth, so the commit fest is now over.\n\nSince last week 5 entries were committed, 1 withdrawn, 3 returned with\nfeedback, 2 already moved to the next commitfest and 1 rejected.\n\nThis gives a total of 211 patches still alive, most of them ready for the next\nand final pg15 commitfest.\n\nStatus summary:\n- Needs review: 147.\n- Waiting on Author: 38.\n- Ready for Committer: 26.\n- Committed: 59.\n- Moved to next CF: 3.\n- Returned with Feedback: 5.\n- Rejected: 5.\n- Withdrawn: 9.\n- Total: 292.\n\nI will take care of closing the current commit fest and moving the entries to\nthe next one shortly.\n\n\n",
"msg_date": "Thu, 3 Feb 2022 00:15:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "I gave two reviews and received one review but the patches have been\n\"Moved to next CF\". Should I update them to \"Returned with Feedback\"\ngiven they all did get feedback? I was under the impression \"Moved to\nnext CF\" was only for patches that didn't get feedback in a CF and\nwere still waiting for feedback.\n\nOn Wed, 2 Feb 2022 at 11:16, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> It's now at least Feb. 1st anywhere on earth, so the commit fest is now over.\n>\n> Since last week 5 entries were committed, 1 withdrawn, 3 returned with\n> feedback, 2 already moved to the next commitfest and 1 rejected.\n>\n> This gives a total of 211 patches still alive, most of them ready for the next\n> and final pg15 commitfest.\n>\n> Status summary:\n> - Needs review: 147.\n> - Waiting on Author: 38.\n> - Ready for Committer: 26.\n> - Committed: 59.\n> - Moved to next CF: 3.\n> - Returned with Feedback: 5.\n> - Rejected: 5.\n> - Withdrawn: 9.\n> - Total: 292.\n>\n> I will take care of closing the current commit fest and moving the entries to\n> the next one shortly.\n>\n>\n\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 2 Feb 2022 12:09:06 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "Hi,\n\nOn Wed, Feb 02, 2022 at 12:09:06PM -0500, Greg Stark wrote:\n> I gave two reviews and received one review but the patches have been\n> \"Moved to next CF\".\n\nFor now I only moved to the next commit fest the patches that were in \"Needs\nReview\" or \"Ready for Committer\". I'm assuming that you failed to update the\ncf entry accordingly after your reviews, so yeah the patches were moved.\n\nI unfortunately don't have a lot of time right now and the commit fest still\nneeds to be closed, so I prefer to use my time triaging the patches that were\nmarked as Waiting on Author rather than going through a couple hundred\nthreads yet another time.\n\n> Should I update them to \"Returned with Feedback\"\n> given they all did get feedback? I was under the impression \"Moved to\n> next CF\" was only for patches that didn't get feedback in a CF and\n> were still waiting for feedback.\n\nMy understanding of \"Returned with Feedback\" is that the patch implements\nsomething wanted, but as proposed won't be accepted without a major redesign or\nsomething like that. Not patches that are going through normal \"review /\naddressing reviews\" cycles. And definitely not bug fixes either.\n\nIf we close all patches that had a review just because they weren't perfect in\ntheir initial submission, we're just going to force everyone to re-register\ntheir patch for every single commit fest. I don't see that doing anything\napart from making sure that everyone stops contributing.\n\n\n",
"msg_date": "Thu, 3 Feb 2022 01:28:53 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "On Thu, Feb 03, 2022 at 01:28:53AM +0800, Julien Rouhaud wrote:\n> \n> My understanding of \"Returned with Feedback\" is that the patch implements\n> something wanted, but as proposed won't be accepted without a major redesign or\n> something like that. Not patches that are going through normal \"review /\n> addressing reviews\" cycles. And definitely not bug fixes either.\n> \n> If we close all patches that had a review just because they weren't perfect in\n> their initial submission, we're just going to force everyone to re-register\n> their patch for every single commit fest. I don't see that doing anything\n> apart from making sure that everyone stops contributing.\n> \n\nI had the same problem last time, \"Returned with feedback\" didn't feel\nright in some cases.\n\nAfter reading this I started to wish there was some kind of guide about\nthis, and of course the wiki has that guide (outdated, yes, but something\nto start with).\n\nhttps://wiki.postgresql.org/wiki/CommitFest_Checklist#Sudden_Death_Overtime\n\nThis needs some love, it still mentions rrreviewers for example, but if we\nupdated it and put a clear definition of the states there, maybe it could\nhelp with CF management.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 2 Feb 2022 12:45:40 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "On Wed, Feb 02, 2022 at 12:45:40PM -0500, Jaime Casanova wrote:\n> On Thu, Feb 03, 2022 at 01:28:53AM +0800, Julien Rouhaud wrote:\n> > \n> > My understanding of \"Returned with Feedback\" is that the patch implements\n> > something wanted, but as proposed won't be accepted without a major redesign or\n> > something like that. Not patches that are going through normal \"review /\n> > addressing reviews\" cycles. And definitely not bug fixes either.\n> > \n> > If we close all patches that had a review just because they weren't perfect in\n> > their initial submission, we're just going to force everyone to re-register\n> > their patch for every single commit fest. I don't see that doing anything\n> > apart from making sure that everyone stops contributing.\n> > \n> \n> I had the same problem last time, \"Returned with feedback\" didn't feel\n> fine in some cases.\n> \n> After reading this i started to wish there was some kind of guide about\n> this, and of course the wiki has that guide (outdated yes but something\n> to start with).\n> \n> https://wiki.postgresql.org/wiki/CommitFest_Checklist#Sudden_Death_Overtime\n>\n> This needs some love, still mentions rrreviewers for example\n\nYes, I looked at it but to be honest it doesn't make any sense.\n\nIt feels like this is punishing patches that get reviewed at the end of the\ncommitfest or that previously got an incorrect review, and somehow tries to\nsalvage patches from authors that don't review anything.\n\n> but if we\n> updated and put here a clear definition of the states maybe it could\n> help to do CF managment.\n\nI'm all for it, but looking at the current commit fest focusing on unresponsive\nauthors (e.g. close one way or another patches that have been waiting on author\nfor more than X days) should already help quite a lot.\n\n\n",
"msg_date": "Thu, 3 Feb 2022 01:56:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> On Thu, Feb 03, 2022 at 01:28:53AM +0800, Julien Rouhaud wrote:\n>> If we close all patches that had a review just because they weren't perfect in\n>> their initial submission, we're just going to force everyone to re-register\n>> their patch for every single commit fest. I don't see that doing anything\n>> apart from making sure that everyone stops contributing.\n\n> I had the same problem last time, \"Returned with feedback\" didn't feel\n> fine in some cases.\n\nAgreed, we're not here to cause make-work for submitters. RWF is\nappropriate if the patch has been in Waiting On Author for awhile\nand doesn't seem to be going anywhere, but otherwise we should\njust punt it to the next CF.\n\nAnyway, thanks to Julien for doing this mostly-thankless task\nthis time!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Feb 2022 13:00:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "On Wed, Feb 02, 2022 at 01:00:18PM -0500, Tom Lane wrote:\n> \n> Anyway, thanks to Julien for doing this mostly-thankless task\n> this time!\n> \n\nAgreed, great work!\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 2 Feb 2022 13:10:39 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "On Wed, Feb 02, 2022 at 01:10:39PM -0500, Jaime Casanova wrote:\n> On Wed, Feb 02, 2022 at 01:00:18PM -0500, Tom Lane wrote:\n> > \n> > Anyway, thanks to Julien for doing this mostly-thankless task\n> > this time!\n> > \n> \n> Agreed, great work!\n\nThanks a lot :)\n\n\n",
"msg_date": "Thu, 3 Feb 2022 12:58:53 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "On Wed, Feb 02, 2022 at 01:00:18PM -0500, Tom Lane wrote:\n> Agreed, we're not here to cause make-work for submitters. RWF is\n> appropriate if the patch has been in Waiting On Author for awhile\n> and doesn't seem to be going anywhere, but otherwise we should\n> just punt it to the next CF.\n\nFWIW, I just apply a two-week rule here, i.e. half the commit fest\nperiod, to give people time to react:\n- If a patch has been waiting on author since the 15th of January,\nmark it as RwF.\n- If it has been left as waiting on author after the 15th of January,\nmove it to the next CF.\n\n> Anyway, thanks to Julien for doing this mostly-thankless task\n> this time!\n\n+1.\n--\nMichael",
"msg_date": "Sun, 6 Feb 2022 15:49:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
        "msg_contents": "Hi,\n\nOn Sun, Feb 06, 2022 at 03:49:50PM +0900, Michael Paquier wrote:\n> On Wed, Feb 02, 2022 at 01:00:18PM -0500, Tom Lane wrote:\n> > Agreed, we're not here to cause make-work for submitters. RWF is\n> > appropriate if the patch has been in Waiting On Author for awhile\n> > and doesn't seem to be going anywhere, but otherwise we should\n> > just punt it to the next CF.\n> \n> FWIW, I just apply a two-week rule here, i.e. half the commit fest\n> period, to give people time to react:\n> - If a patch has been waiting on author since the 15th of January,\n> mark it as RwF.\n> - If it has been left as waiting on author after the 15th of January,\n> move it to the next CF.\n\nThanks. Note that I was planning to do that on Monday, as it didn't seem\nrushed enough to spend time on it during the weekend.\n\n\n",
"msg_date": "Sun, 6 Feb 2022 14:57:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
},
{
"msg_contents": "On Sun, Feb 06, 2022 at 02:57:45PM +0800, Julien Rouhaud wrote:\n> \n> On Sun, Feb 06, 2022 at 03:49:50PM +0900, Michael Paquier wrote:\n> > \n> > FWIW, I just apply a two-week rule here, as of half the commit fest\n> > period to let people the time to react:\n> > - If a patch has been waiting on author since the 15th of January,\n> > mark it as RwF.\n> > - If it has been left as waiting on author after the 15th of January,\n> > move it to the next CF.\n> \n> Thanks. Note that I was planning to do that on Monday, as it didn't seemed\n> rushed enough to spend time on it during the weekend.\n\nAnd that's now done. I also sent an email to warn the authors of those\npatches and closed the 2022-01 commit fest.\n\n\n",
"msg_date": "Mon, 7 Feb 2022 14:14:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 2022-01 Commitfest"
}
] |
[
{
        "msg_contents": "Hi, hackers!\n\nI've noticed that on my branch with amcheck improvements cfbot on windows\nserver 2019 fails the stream replication test.\nhttps://cirrus-ci.com/task/5353503093686272\n\nI don't see any relation of it to the changes in my patch. Furthermore it\nalso fails on the other CF branch\nhttps://cirrus-ci.com/task/4599128897355776\n\nIs it a known cfbot problem? Do I need to do something to my amcheck CF\nbranch mentioned above for it to become green on cfbot eventually?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 12 Jan 2022 14:45:06 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Stream replication test fails of cfbot/windows server 2019"
},
{
"msg_contents": "Hello.\n\nLooks like logical replication also affected:\n\n[08:26:35.599] # poll_query_until timed out executing this query:\n[08:26:35.599] # SELECT count(1) = 0 FROM pg_subscription_rel WHERE\nsrsubstate NOT IN ('r', 's');\n[08:26:35.599] # expecting this output:\n[08:26:35.599] # t\n[08:26:35.599] # last actual query output:\n[08:26:35.599] # f\n\nhttps://cirrus-ci.com/task/6532060239101952\nhttps://cirrus-ci.com/task/4755551606276096\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Wed, 12 Jan 2022 13:51:24 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stream replication test fails of cfbot/windows server 2019"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 01:51:24PM +0300, Michail Nikolaev wrote:\n> Hello.\n> \n> Looks like logical replication also affected:\n> \n> [08:26:35.599] # poll_query_until timed out executing this query:\n> [08:26:35.599] # SELECT count(1) = 0 FROM pg_subscription_rel WHERE\n> srsubstate NOT IN ('r', 's');\n> [08:26:35.599] # expecting this output:\n> [08:26:35.599] # t\n> [08:26:35.599] # last actual query output:\n> [08:26:35.599] # f\n> \n> https://cirrus-ci.com/task/6532060239101952\n> https://cirrus-ci.com/task/4755551606276096\n\nIndeed, and yet CI on postgres tree doesn't exhibit any problem:\nhttps://cirrus-ci.com/github/postgres/postgres\n\n\n",
"msg_date": "Wed, 12 Jan 2022 19:24:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stream replication test fails of cfbot/windows server 2019"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 12:24 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Wed, Jan 12, 2022 at 01:51:24PM +0300, Michail Nikolaev wrote:\n> > https://cirrus-ci.com/task/6532060239101952\n> > https://cirrus-ci.com/task/4755551606276096\n\nFor the record, cfbot only started running the recovery tests on\nWindows a couple of weeks ago (when the new improved .cirrus.yml\nlanded in the tree). I don't know if it's significant that Pavel's\npatch is failing every time:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3464\n\n... while one mentioned by Michail has lower frequency random failures:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/2979\n\n> Indeed, and yet CI on postgres tree doesn't exhibit any problem:\n> https://cirrus-ci.com/github/postgres/postgres\n\n(It's very cool that we have that turned on now!) That has run ~35\ntimes (once per commit) and never failed. Across all cfbot branches,\ncfbot is triggering over 100 builds a day, so something like 1400\nsince we started running the recovery test on Windows, so it's not a\nfair comparison: plenty more chances for random/timing based failures\nto show up.\n\nI don't know how many different kinds of flakiness we're suffering\nfrom on Windows. Could these cases be explained by the FD_CLOSE\nproblem + timing differences?\n\n\n",
"msg_date": "Thu, 13 Jan 2022 12:40:00 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stream replication test fails of cfbot/windows server 2019"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-13 12:40:00 +1300, Thomas Munro wrote:\n> On Thu, Jan 13, 2022 at 12:24 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > On Wed, Jan 12, 2022 at 01:51:24PM +0300, Michail Nikolaev wrote:\n> > > https://cirrus-ci.com/task/6532060239101952\n> > > https://cirrus-ci.com/task/4755551606276096\n>\n> For the record, cfbot only started running the recovery tests on\n> Windows a couple of weeks ago (when the new improved .cirrus.yml\n> landed in the tree). I don't know if it's significant that Pavel's\n> patch is failing every time:\n>\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3464\n\nI think we need to make wait_for_catchup() log something more useful. It's\npretty hard to debug failures involving it right now.\n\nE.g. the logs for https://cirrus-ci.com/task/5353503093686272 tells us that\n\n Waiting for replication conn standby_1's replay_lsn to pass '0/3023208' on primary\n # poll_query_until timed out executing this query:\n # SELECT '0/3023208' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';\n # expecting this output:\n # t\n # last actual query output:\n #\n # with stderr:\n timed out waiting for catchup at t/001_stream_rep.pl line 50.\n\nand there's lots of instances of that query in the primary's logs. But nowhere\ndo we see what the actual replication progress is. So we don't know if the\nproblem is just that there's a record that doesn't need to be flushed to disk,\nor something more fundamental.\n\nI think instead of croak(\"timed out waiting for catchup\") we should make\nwait_for_catchup() query the primary all columns of pg_stat_replication and\nreport those. And perhaps also report the result of\n SELECT * FROM pg_control_recovery(), pg_control_checkpoint();\non the standby?\n\n\n\n> I don't know how many different kinds of flakiness we're suffering\n> from on Windows. 
Could these cases be explained by the FD_CLOSE\n> problem + timing differences?\n\nMaybe. There's certainly something odd going on:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5353503093686272/log/src/test/recovery/tmp_check/log/001_stream_rep_primary.log\nhttps://api.cirrus-ci.com/v1/artifact/task/5353503093686272/log/src/test/recovery/tmp_check/log/001_stream_rep_standby_1.log\n\nstandby_1:\n 2022-01-12 08:21:36.543 GMT [8584][walreceiver] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n\nprimary:\n 2022-01-12 08:21:38.855 GMT [6276][postmaster] LOG: received fast shutdown request\n 2022-01-12 08:21:39.146 GMT [6276][postmaster] LOG: database system is shut down\n 2022-01-12 08:21:50.235 GMT [5524][postmaster] LOG: starting PostgreSQL 15devel, compiled by Visual C++ build 1929, 64-bit\n 2022-01-12 08:21:50.417 GMT [5524][postmaster] LOG: database system is ready to accept connections\n\nstandby_1:\n 2022-01-12 08:21:53.469 GMT [5108][walsender] [standby_2][2/0:0] LOG: received replication command: START_REPLICATION 0/3000000 TIMELINE 1\n 2022-01-12 08:28:33.949 GMT [6484][postmaster] LOG: database system is shut down\n\nafaict standby_1's walreceiver never realized that the primary stopped?\n\n\nMore evidence to that fact is that the above \"last actual query output:\" shows nothing\nrather than 'f' for\n SELECT '0/3023208' <= replay_lsn AND state = 'streaming' FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';\n\n\n\nI wonder if it's relevant that the .cirrus file uses \"unix sockets\" on\nwindows as well, to avoid port conflicts (otherwise I saw frequent spurious\ntest failures due to port conflicts).\n # Avoids port conflicts between concurrent tap test runs\n PG_TEST_USE_UNIX_SOCKETS: 1\n\nIt's not particularly hard to imagine that either our \"windows unix socket\"\nsupport still has some bugs, or that windows' implementation of unix sockets\nis borked.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jan 2022 16:34:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stream replication test fails of cfbot/windows server 2019"
},
{
        "msg_contents": ">\n> For the record, cfbot only started running the recovery tests on\n> Windows a couple of weeks ago (when the new improved .cirrus.yml\n> landed in the tree). I don't know if it's significant that Pavel's\n> patch is failing every time:\n>\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3464\n>\n> ... while one mentioned by Michail has lower frequency random failures:\n>\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/2979\n\n\nThomas, it's not exactly so. The patch\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3464\nfailed\nfor the described reason only once.\nPrevious redness was due to compiler warnings in previous patch versions.\n\nFurthermore, I've sent an updated contrib patch with very minor\nimprovements (completely unrelated to stream replication), and now the test\npasses.\nI suppose there's some intermittent problem with the windows cfbot\ninfrastructure which arises randomly sometimes and affects some random\npatches being tested. I have a feeling that all patches pushed around\nyesterday morning were affected and now everything works well.\n\nIt's a pity I still have no idea about the source of the problems besides my\nlook at different cfbot behavior sometimes.\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 13 Jan 2022 14:59:44 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Stream replication test fails of cfbot/windows server 2019"
}
] |
[
{
        "msg_contents": "The existing PQcancel API uses blocking IO. This makes PQcancel\nimpossible to use in an event loop based codebase, without blocking the\nevent loop until the call returns.\n\nThis patch adds a new cancellation API to libpq which is called\nPQcancelConnectionStart. This API can be used to send cancellations in a\nnon-blocking fashion. To do this it internally uses regular PGconn\nconnection establishment. This has the downside that\nPQcancelConnectionStart cannot be safely called from a signal handler.\n\nLuckily, this should be fine for most usages of this API, since most\ncode that's using an event loop handles signals in that event loop as\nwell (as opposed to calling functions from the signal handler directly).\n\nThere are also a few advantages of this approach:\n1. No need to add and maintain a second non-blocking connection\n   establishment codepath.\n2. Cancel connections benefit automatically from any improvements made\n   to the normal connection establishment codepath. Examples of things\n   that it currently gets for free are TLS support and\n   keepalive settings.\n\nThis patch also includes a test for this new API (and also the already\nexisting cancellation APIs). The test can be easily run like this:\n\n    cd src/test/modules/libpq_pipeline\n    make && ./libpq_pipeline cancel\n\nNOTE: I have not tested this with GSS for the moment. My expectation is\nthat using this new API with a GSS connection will result in a\nCONNECTION_BAD status when calling PQcancelStatus. The reason for this\nis that GSS reads will also need to communicate back that an EOF was\nfound, just like I've done for TLS reads and unencrypted reads. Since in\ncase of a cancel connection an EOF is actually expected, and should not\nbe treated as an error.",
"msg_date": "Wed, 12 Jan 2022 15:22:18 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Add non-blocking version of PQcancel"
},
{
        "msg_contents": "Hi,\n\nOn 2022-01-12 15:22:18 +0000, Jelte Fennema wrote:\n> This patch also includes a test for this new API (and also the already\n> existing cancellation APIs). The test can be easily run like this:\n>\n> cd src/test/modules/libpq_pipeline\n> make && ./libpq_pipeline cancel\n\nRight now the tests fail to build on windows with:\n\n[15:45:10.518] src/interfaces/libpq/libpqdll.def : fatal error LNK1121: duplicate ordinal number '189' [c:\\cirrus\\libpq.vcxproj]\nand fail on other platforms. See\nhttps://cirrus-ci.com/build/4791821363576832\n\n\n> NOTE: I have not tested this with GSS for the moment. My expectation is\n> that using this new API with a GSS connection will result in a\n> CONNECTION_BAD status when calling PQcancelStatus. The reason for this\n> is that GSS reads will also need to communicate back that an EOF was\n> found, just like I've done for TLS reads and unencrypted reads. Since in\n> case of a cancel connection an EOF is actually expected, and should not\n> be treated as an error.\n\nThe failures do not seem related to this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jan 2022 16:44:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Attached is an updated patch which I believe fixes windows and the other test failures.\nAt least on my machine make check-world passes now when compiled with --enable-tap-tests\n\nI also included a second patch which adds some basic documentation for the libpq tests.",
"msg_date": "Thu, 13 Jan 2022 14:51:40 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 2022-01-13 at 14:51 +0000, Jelte Fennema wrote:\r\n> Attached is an updated patch which I believe fixes windows and the other test failures.\r\n> At least on my machine make check-world passes now when compiled with --enable-tap-tests\r\n> \r\n> I also included a second patch which adds some basic documentation for the libpq tests.\r\n\r\nThis is not a full review by any means, but here are my thoughts so\r\nfar:\r\n\r\n> NOTE: I have not tested this with GSS for the moment. My expectation is\r\n> that using this new API with a GSS connection will result in a\r\n> CONNECTION_BAD status when calling PQcancelStatus. The reason for this\r\n> is that GSS reads will also need to communicate back that an EOF was\r\n> found, just like I've done for TLS reads and unencrypted reads.\r\n\r\nFor what it's worth, I did a smoke test with a Kerberos environment via\r\n\r\n\r\n ./libpq_pipeline cancel '... gssencmode=require'\r\n\r\nand the tests claim to pass.\r\n\r\n> 2. Cancel connections benefit automatically from any improvements made\r\n> to the normal connection establishment codepath. Examples of things\r\n> that it currently gets for free currently are TLS support and\r\n> keepalive settings.\r\n\r\nThis seems like a big change compared to PQcancel(); one that's not\r\nreally hinted at elsewhere. Having the async version of an API open up\r\na completely different code path with new features is pretty surprising\r\nto me.\r\n\r\nAnd does the backend actually handle cancel requests via TLS (or GSS)?\r\nIt didn't look that way from a quick scan, but I may have missed\r\nsomething.\r\n\r\n> @@ -1555,6 +1665,7 @@ print_test_list(void)\r\n> printf(\"singlerow\\n\");\r\n> printf(\"transaction\\n\");\r\n> printf(\"uniqviol\\n\");\r\n> + printf(\"cancel\\n\");\r\n> }\r\n\r\nThis should probably go near the top; it looks like the existing list\r\nis alphabetized.\r\n\r\nThe new cancel tests don't print any feedback. 
It'd be nice to get the\r\nsame sort of output as the other tests.\r\n\r\n> /* issue a cancel request */\r\n> extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);\r\n> +extern PGcancelConn * PQcancelConnectStart(PGconn *conn);\r\n> +extern PGcancelConn * PQcancelConnect(PGconn *conn);\r\n> +extern PostgresPollingStatusType PQcancelConnectPoll(PGcancelConn * cancelConn);\r\n> +extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);\r\n> +extern int PQcancelSocket(const PGcancelConn * cancelConn);\r\n> +extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);\r\n> +extern void PQcancelFinish(PGcancelConn * cancelConn);\r\n\r\nThat's a lot of new entry points, most of which don't do anything\r\nexcept call their twin after a pointer cast. How painful would it be to\r\njust use the existing APIs as-is, and error out when calling\r\nunsupported functions if conn->cancelRequest is true?\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 9 Mar 2022 00:27:42 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Thu, 2022-01-13 at 14:51 +0000, Jelte Fennema wrote:\n>> 2. Cancel connections benefit automatically from any improvements made\n>> to the normal connection establishment codepath. Examples of things\n>> that it currently gets for free currently are TLS support and\n>> keepalive settings.\n\n> This seems like a big change compared to PQcancel(); one that's not\n> really hinted at elsewhere. Having the async version of an API open up\n> a completely different code path with new features is pretty surprising\n> to me.\n\nWell, the patch lacks any user-facing doco at all, so a-fortiori this\npoint is not covered. I trust the plan was to write docs later.\n\nI kind of feel that this patch is going in the wrong direction.\nI do see the need for a version of PQcancel that can encrypt the\ntransmitted cancel request (and yes, that should work on the backend\nside; see recursion in ProcessStartupPacket). I have not seen\nrequests for a non-blocking version, and this doesn't surprise me.\nI feel that the whole non-blocking aspect of libpq probably belongs\nto another era when people didn't trust threads.\n\nSo what I'd do is make a version that just takes a PGconn, sends the\ncancel request, and returns success or failure; never mind the\nnon-blocking aspect. One possible long-run advantage of this is that\nit might be possible to \"sync\" the cancel request so that we know,\nor at least can find out afterwards, exactly which query got\ncancelled; something that's fundamentally impossible if the cancel\nfunction works from a clone data structure that is disconnected\nfrom the current connection state.\n\n(Note that it probably makes sense to make a clone PGconn to pass\nto fe-connect.c, internally to this function. I just don't want\nto expose that to the app.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Mar 2022 17:41:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-24 17:41:53 -0400, Tom Lane wrote:\n> I kind of feel that this patch is going in the wrong direction.\n> I do see the need for a version of PQcancel that can encrypt the\n> transmitted cancel request (and yes, that should work on the backend\n> side; see recursion in ProcessStartupPacket). I have not seen\n> requests for a non-blocking version, and this doesn't surprise me.\n> I feel that the whole non-blocking aspect of libpq probably belongs\n> to another era when people didn't trust threads.\n\nThat's not a whole lot of fun if you think of cases like postgres_fdw (or\ncitus as in Jelte's case), which run inside the backend. Even with just a\nsingle postgres_fdw, we don't really want to end up in an uninterruptible\nPQcancel() that doesn't even react to pg_terminate_backend().\n\nEven if using threads weren't an issue, I don't really buy the premise - most\nnetworking code has moved *away* from using dedicated threads for each\nconnection. It just doesn't scale.\n\n\nLeaving PQcancel aside, we use the non-blocking libpq stuff widely\nourselves. I think walreceiver, isolationtester, pgbench etc would be *much*\nharder to get working equally well if there was just blocking calls. If\nanything, we're getting to the point where purely blocking functionality\nshouldn't be added anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 24 Mar 2022 15:49:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 6:49 PM Andres Freund <andres@anarazel.de> wrote:\n> That's not a whole lot of fun if you think of cases like postgres_fdw (or\n> citus as in Jelte's case), which run inside the backend. Even with just a\n> single postgres_fdw, we don't really want to end up in an uninterruptible\n> PQcancel() that doesn't even react to pg_terminate_backend().\n>\n> Even if using threads weren't an issue, I don't really buy the premise - most\n> networking code has moved *away* from using dedicated threads for each\n> connection. It just doesn't scale.\n>\n> Leaving PQcancel aside, we use the non-blocking libpq stuff widely\n> ourselves. I think walreceiver, isolationtester, pgbench etc would be *much*\n> harder to get working equally well if there was just blocking calls. If\n> anything, we're getting to the point where purely blocking functionality\n> shouldn't be added anymore.\n\n+1. I think having a non-blocking version of PQcancel() available is a\ngreat idea, and I've wanted it myself. See commit\nae9bfc5d65123aaa0d1cca9988037489760bdeae.\n\nThat said, I don't think that this particular patch is going in the\nright direction. I think Jacob's comment upthread is right on point:\n\"This seems like a big change compared to PQcancel(); one that's not\nreally hinted at elsewhere. Having the async version of an API open up\na completely different code path with new features is pretty\nsurprising to me.\" It seems to me that we want to end up with similar\ncode paths for PQcancel() and the non-blocking version of cancel. We\ncould get there in two ways. One way would be to implement the\nnon-blocking functionality in a manner that matches exactly what\nPQcancel() does now. I imagine that the existing code from PQcancel()\nwould move, with some amount of change, into a new set of non-blocking\nAPIs. Perhaps PQcancel() would then be rewritten to use those new APIs\ninstead of hand-rolling the same logic. 
The other possible approach\nwould be to first change the blocking version of PQcancel() to use the\nregular connection code instead of its own idiosyncratic logic, and\nthen as a second step, extend it with non-blocking interfaces that use\nthe regular non-blocking connection code. With either of these\napproaches, we end up with the functionality working similarly in the\nblocking and non-blocking code paths.\n\nLeaving the question of approach aside, I think it's fairly clear that\nthis patch cannot be seriously considered for v15. One problem is the\nlack of user-facing documentation, but there's other stuff that just\ndoesn't look sufficiently well-considered. For example, it updates the\ncomment for pqsecure_read() to say \"Returns -1 in case of failures,\nexcept in the case of clean connection closure then it returns -2.\"\nBut that function calls any of three different implementation\nfunctions depending on the situation and the patch only updates one of\nthem. And it updates that function to return -2 when the error is\nECONNRESET, which seems to fly in the face of the comment's idea that\nthis is the \"clean connection closure\" case. I think it's probably a\nbad sign that this patch is tinkering with logic in this sort of\nlow-level function anyway. pqReadData() is a really general function\nthat manages to work with non-blocking I/O already, so why does\nnon-blocking query cancellation need to change its return values, or\nwhether or not it drops data in certain cases?\n\nI'm also skeptical about the fact that we end up with a whole bunch of\nnew functions that are just wrappers around existing functions. That's\nnot a scalable approach. Every function that we have for a PGconn will\neventually need a variant that deals with a PGcancelConn. That seems\nkind of pointless, especially considering that a PGcancelConn is\n*exactly* a PGconn in disguise. 
If we decide to pursue the approach of\nusing the existing infrastructure for PGconn objects to handle query\ncancellation, we ought to manipulate them using the same functions we\ncurrently do, with some kind of mode or flag or switch or something\nthat you can use to turn a regular PGconn into something that cancels\na query. Maybe you create the PGconn and call\nPQsprinkleMagicCancelDust() on it, and then you just proceed using the\nexisting functions, or something like that. Then, not only do the\nexisting functions not need query-cancel analogues, but any new\nfunctions we add in the future don't either.\n\nI'll set the target version for this patch to 16. I hope work continues.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 25 Mar 2022 14:34:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That said, I don't think that this particular patch is going in the\n> right direction. I think Jacob's comment upthread is right on point:\n> \"This seems like a big change compared to PQcancel(); one that's not\n> really hinted at elsewhere. Having the async version of an API open up\n> a completely different code path with new features is pretty\n> surprising to me.\" It seems to me that we want to end up with similar\n> code paths for PQcancel() and the non-blocking version of cancel. We\n> could get there in two ways. One way would be to implement the\n> non-blocking functionality in a manner that matches exactly what\n> PQcancel() does now. I imagine that the existing code from PQcancel()\n> would move, with some amount of change, into a new set of non-blocking\n> APIs. Perhaps PQcancel() would then be rewritten to use those new APIs\n> instead of hand-rolling the same logic. The other possible approach\n> would be to first change the blocking version of PQcancel() to use the\n> regular connection code instead of its own idiosyncratic logic, and\n> then as a second step, extend it with non-blocking interfaces that use\n> the regular non-blocking connection code. With either of these\n> approaches, we end up with the functionality working similarly in the\n> blocking and non-blocking code paths.\n\nI think you misunderstand where the real pain point is. The reason\nthat PQcancel's functionality is so limited has little to do with\nblocking vs non-blocking, and everything to do with the fact that\nit's designed to be safe to call from a SIGINT handler. That makes\nit quite impractical to invoke OpenSSL, and probably our GSS code\nas well. 
If we want support for all connection-time options then\nwe have to make a new function that does not promise signal safety.\n\nI'm prepared to yield on the question of whether we should provide\na non-blocking version, though I still say that (a) an easier-to-call,\none-step blocking alternative would be good too, and (b) it should\nnot be designed around the assumption that there's a completely\nindependent state object being used to perform the cancel. Even in\nthe non-blocking case, callers should only deal with the original\nPGconn.\n\n> Leaving the question of approach aside, I think it's fairly clear that\n> this patch cannot be seriously considered for v15.\n\nYeah, I don't think it's anywhere near fully baked yet. On the other\nhand, we do have a couple of weeks left.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Mar 2022 14:46:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 2:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think you misunderstand where the real pain point is. The reason\n> that PQcancel's functionality is so limited has little to do with\n> blocking vs non-blocking, and everything to do with the fact that\n> it's designed to be safe to call from a SIGINT handler. That makes\n> it quite impractical to invoke OpenSSL, and probably our GSS code\n> as well. If we want support for all connection-time options then\n> we have to make a new function that does not promise signal safety.\n\nWell, that's a fair point, but it's somewhat orthogonal to the one I'm\nmaking, which is that a non-blocking version of function X might be\nexpected to share code or at least functionality with X itself. Having\nsomething that is named in a way that implies asynchrony without other\ndifferences but which is actually different in other important ways is\nno good.\n\n> I'm prepared to yield on the question of whether we should provide\n> a non-blocking version, though I still say that (a) an easier-to-call,\n> one-step blocking alternative would be good too, and (b) it should\n> not be designed around the assumption that there's a completely\n> independent state object being used to perform the cancel. Even in\n> the non-blocking case, callers should only deal with the original\n> PGconn.\n\nWell, this sounds like you're arguing for the first of the two\napproaches I thought would be acceptable, rather than the second.\n\n> > Leaving the question of approach aside, I think it's fairly clear that\n> > this patch cannot be seriously considered for v15.\n>\n> Yeah, I don't think it's anywhere near fully baked yet. On the other\n> hand, we do have a couple of weeks left.\n\nWe do?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 25 Mar 2022 15:22:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, that's a fair point, but it's somewhat orthogonal to the one I'm\n> making, which is that a non-blocking version of function X might be\n> expected to share code or at least functionality with X itself. Having\n> something that is named in a way that implies asynchrony without other\n> differences but which is actually different in other important ways is\n> no good.\n\nYeah. We need to choose a name for these new function(s) that is\nsufficiently different from \"PQcancel\" that people won't expect them\nto behave exactly the same as that does. I lack any good ideas about\nthat, how about you?\n\n>> Yeah, I don't think it's anywhere near fully baked yet. On the other\n>> hand, we do have a couple of weeks left.\n\n> We do?\n\nUm, you did read the psql-release discussion about setting the feature\nfreeze deadline, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Mar 2022 15:34:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Thanks for all the feedback everyone. I'll try to send a new patch\nlater this week that includes user facing docs and a simplified API.\nFor now a few responses:\n\n> Yeah. We need to choose a name for these new function(s) that is\n> sufficiently different from \"PQcancel\" that people won't expect them\n> to behave exactly the same as that does. I lack any good ideas about\n> that, how about you?\n\nSo I guess the names I proposed were not great, since everyone seems to be falling over them. \nBut I'd like to make my intention clear with the current naming. After this patch there would be \nfour different APIs for starting a cancelation:\n1. PQrequestCancel: deprecated+old, not signal-safe function for requesting query cancellation, only uses a specific set of connection options\n2. PQcancel: Cancel queries in a signal safe way, to be signal-safe it only uses a limited set of connection options\n3. PQcancelConnect: Cancel queries in a non-signal safe way that uses all connection options\n4. PQcancelConnectStart: Cancel queries in a non-signal safe and non-blocking way that uses all connection options\n\nSo the idea was that you should not look at PQcancelConnectStart as the non-blocking\nversion of PQcancel, but as the non-blocking version of PQcancelConnect. I'll try to \nthink of some different names too, but IMHO these names could be acceptable\nwhen their differences are addressed sufficiently in the documentation. \n\nOne other approach to naming that comes to mind now is repurposing PQrequestCancel:\n1. PQrequestCancel: Cancel queries in a non-signal safe way that uses all connection options\n2. PQrequestCancelStart: Cancel queries in a non-signal safe and non-blocking way that uses all connection options\n3. 
PQcancel: Cancel queries in a signal safe way, to be signal-safe it only uses a limited set of connection options\n\n> I think it's probably a\n> bad sign that this function is tinkering with logic in this sort of\n> low-level function anyway. pqReadData() is a really general function\n> that manages to work with non-blocking I/O already, so why does\n> non-blocking query cancellation need to change its return values, or\n> whether or not it drops data in certain cases?\n\nThe reason for this low level change is that the cancellation part of the\nPostgres protocol is following a different, much more simplistic design \nthan all the other parts. The client does not expect a response message back \nfrom the server after sending the cancellation request. The expectation \nis that the server signals completion by closing the connection, i.e. sending EOF. \nFor all other parts of the protocol, connection termination should be initiated\nclient side by sending a Terminate message. So the server closing (sending\nEOF) is always unexpected and is thus currently considered an error by pqReadData.\n\nBut since this is not the case for the cancellation protocol, the result is\nchanged to -2 in case of EOF to make it possible to distinguish between\nan EOF and an actual error.\n\n> And it updates that function to return -2 when the is\n> ECONNRESET, which seems to fly in the face of the comment's idea that\n> this is the \"clean connection closure\" case. \n\nThe diff sadly does not include the very relevant comment right above these\nlines. 
Pasting the whole case statement here to clear up this confusion:\n\ncase SSL_ERROR_ZERO_RETURN:\n\n\t/*\n\t * Per OpenSSL documentation, this error code is only returned for\n\t * a clean connection closure, so we should not report it as a\n\t * server crash.\n\t */\n\tappendPQExpBufferStr(&conn->errorMessage,\n\t\t\t\t\t\t libpq_gettext(\"SSL connection has been closed unexpectedly\\n\"));\n\tresult_errno = ECONNRESET;\n\tn = -2;\n\tbreak;\n\n\n> For example, it updates the\n> comment for pqsecure_read() to say \"Returns -1 in case of failures,\n> except in the case of clean connection closure then it returns -2.\"\n> But that function calls any of three different implementation\n> functions depending on the situation and the patch only updates one of\n> them. \n\nThat comment is indeed not describing what is happening correctly and I'll \ntry to make it clearer. The main reason for it being incorrect is coming from \nthe fact that receiving EOFs is handled in different places based on the \nencryption method:\n\n1. Unencrypted TCP: EOF is not returned as an error by pqsecure_read, but detected by pqReadData (see comments related to definitelyEOF)\n2. OpenSSL: EOF is returned as an error by pqsecure_read (see copied case statement above)\n3. GSS: When writing the patch I was not sure how EOF handling worked here, but given that the tests passed for Jacob on GSS, I'm guessing it works the same as unencrypted TCP.\n\n\n",
"msg_date": "Mon, 28 Mar 2022 09:28:19 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "I attached a new version of this patch. Which does three main things:\n1. Change the PQrequestCancel implementation to use the regular \n connection establishement code, to support all connection options \n including encryption.\n2. Add PQrequestCancelStart which is a thread-safe and non-blocking \n version of this new PQrequestCancel implementation.\n3. Add PQconnectComplete, which completes a connection started by \n PQrequestCancelStart. This is useful if you want a thread-safe, but \n blocking cancel (without having a need for signal safety).\n\nThis change un-deprecates PQrequestCancel, since now there's actually an \nadvantage to using it over PQcancel. It also includes user facing documentation\nfor all these functions. \n\nAs a API design change from the previous version, PQrequestCancelStart now\nreturns a regular PGconn for the cancel connection.\n\n@Tom Lane regarding this:\n> Even in the non-blocking case, callers should only deal with the original PGconn.\n\nThis would by definition result in non-threadsafe code (afaict). So I refrained from doing this.\nThe blocking version doesn't expose a PGconn at all, but the non-blocking one now returns a new PGconn.\n\nThere's two more changes that I at least want to do before considering this patch mergable:\n1. Go over all the functions that can be called with a PGconn, but should not be \n called with a cancellation PGconn and error out or exit early.\n2. Copy over the SockAddr from the original connection and always connect to \n the same socket. I believe with the current code the cancellation could end up\n at the wrong server if there are multiple hosts listed in the connection string.\n\nAnd there's a third item that I would like to do as a bonus:\n3. Actually use the non-blocking API for the postgres_fdw code to implement a \n timeout. Which would allow this comment can be removed:\n\t/*\n\t * Issue cancel request. 
Unfortunately, there's no good way to limit the\n\t * amount of time that we might block inside PQgetCancel().\n\t */\n \nSo a next version of this patch can be expected somewhere later this week.\nBut any feedback on the current version would be appreciated. Because\nthese 3 changes won't change the overall design much.\n\nJelte",
"msg_date": "Wed, 30 Mar 2022 16:08:16 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Note that the patch is still variously failing in cirrus.\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/37/3511\n\nYou may already know that it's possible to trigger the cirrus ci tasks using a\ngithub branch. See src/tools/ci/README.\n\n\n",
"msg_date": "Thu, 31 Mar 2022 00:47:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Attached is the latest version of this patch, which I think is now in a state\nin which it could be merged. The changes are:\n\n1. Don't do host and address discovery for cancel connections. It now \n reuses raddr and whichhost from the original connection. This makes\n sure the cancel always goes to the right server, even when DNS records \n change or another server would be chosen now in case of connnection\n strings containing multiple hosts.\n2. Fix the windows CI failure. This is done by both using the threadsafe code \n in the the dblink cancellation code, and also by not erroring a cancellation\n connection on windows in case of any errors. This last one is to work around\n the issue described in this thread:\n https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de\n\nI also went over most of the functions that take a PGconn, to see if they needed\nextra checks to guard against being executed on cancel. So far all seemed fine,\neither they should be okay to execute against a cancellation connection, or \nthey failed already anyway because a cancellation connection never reaches\nthe CONNECTION_OK state. So I didn't add any checks specifically for cancel\nconnections. I'll do this again next week with a fresh head, to see if I haven't \nmissed any cases.\n\nI'll try to find some time early next week to implement non-blocking cancellation\nusage in postgres_fdw, i.e. the bonus task I mentioned in my previous email. But \nI don't think it's necessary to have that implemented before merging.",
"msg_date": "Fri, 1 Apr 2022 16:13:07 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hereby what I consider the final version of this patch. I don't have any\nchanges planned myself (except for ones that come up during review). \nThings that changed since the previous iteration:\n1. postgres_fdw now uses the non-blocking cancellation API (including test).\n2. Added some extra sleeps to the cancellation test, to remove random failures on FreeBSD.",
"msg_date": "Mon, 4 Apr 2022 15:21:54 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Resending with a problematic email removed from CC...\n\nOn Mon, Apr 04, 2022 at 03:21:54PM +0000, Jelte Fennema wrote:\n> 2. Added some extra sleeps to the cancellation test, to remove random failures on FreeBSD.\n\nApparently there's still an occasional issue.\nhttps://cirrus-ci.com/task/6613309985128448\n\nresult 232/352 (error): ERROR: duplicate key value violates unique constraint \"ppln_uniqviol_pkey\"\nDETAIL: Key (id)=(116) already exists.\n\nThis shows that the issue is pretty rare:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3511\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 24 Jun 2022 19:36:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "(resent because it was blocked from the mailing-list due to inclusion of a blocked email address in the To line)\n\nFrom: Andres Freund <andres@anarazel.de>\n> On 2022-04-04 15:21:54 +0000, Jelte Fennema wrote:\n> > 2. Added some extra sleeps to the cancellation test, to remove random\n> > failures on FreeBSD.\n> \n> That's extremely extremely rarely the solution to address test reliability\n> issues. It'll fail when running test under valgrind etc.\n> \n> Why do you need sleeps / can you find another way to make the test reliable?\n\nThe problem they are solving is racy behaviour between sending the query\nand sending the cancellation. If the cancellation is handled before the query\nis started, then the query doesn't get cancelled. To solve this problem I used\nthe sleeps to wait a bit before sending the cancelation request.\n\nWhen I wrote this, I couldn't think of a better way to do it then with sleeps.\nBut I didn't like it either (and I still don't). These emails made me start to think\nagain, about other ways of solving the problem. I think I've found another \nsolution (see attached patch). The way I solve it now is by using another \nconnection to check the state of the first one.\n\nJelte",
"msg_date": "Mon, 27 Jun 2022 09:29:39 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 07:36:16PM -0500, Justin Pryzby wrote:\n> Resending with a problematic email removed from CC...\n> \n> On Mon, Apr 04, 2022 at 03:21:54PM +0000, Jelte Fennema wrote:\n> > 2. Added some extra sleeps to the cancellation test, to remove random failures on FreeBSD.\n> \n> Apparently there's still an occasional issue.\n> https://cirrus-ci.com/task/6613309985128448\n\nI think that failure is actually not related to this patch.\n\nThere are probably others, but I noticed because it also affected one of my\npatches, which changes nothing relevant.\nhttps://cirrus-ci.com/task/5904044051922944\n\n\n\n",
"msg_date": "Mon, 27 Jun 2022 06:45:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2022-Jun-27, Justin Pryzby wrote:\n\n> On Fri, Jun 24, 2022 at 07:36:16PM -0500, Justin Pryzby wrote:\n\n> > Apparently there's still an occasional issue.\n> > https://cirrus-ci.com/task/6613309985128448\n> \n> I think that failure is actually not related to this patch.\n\nYeah, it's not -- Kyotaro diagnosed it as a problem in libpq's pipeline\nmode. I hope to push his fix soon, but there are nearby problems that I\nhaven't been able to track down a good fix for. I'm looking into the\nwhole.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 27 Jun 2022 14:29:07 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Jelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> [ non-blocking PQcancel ]\n\nI pushed the 0001 patch (libpq_pipeline documentation) with a bit\nof further wordsmithing.\n\nAs for 0002, I'm not sure that's anywhere near ready. I doubt it's\na great idea to un-deprecate PQrequestCancel with a major change\nin its behavior. If there is anybody out there still using it,\nthey're not likely to appreciate that. Let's leave that alone and\npick some other name.\n\nI'm also finding the entire design of PQrequestCancelStart etc to\nbe horribly confusing --- it's not *bad* necessarily, but the chosen\nfunction names are seriously misleading. PQrequestCancelStart doesn't\nactually \"start\" anything, so the apparent parallel with PQconnectStart\nis just wrong. It's also fairly unclear what the state of a cancel\nPQconn is after the request cycle is completed, and whether you can\nre-use it (especially after a failed request), and whether you have\nto dispose of it separately.\n\nOn the whole it feels like a mistake to have two separate kinds of\nPGconn with fundamentally different behaviors and yet no distinction\nin the API. I think I'd recommend having a separate struct type\n(which might internally contain little more than a pointer to a\ncloned PGconn), and provide only a limited set of operations on it.\nSeems like create, start/continue cancel request, destroy, and\nfetch error message ought to be enough. I don't see a reason why we\nneed to support all of libpq's inquiry operations on such objects ---\nfor instance, if you want to know which host is involved, you could\nperfectly well query the parent PGconn. Nor do I want to run around\nand add code to every single libpq entry point to make it reject cancel\nPGconns if it can't support them, but we'd have to do so if there's\njust one struct type.\n\nI'm not seeing the use-case for PQconnectComplete. 
If you want\na non-blocking cancel request, why would you then use a blocking\noperation to complete the request? Seems like it'd be better\nto have just a monolithic cancel function for those who don't\nneed non-blocking.\n\nThis change:\n\n--- a/src/interfaces/libpq/libpq-fe.h\n+++ b/src/interfaces/libpq/libpq-fe.h\n@@ -59,12 +59,15 @@ typedef enum\n {\n \tCONNECTION_OK,\n \tCONNECTION_BAD,\n+\tCONNECTION_CANCEL_FINISHED,\n \t/* Non-blocking mode only below here */\n\nis an absolute non-starter: it breaks ABI for every libpq client,\neven ones that aren't using this facility. Why do we need a new\nConnStatusType value anyway? Seems like PostgresPollingStatusType\ncovers what we need: once you reach PGRES_POLLING_OK, the cancel\nrequest is done.\n\nThe test case is still not very bulletproof on slow machines,\nas it seems to be assuming that 30 seconds == forever. It\nwould be all right to use $PostgreSQL::Test::Utils::timeout_default,\nbut I'm not sure that that's easily retrievable by C code.\nMaybe make the TAP test pass it in with another optional switch\nto libpq_pipeline? Alternatively, we could teach libpq_pipeline\nto do getenv(\"PG_TEST_TIMEOUT_DEFAULT\") with a fallback to 180,\nbut that feels like it might be overly familiar with the innards\nof Utils.pm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Sep 2022 17:53:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Thanks for all the feedback. I attached a new patch that I think\naddresses all of it. Below some additional info.\n\n> On the whole it feels like a mistake to have two separate kinds of\n> PGconn with fundamentally different behaviors and yet no distinction\n> in the API. I think I'd recommend having a separate struct type\n> (which might internally contain little more than a pointer to a\n> cloned PGconn), and provide only a limited set of operations on it.\n\nIn my first version of this patch, this is exactly what I did. But then\nI got this feedback from Jacob, so I changed it to reusing PGconn:\n\n> > /* issue a cancel request */\n> > extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);\n> > +extern PGcancelConn * PQcancelConnectStart(PGconn *conn);\n> > +extern PGcancelConn * PQcancelConnect(PGconn *conn);\n> > +extern PostgresPollingStatusType PQcancelConnectPoll(PGcancelConn * cancelConn);\n> > +extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);\n> > +extern int PQcancelSocket(const PGcancelConn * cancelConn);\n> > +extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);\n> > +extern void PQcancelFinish(PGcancelConn * cancelConn);\n>\n> That's a lot of new entry points, most of which don't do anything\n> except call their twin after a pointer cast. How painful would it be to\n> just use the existing APIs as-is, and error out when calling\n> unsupported functions if conn->cancelRequest is true?\n\nI changed it back to use PGcancelConn as per your suggestion and I \nagree that the API got better because of it.\n\n> + CONNECTION_CANCEL_FINISHED,\n> /* Non-blocking mode only below here */\n> \n> is an absolute non-starter: it breaks ABI for every libpq client,\n> even ones that aren't using this facility. \n\nI removed this now. The main reason was so it was clear that no\nqueries could be sent over the connection, like is normally the case\nwhen CONNECTION_OK happens. 
I don't think this is as useful anymore\nnow that this patch has a dedicated PGcancelStatus function.\nNOTE: The CONNECTION_STARTING ConnStatusType is still necessary.\nBut to keep ABI compatibility I moved it to the end of the enum.\n\n> Alternatively, we could teach libpq_pipeline\n> to do getenv(\"PG_TEST_TIMEOUT_DEFAULT\") with a fallback to 180,\n> but that feels like it might be overly familiar with the innards\n> of Utils.pm.\n\nI went with this approach, because this environment variable was\nalready used in 2 other places than Utils.pm: \n- contrib/test_decoding/sql/twophase.sql\n- src/test/isolation/isolationtester.c\n\nSo, one more place seemed quite harmless.\n\nP.S. I noticed a logical conflict between this patch and my libpq load \nbalancing patch. This patch depends on the connhost array \nbeing constructed exactly the same on the second invocation of connectOptions2,\nbut the libpq load balancing patch breaks this assumption. I'm making\na mental (and public) note that whichever of these patches gets merged last\nshould address this issue.",
"msg_date": "Wed, 5 Oct 2022 13:23:34 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 10/5/22 06:23, Jelte Fennema wrote:\n> In my first version of this patch, this is exactly what I did. But then\n> I got this feedback from Jacob, so I changed it to reusing PGconn:\n> \n>> [snip]\n> \n> I changed it back to use PGcancelConn as per your suggestion and I \n> agree that the API got better because of it.\n\nSorry for the whiplash!\n\nIs the latest attachment the correct version? I don't see any difference\nbetween the latest 0001 and the previous version's 0002 -- it has no\nreferences to PG_TEST_TIMEOUT_DEFAULT, PGcancelConn, etc.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 4 Nov 2022 08:58:34 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Ugh, it indeed seems like I somehow messed up sending the new patch. \nHere's the correct one.",
"msg_date": "Tue, 15 Nov 2022 11:38:00 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "> On 15 Nov 2022, at 12:38, Jelte Fennema <Jelte.Fennema@microsoft.com> wrote:\n\n> Here's the correct one.<0001-Add-non-blocking-version-of-PQcancel.patch>\n\nThis version of the patch no longer applies, a rebased version is needed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 29 Nov 2022 20:17:47 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "> This version of the patch no longer applies, a rebased version is needed.\n\nAttached is a patch that applies cleanly again and is also changed\nto use the recently introduced libpq_append_conn_error.\n\nI also attached a patch that runs pgindent after the introduction of\nlibpq_append_conn_error. I noticed that this hadn't happened when\ntrying to run pgindent on my own changes.",
"msg_date": "Wed, 30 Nov 2022 09:20:42 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Is there anything that is currently blocking this patch? I'd quite\nlike it to get into PG16.\n\nEspecially since I ran into another use case that I would want to use\nthis patch for recently: adding an async cancel function to Python's\npsycopg3 library. This library exposes both a Connection class\nand an AsyncConnection class (using Python's asyncio feature). But\none downside of the AsyncConnection type is that it doesn't have a\ncancel method.\n\nI ran into this while changing the PgBouncer tests to use Python. And\nthe cancellation tests were the only tests that required me to use a\nThreadPoolExecutor instead of simply being able to use async-await\nstyle programming:\nhttps://github.com/pgbouncer/pgbouncer/blob/master/test/test_cancel.py#LL9C17-L9C17\n\n\n",
"msg_date": "Thu, 19 Jan 2023 12:10:01 +0100",
"msg_from": "Jelte Fennema <me@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "After discussing this patch privately with Andres I created a new\nversion of this patch.\nThe main changes are:\n1. Build on top of a refactor to addrinfo handling I had done for\nanother patch of mine (libpq load balancing). This allows creation of\na fake addrinfo list, which made it possible to remove lots of special\ncases for cancel requests from PQconnectPoll\n2. Move -2 return value of pqReadData to a separate commit.\n3. Move usage of new cancel APIs to a separate commit.\n4. Move most of the logic that's specific to cancel requests to cancel\nrelated functions, e.g. PQcancelPoll does more than simply forwarding\nto PQconnectPoll now.\n5. Copy over the connhost data from the original connection, instead\nof assuming that it will be rebuilt identically in the cancel\nconnection. The main reason for this is that when/if the loadbalancing\npatch gets merged, then it won't necessarily be rebuilt identically\nanymore.",
"msg_date": "Thu, 26 Jan 2023 17:42:37 +0100",
"msg_from": "Jelte Fennema <me@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Another small update. Mostly some trivial cleanup in the comments/docs/code. But\nalso change patch 0005 to call PQcancelFinish in more error cases.",
"msg_date": "Fri, 27 Jan 2023 12:50:27 +0100",
"msg_from": "Jelte Fennema <me@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "This looks like it needs a rebase.\n\n=== Applying patches on top of PostgreSQL commit ID\n71a75626d5271f2bcdbdc43b8c13065c4634fd9f ===\n=== applying patch ./v11-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch\npatching file src/interfaces/libpq/fe-auth-scram.c\npatching file src/interfaces/libpq/fe-auth.c\npatching file src/interfaces/libpq/fe-connect.c\nHunk #35 FAILED at 3216.\nHunk #36 succeeded at 3732 (offset 27 lines).\nHunk #37 succeeded at 3782 (offset 27 lines).\nHunk #38 succeeded at 3795 (offset 27 lines).\nHunk #39 succeeded at 7175 (offset 27 lines).\n1 out of 39 hunks FAILED -- saving rejects to file\nsrc/interfaces/libpq/fe-connect.c.rej\npatching file src/interfaces/libpq/fe-exec.c\npatching file src/interfaces/libpq/fe-lobj.c\npatching file src/interfaces/libpq/fe-misc.c\npatching file src/interfaces/libpq/fe-protocol3.c\npatching file src/interfaces/libpq/fe-secure-common.c\npatching file src/interfaces/libpq/fe-secure-gssapi.c\nHunk #3 succeeded at 590 (offset 2 lines).\npatching file src/interfaces/libpq/fe-secure-openssl.c\nHunk #3 succeeded at 415 (offset 5 lines).\nHunk #4 succeeded at 967 (offset 5 lines).\nHunk #5 succeeded at 993 (offset 5 lines).\nHunk #6 succeeded at 1037 (offset 5 lines).\nHunk #7 succeeded at 1089 (offset 5 lines).\nHunk #8 succeeded at 1122 (offset 5 lines).\nHunk #9 succeeded at 1140 (offset 5 lines).\nHunk #10 succeeded at 1239 (offset 5 lines).\nHunk #11 succeeded at 1250 (offset 5 lines).\nHunk #12 succeeded at 1265 (offset 5 lines).\nHunk #13 succeeded at 1278 (offset 5 lines).\nHunk #14 succeeded at 1315 (offset 5 lines).\nHunk #15 succeeded at 1326 (offset 5 lines).\nHunk #16 succeeded at 1383 (offset 5 lines).\nHunk #17 succeeded at 1399 (offset 5 lines).\nHunk #18 succeeded at 1452 (offset 5 lines).\nHunk #19 succeeded at 1494 (offset 5 lines).\npatching file src/interfaces/libpq/fe-secure.c\npatching file src/interfaces/libpq/libpq-int.h\n\n\n",
"msg_date": "Tue, 28 Feb 2023 15:59:03 -0500",
"msg_from": "Gregory Stark <stark@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 28 Feb 2023 at 15:59, Gregory Stark <stark@postgresql.org> wrote:\n>\n> This looks like it needs a rebase.\n\nSo I'm updating the patch to Waiting on Author\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:09:09 -0500",
"msg_from": "Greg S <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 1 Mar 2023 at 20:09, Greg S <stark.cfm@gmail.com> wrote:\n>\n> On Tue, 28 Feb 2023 at 15:59, Gregory Stark <stark@postgresql.org> wrote:\n> >\n> > This looks like it needs a rebase.\n\ndone",
"msg_date": "Wed, 1 Mar 2023 20:47:46 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 1 Mar 2023 at 14:48, Jelte Fennema <postgres@jeltef.nl> wrote:\n\n> > > This looks like it needs a rebase.\n>\n> done\n\nGreat. Please update the CF entry to Needs Review or Ready for\nCommitter as appropriate :)\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:50:45 -0500",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 1 Mar 2023 at 20:51, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> Great. Please update the CF entry to Needs Review or Ready for\n> Committer as appropriate :)\n\nI realised I rebased a slightly outdated version of my branch (thanks\nto git's --force-with-lease flag). Attached is the newest version\nrebased (only patch 0004 changed slightly).\n\nAnd I updated the CF entry to Ready for Committer now.",
"msg_date": "Wed, 1 Mar 2023 21:00:49 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Updated wording in the docs slightly.\n\nOn Wed, 1 Mar 2023 at 21:00, Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> On Wed, 1 Mar 2023 at 20:51, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> > Great. Please update the CF entry to Needs Review or Ready for\n> > Committer as appropriate :)\n>\n> I realised I rebased a slightly outdated version of my branch (thanks\n> to git's --force-with-lease flag). Attached is the newest version\n> rebased (only patch 0004 changed slightly).\n>\n> And I updated the CF entry to Ready for Committer now.",
"msg_date": "Mon, 6 Mar 2023 16:42:13 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "It looks like this needs a big rebase in fe-auth.c, fe-auth-scram.c and\nfe-connect.c. Every hunk is failing, which perhaps means the code\nyou're patching has been moved or refactored?\n\n\n",
"msg_date": "Tue, 14 Mar 2023 13:46:07 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> It looks like this needs a big rebase in fea-uth.c fe-auth-scram.c and\n> fe-connect.c. Every hunk is failing which perhaps means the code\n> you're patching has been moved or refactored?\n\nThe cfbot is giving up after\nv14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,\nbut that's been superseded (at least in part) by b6dfee28f.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Mar 2023 13:58:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 14 Mar 2023 at 13:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> > It looks like this needs a big rebase in fea-uth.c fe-auth-scram.c and\n> > fe-connect.c. Every hunk is failing which perhaps means the code\n> > you're patching has been moved or refactored?\n>\n> The cfbot is giving up after\n> v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,\n> but that's been superseded (at least in part) by b6dfee28f.\n\nAh, same with Jelte Fennema's patch for load balancing in libpq.\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:03:36 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "The rebase was indeed trivial (git handled everything automatically),\nbecause my first patch was doing a superset of the changes that were\ncommitted in b6dfee28f. Attached are the new patches.\n\nOn Tue, 14 Mar 2023 at 19:04, Greg Stark <stark@mit.edu> wrote:\n>\n> On Tue, 14 Mar 2023 at 13:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> > > It looks like this needs a big rebase in fea-uth.c fe-auth-scram.c and\n> > > fe-connect.c. Every hunk is failing which perhaps means the code\n> > > you're patching has been moved or refactored?\n> >\n> > The cfbot is giving up after\n> > v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,\n> > but that's been superseded (at least in part) by b6dfee28f.\n>\n> Ah, same with Jelte Fennema's patch for load balancing in libpq.\n>\n> --\n> greg",
"msg_date": "Wed, 15 Mar 2023 09:49:23 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Rebased after conflicts with bfc9497ece01c7c45437bc36387cb1ebe346f4d2\n\nAlso included the fix for feedback from Daniel on patch 2, which he\nhad shared in the load balancing thread.\n\nOn Wed, 15 Mar 2023 at 09:49, Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> The rebase was indeed trivial (git handled everything automatically),\n> because my first patch was doing a superset of the changes that were\n> committed in b6dfee28f. Attached are the new patches.\n>\n> On Tue, 14 Mar 2023 at 19:04, Greg Stark <stark@mit.edu> wrote:\n> >\n> > On Tue, 14 Mar 2023 at 13:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> > > > It looks like this needs a big rebase in fea-uth.c fe-auth-scram.c and\n> > > > fe-connect.c. Every hunk is failing which perhaps means the code\n> > > > you're patching has been moved or refactored?\n> > >\n> > > The cfbot is giving up after\n> > > v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,\n> > > but that's been superseded (at least in part) by b6dfee28f.\n> >\n> > Ah, same with Jelte Fennema's patch for load balancing in libpq.\n> >\n> > --\n> > greg",
"msg_date": "Wed, 22 Mar 2023 13:32:30 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hi Jelte,\n\nI had a look into your patchset (v16), did a quick review and played a\nbit with the feature.\n\nPatch 2 is missing the documentation about PQcancelSocket() and contains\na few typos; please find attached a (fixup) patch to correct these.\n\n\n--- a/src/interfaces/libpq/libpq-fe.h\n+++ b/src/interfaces/libpq/libpq-fe.h\n@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);\n /* Synchronous (blocking) */\n extern void PQreset(PGconn *conn);\n \n+/* issue a cancel request */\n+extern PGcancelConn * PQcancelSend(PGconn *conn);\n[...]\n\nMaybe I'm missing something, but this function above seems a bit\nstrange. Namely, I wonder why it returns a PGcancelConn and what's the\npoint of requiring the user to call PQcancelStatus() to see if something\ngot wrong. Maybe it could be defined as:\n\n int PQcancelSend(PGcancelConn *cancelConn);\n\nwhere the return value would be status? And the user would only need to\ncall PQcancelErrorMessage() in case of error. This would leave only one\nsingle way to create a PGcancelConn value (i.e. PQcancelConn()), which\nseems less confusing to me.\n\nJelte Fennema wrote:\n> Especially since I ran into another use case that I would want to use\n> this patch for recently: Adding an async cancel function to Python\n> it's psycopg3 library. This library exposes both a Connection class\n> and an AsyncConnection class (using python its asyncio feature). But\n> one downside of the AsyncConnection type is that it doesn't have a\n> cancel method.\n\nAs part of my testing, I've implemented non-blocking cancellation in\nPsycopg, based on v16 on this patchset. Overall this worked fine and\nseems useful; if you want to try it:\n\n https://github.com/dlax/psycopg3/tree/pg16/non-blocking-pqcancel\n\n(The only thing I found slightly inconvenient is the need to convey the\nconnection encoding (from PGconn) when handling error message from the\nPGcancelConn.)\n\nCheers,\nDenis",
"msg_date": "Tue, 28 Mar 2023 16:53:18 +0200",
"msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 28 Mar 2023 at 16:54, Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n> I had a look into your patchset (v16), did a quick review and played a\n> bit with the feature.\n>\n> Patch 2 is missing the documentation about PQcancelSocket() and contains\n> a few typos; please find attached a (fixup) patch to correct these.\n\nThanks, applied that patch and attached a new patchset.\n\n> Namely, I wonder why it returns a PGcancelConn and what's the\n> point of requiring the user to call PQcancelStatus() to see if something\n> got wrong. Maybe it could be defined as:\n>\n> int PQcancelSend(PGcancelConn *cancelConn);\n>\n> where the return value would be status? And the user would only need to\n> call PQcancelErrorMessage() in case of error. This would leave only one\n> single way to create a PGcancelConn value (i.e. PQcancelConn()), which\n> seems less confusing to me.\n\nTo clarify what you mean, the API would then be like this:\nPGcancelConn cancelConn = PQcancelConn(conn);\nif (PQcancelSend(cancelConn) == CONNECTION_BAD) {\n printf(\"ERROR %s\\n\", PQcancelErrorMessage(cancelConn))\n exit(1)\n}\n\nInstead of:\nPGcancelConn cancelConn = PQcancelSend(conn);\nif (PQcancelStatus(cancelConn) == CONNECTION_BAD) {\n printf(\"ERROR %s\\n\", PQcancelErrorMessage(cancelConn))\n exit(1)\n}\n\nThose are so similar, that I have no preference either way. If more\npeople prefer one over the other I'm happy to change it, but for now\nI'll keep it as is.\n\n> As part of my testing, I've implemented non-blocking cancellation in\n> Psycopg, based on v16 on this patchset. Overall this worked fine and\n> seems useful; if you want to try it:\n>\n> https://github.com/dlax/psycopg3/tree/pg16/non-blocking-pqcancel\n\nThat's great to hear! 
I'll try to take a closer look at that change tomorrow.\n\n> (The only thing I found slightly inconvenient is the need to convey the\n> connection encoding (from PGconn) when handling error message from the\n> PGcancelConn.)\n\nCould you expand a bit more on this? And if you have any idea on how\nto improve the API with regards to this?",
"msg_date": "Tue, 28 Mar 2023 17:54:06 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Jelte Fennema wrote:\n> > Namely, I wonder why it returns a PGcancelConn and what's the\n> > point of requiring the user to call PQcancelStatus() to see if something\n> > got wrong. Maybe it could be defined as:\n> >\n> > int PQcancelSend(PGcancelConn *cancelConn);\n> >\n> > where the return value would be status? And the user would only need to\n> > call PQcancelErrorMessage() in case of error. This would leave only one\n> > single way to create a PGcancelConn value (i.e. PQcancelConn()), which\n> > seems less confusing to me.\n> \n> To clarify what you mean, the API would then be like this:\n> PGcancelConn cancelConn = PQcancelConn(conn);\n> if (PQcancelSend(cancelConn) == CONNECTION_BAD) {\n> printf(\"ERROR %s\\n\", PQcancelErrorMessage(cancelConn))\n> exit(1)\n> }\n\nI'm not sure it's worth returning the connection status, maybe just an\nint value (the return value of connectDBComplete() for instance).\n\nMore importantly, not having PQcancelSend() creating the PGcancelConn\nmakes reuse of that value, passing through PQcancelReset(), more\nintuitive. 
E.g., in the tests:\n\ndiff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c\nindex 6764ab513b..91363451af 100644\n--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c\n+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c\n@@ -217,17 +217,18 @@ test_cancel(PGconn *conn, const char *conninfo)\n \t\tpg_fatal(\"failed to run PQrequestCancel: %s\", PQerrorMessage(conn));\n \tconfirm_query_cancelled(conn);\n \n+\tcancelConn = PQcancelConn(conn);\n+\n \t/* test PQcancelSend */\n \tsend_cancellable_query(conn, monitorConn);\n-\tcancelConn = PQcancelSend(conn);\n-\tif (PQcancelStatus(cancelConn) == CONNECTION_BAD)\n+\tif (PQcancelSend(cancelConn) == CONNECTION_BAD)\n \t\tpg_fatal(\"failed to run PQcancelSend: %s\", PQcancelErrorMessage(cancelConn));\n \tconfirm_query_cancelled(conn);\n-\tPQcancelFinish(cancelConn);\n+\n+\tPQcancelReset(cancelConn);\n \n \t/* test PQcancelConn and then polling with PQcancelPoll */\n \tsend_cancellable_query(conn, monitorConn);\n-\tcancelConn = PQcancelConn(conn);\n \tif (PQcancelStatus(cancelConn) == CONNECTION_BAD)\n \t\tpg_fatal(\"bad cancel connection: %s\", PQcancelErrorMessage(cancelConn));\n \twhile (true)\n\nOtherwise, it's not clear if the PGcancelConn created by PQcancelSend()\nshould be reused or not. But maybe that's a matter of documentation?\n\n\n> > As part of my testing, I've implemented non-blocking cancellation in\n> > Psycopg, based on v16 on this patchset. Overall this worked fine and\n> > seems useful; if you want to try it:\n> >\n> > https://github.com/dlax/psycopg3/tree/pg16/non-blocking-pqcancel\n> \n> That's great to hear! 
I'll try to take a closer look at that change tomorrow.\n\nSee also https://github.com/psycopg/psycopg/issues/534 if you want to\ndiscuss about this.\n\n> > (The only thing I found slightly inconvenient is the need to convey the\n> > connection encoding (from PGconn) when handling error message from the\n> > PGcancelConn.)\n> \n> Could you expand a bit more on this? And if you have any idea on how\n> to improve the API with regards to this?\n\nThe thing is that we need the connection encoding (client_encoding) when\neventually forwarding the result of PQcancelErrorMessage(), decoded, to\nthe user. More specifically, it seems to me that we'd the encoding of\nthe *cancel connection*, but since PQparameterStatus() cannot be used\nwith a PGcancelConn, I use that of the PGconn. Roughly, in Python:\n\n encoding = conn.parameter_status(b\"client_encoding\")\n # i.e, in C: char *encoding PQparameterStatus(conn, \"client_encoding\");\n cancel_conn = conn.cancel_conn()\n # i.e., in C: PGcancelConn *cancelConn = PQcancelConn(conn);\n # [... then work with with cancel_conn ...]\n if cancel_conn.status == ConnStatus.BAD:\n raise OperationalError(cancel_conn.error_message().decode(encoding))\n\nThis feels a bit non-atomic to me; isn't there a risk that\nclient_encoding be changed between PQparameterStatus(conn) and\nPQcancelConn(conn) calls?\n\nSo maybe PQcancelParameterStatus(PGcancelConn *cancelConn, char *name)\nis needed?\n\n\n",
"msg_date": "Wed, 29 Mar 2023 10:43:14 +0200",
"msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 29 Mar 2023 at 10:43, Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n> More importantly, not having PQcancelSend() creating the PGcancelConn\n> makes reuse of that value, passing through PQcancelReset(), more\n> intuitive. E.g., in the tests:\n\nYou convinced me. Attached is an updated patch where PQcancelSend\ntakes the PGcancelConn and returns 1 or 0.\n\n> The thing is that we need the connection encoding (client_encoding) when\n> eventually forwarding the result of PQcancelErrorMessage(), decoded, to\n> the user.\n\nCancel connections don't have an encoding specified. They never\nreceive an error from the server. All errors come from the machine\nthat libpq is on. So I think you're making the decoding more\ncomplicated than it needs to be.",
"msg_date": "Wed, 29 Mar 2023 17:58:51 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Jelte Fennema wrote:\n> On Wed, 29 Mar 2023 at 10:43, Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n> > More importantly, not having PQcancelSend() creating the PGcancelConn\n> > makes reuse of that value, passing through PQcancelReset(), more\n> > intuitive. E.g., in the tests:\n> \n> You convinced me. Attached is an updated patch where PQcancelSend\n> takes the PGcancelConn and returns 1 or 0.\n\nPatch 5 is missing respective changes; please find attached a fixup\npatch for these.",
"msg_date": "Thu, 30 Mar 2023 10:07:28 +0200",
"msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 30 Mar 2023 at 10:07, Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n> Patch 5 is missing respective changes; please find attached a fixup\n> patch for these.\n\nThanks, attached are newly rebased patches that include this change. I\nalso cast the result of PQcancelSend to to void in the one case where\nit's ignored on purpose. Note that the patchset shrunk by one, since\nthe original patch 0002 has been committed now.",
"msg_date": "Thu, 30 Mar 2023 12:17:21 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "The patch set does not apply any more.\n\nI tried to rebase locally; even leaving out 1 (\"libpq: Run pgindent \nafter a9e9a9f32b3\"), patch 4 (\"Start using new libpq cancel APIs\") is \nharder to resolve following 983ec23007b (I suppose).\n\nApart from that, the implementation in v19 sounds good to me, and seems \nworthwhile. FWIW, as said before, I also implemented it in Psycopg as a \nsort of end-to-end validation.\n\n\n",
"msg_date": "Fri, 7 Apr 2023 10:01:01 +0200",
"msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Okay, I rebased again. Indeed 983ec23007b gave the most problems.\n\nOn Fri, 7 Apr 2023 at 10:02, Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n>\n> The patch set does not apply any more.\n>\n> I tried to rebase locally; even leaving out 1 (\"libpq: Run pgindent\n> after a9e9a9f32b3\"), patch 4 (\"Start using new libpq cancel APIs\") is\n> harder to resolve following 983ec23007b (I suppose).\n>\n> Appart from that, the implementation in v19 sounds good to me, and seems\n> worthwhile. FWIW, as said before, I also implemented it in Psycopg in a\n> sort of an end-to-end validation.",
"msg_date": "Fri, 21 Apr 2023 10:20:35 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "I noticed that cfbot was unable to run tests due to some rebase\nconflict. It seems the pgindent changes from patch 1 have now been\nmade.\nSo adding the rebased patches without patch 1 now to unblock cfbot.",
"msg_date": "Mon, 19 Jun 2023 12:52:48 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Rebased again to resolve some conflicts",
"msg_date": "Mon, 17 Jul 2023 15:00:50 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Trivial observation: these patches obviously introduce many instances\nof words derived from \"cancel\", but they don't all conform to\nestablished project decisions (cf 21f1e15a) about how to spell them.\nWe follow the common en-US usage: \"canceled\", \"canceling\" but\n\"cancellation\". Blame Webstah et al.\n\nhttps://english.stackexchange.com/questions/176957/cancellation-canceled-canceling-us-usage\n\n\n",
"msg_date": "Mon, 13 Nov 2023 15:38:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Mon, 13 Nov 2023 at 03:39, Thomas Munro <thomas.munro@gmail.com> wrote:\n> We follow the common en-US usage: \"canceled\", \"canceling\" but\n> \"cancellation\". Blame Webstah et al.\n\nI changed all the places that were not adhering to those spellings.\nThere were also a few of such places in parts of the codebase that\nthese changes didn't touch. I included a new 0001 patch to fix those.\n\nI do feel like this patchset is pretty much in a committable state. So\nit would be very much appreciated if any comitter could help push it\nover the finish line.",
"msg_date": "Thu, 14 Dec 2023 13:57:47 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 14 Dec 2023 at 13:57, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> I changed all the places that were not adhering to those spellings.\n\nIt seems I forgot a /g on my sed command to do this so it turned out I\nmissed one that caused the test to fail to compile... Attached is a\nfixed version.\n\nI also updated the patchset to use the EOF detection provided by\n0a5c46a7a488f2f4260a90843bb9de6c584c7f4e instead of introducing a new\nway of EOF detection using a -2 return value.",
"msg_date": "Wed, 20 Dec 2023 14:46:52 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 20 Dec 2023 at 19:17, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n>\n> On Thu, 14 Dec 2023 at 13:57, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> > I changed all the places that were not adhering to those spellings.\n>\n> It seems I forgot a /g on my sed command to do this so it turned out I\n> missed one that caused the test to fail to compile... Attached is a\n> fixed version.\n>\n> I also updated the patchset to use the EOF detection provided by\n> 0a5c46a7a488f2f4260a90843bb9de6c584c7f4e instead of introducing a new\n> way of EOF detection using a -2 return value.\n\nCFBot shows that the patch does not apply anymore as in [1]:\npatching file doc/src/sgml/libpq.sgml\n...\npatching file src/interfaces/libpq/exports.txt\nHunk #1 FAILED at 191.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/interfaces/libpq/exports.txt.rej\npatching file src/interfaces/libpq/fe-connect.c\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_3511.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 07:29:07 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 02:59, vignesh C <vignesh21@gmail.com> wrote:\n> Please post an updated version for the same.\n\nDone.",
"msg_date": "Fri, 26 Jan 2024 11:44:24 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Pushed 0001.\n\nI wonder, would it make sense to put all these new functions in a\nseparate file fe-cancel.c?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n",
"msg_date": "Fri, 26 Jan 2024 13:11:02 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 13:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I wonder, would it make sense to put all these new functions in a\n> separate file fe-cancel.c?\n\n\nOkay I tried doing that. I think the end result is indeed quite nice,\nhaving all the cancellation related functions together in a file. But\nit did require making a bunch of static functions in fe-connect\nextern, and adding them to libpq-int.h. On one hand that seems fine to\nme, on the other maybe that indicates that this cancellation logic\nmakes sense to be in the same file as the other connection functions\n(in a sense, connecting is all that a cancel request does).",
"msg_date": "Fri, 26 Jan 2024 17:52:45 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Jan-26, Jelte Fennema-Nio wrote:\n\n> Okay I tried doing that. I think the end result is indeed quite nice,\n> having all the cancellation related functions together in a file. But\n> it did require making a bunch of static functions in fe-connect\n> extern, and adding them to libpq-int.h. On one hand that seems fine to\n> me, on the other maybe that indicates that this cancellation logic\n> makes sense to be in the same file as the other connection functions\n> (in a sense, connecting is all that a cancel request does).\n\nYeah, I see that point of view as well. I like the end result; the\nadditional protos in libpq-int.h don't bother me. Does anybody else\nwants to share their opinion on it? If none, then I'd consider going\nahead with this version.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"We’ve narrowed the problem down to the customer’s pants being in a situation\n of vigorous combustion\" (Robert Haas, Postgres expert extraordinaire)\n\n\n",
"msg_date": "Fri, 26 Jan 2024 18:19:41 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 18:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Yeah, I see that point of view as well. I like the end result; the\n> additional protos in libpq-int.h don't bother me. Does anybody else\n> wants to share their opinion on it? If none, then I'd consider going\n> ahead with this version.\n\nTo be clear, I'm +1 on the new file structure (although if people feel\nstrongly against it, I don't care enough to make a big deal out of\nit).\n\n@Alvaro did you have any other comments on the contents of the patch btw?\n\n\n",
"msg_date": "Sat, 27 Jan 2024 00:14:58 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 22:22, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n>\n> On Fri, 26 Jan 2024 at 13:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > I wonder, would it make sense to put all these new functions in a\n> > separate file fe-cancel.c?\n>\n>\n> Okay I tried doing that. I think the end result is indeed quite nice,\n> having all the cancellation related functions together in a file. But\n> it did require making a bunch of static functions in fe-connect\n> extern, and adding them to libpq-int.h. On one hand that seems fine to\n> me, on the other maybe that indicates that this cancellation logic\n> makes sense to be in the same file as the other connection functions\n> (in a sense, connecting is all that a cancel request does).\n\nCFBot shows that the patch has few compilation errors as in [1]:\n[17:07:07.621] /usr/bin/ld:\n../../../src/fe_utils/libpgfeutils.a(cancel.o): in function\n`handle_sigint':\n[17:07:07.621] cancel.c:(.text+0x50): undefined reference to `PQcancel'\n[17:07:07.621] /usr/bin/ld:\n../../../src/fe_utils/libpgfeutils.a(cancel.o): in function\n`SetCancelConn':\n[17:07:07.621] cancel.c:(.text+0x10c): undefined reference to `PQfreeCancel'\n[17:07:07.621] /usr/bin/ld: cancel.c:(.text+0x114): undefined\nreference to `PQgetCancel'\n[17:07:07.621] /usr/bin/ld:\n../../../src/fe_utils/libpgfeutils.a(cancel.o): in function\n`ResetCancelConn':\n[17:07:07.621] cancel.c:(.text+0x148): undefined reference to `PQfreeCancel'\n[17:07:07.621] /usr/bin/ld:\n../../../src/fe_utils/libpgfeutils.a(connect_utils.o): in function\n`disconnectDatabase':\n[17:07:07.621] connect_utils.c:(.text+0x2fc): undefined reference to\n`PQcancelConn'\n[17:07:07.621] /usr/bin/ld: connect_utils.c:(.text+0x307): undefined\nreference to `PQcancelSend'\n[17:07:07.621] /usr/bin/ld: connect_utils.c:(.text+0x30f): undefined\nreference to `PQcancelFinish'\n[17:07:07.623] /usr/bin/ld: ../../../src/interfaces/libpq/libpq.so:\nundefined reference to 
`PQcancelPoll'\n[17:07:07.626] collect2: error: ld returned 1 exit status\n[17:07:07.626] make[3]: *** [Makefile:31: pg_amcheck] Error 1\n[17:07:07.626] make[2]: *** [Makefile:45: all-pg_amcheck-recurse] Error 2\n[17:07:07.626] make[2]: *** Waiting for unfinished jobs....\n[17:07:08.126] /usr/bin/ld: ../../../src/interfaces/libpq/libpq.so:\nundefined reference to `PQcancelPoll'\n[17:07:08.130] collect2: error: ld returned 1 exit status\n[17:07:08.131] make[3]: *** [Makefile:42: initdb] Error 1\n[17:07:08.131] make[2]: *** [Makefile:45: all-initdb-recurse] Error 2\n[17:07:08.492] /usr/bin/ld: ../../../src/interfaces/libpq/libpq.so:\nundefined reference to `PQcancelPoll'\n[17:07:08.495] collect2: error: ld returned 1 exit status\n[17:07:08.496] make[3]: *** [Makefile:50: pg_basebackup] Error 1\n[17:07:08.496] make[2]: *** [Makefile:45: all-pg_basebackup-recurse] Error 2\n[17:07:09.060] /usr/bin/ld: parallel.o: in function `sigTermHandler':\n[17:07:09.060] parallel.c:(.text+0x1aa): undefined reference to `PQcancel'\n\nPlease post an updated version for the same.\n\n[1] - https://cirrus-ci.com/task/6210637211107328\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 28 Jan 2024 08:45:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Sun, 28 Jan 2024 at 04:15, vignesh C <vignesh21@gmail.com> wrote:\n> CFBot shows that the patch has few compilation errors as in [1]:\n> [17:07:07.621] /usr/bin/ld:\n> ../../../src/fe_utils/libpgfeutils.a(cancel.o): in function\n> `handle_sigint':\n> [17:07:07.621] cancel.c:(.text+0x50): undefined reference to `PQcancel'\n\nI forgot to update ./configure based builds with the new file, only\nmeson was working. Also it seems I trimmed the header list fe-cancel.c\na bit too much for OSX, so I added unistd.h back.\n\nBoth of those are fixed now.",
"msg_date": "Sun, 28 Jan 2024 10:51:48 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Sun, 28 Jan 2024 at 10:51, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> Both of those are fixed now.\n\nOkay, there turned out to also be an issue on Windows with\nsetKeepalivesWin32 not being available in fe-cancel.c. That's fixed\nnow too (as well as some minor formatting issues).",
"msg_date": "Sun, 28 Jan 2024 13:39:42 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Jan-28, Jelte Fennema-Nio wrote:\n\n> On Sun, 28 Jan 2024 at 10:51, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> > Both of those are fixed now.\n> \n> Okay, there turned out to also be an issue on Windows with\n> setKeepalivesWin32 not being available in fe-cancel.c. That's fixed\n> now too (as well as some minor formatting issues).\n\nThanks! I committed 0001 now. I also renamed the new\npq_parse_int_param to pqParseIntParam, for consistency with other\nroutines there. Please rebase the other patches.\n\nThanks,\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nThou shalt check the array bounds of all strings (indeed, all arrays), for\nsurely where thou typest \"foo\" someone someday shall type\n\"supercalifragilisticexpialidocious\" (5th Commandment for C programmers)\n\n\n",
"msg_date": "Mon, 29 Jan 2024 12:44:44 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Mon, 29 Jan 2024 at 12:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Thanks! I committed 0001 now. I also renamed the new\n> pq_parse_int_param to pqParseIntParam, for consistency with other\n> routines there. Please rebase the other patches.\n\nAwesome! Rebased, and renamed pq_release_conn_hosts to\npqReleaseConnHosts for the same consistency reasons.",
"msg_date": "Mon, 29 Jan 2024 13:28:22 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Jan-29, Jelte Fennema-Nio wrote:\n\n> On Mon, 29 Jan 2024 at 12:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Thanks! I committed 0001 now. I also renamed the new\n> > pq_parse_int_param to pqParseIntParam, for consistency with other\n> > routines there. Please rebase the other patches.\n> \n> Awesome! Rebased, and renamed pq_release_conn_hosts to\n> pqReleaseConnHosts for the same consistency reasons.\n\nThank you, looks good.\n\nI propose the following minor/trivial fixes over your initial 3 patches.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)",
"msg_date": "Fri, 2 Feb 2024 13:19:38 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, 2 Feb 2024 at 13:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Thank you, looks good.\n>\n> I propose the following minor/trivial fixes over your initial 3 patches.\n\nAll of those seem good like fixes. Attached is an updated patchset\nwhere they are all applied. As well as adding a missing word (\"been\")\nin a comment that I noticed while reading your fixes.",
"msg_date": "Fri, 2 Feb 2024 15:03:39 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hello,\n\nThe patched docs claim that PQrequestCancel is insecure, but neither the\ncode nor docs explain why. The docs for PQcancel on the other hand do\nmention that encryption is not used; does that apply to PQrequestCancel\nas well and is that the reason? If so, I think we should copy the\nwarning and perhaps include a code comment about that. Also, maybe that\nfinal phrase in PQcancel should be a <caution> box: remove from \"So, for\nexample\" and add <caution><para>Because gssencmode and sslencmode are\nnot preserved from the original connection, the cancel request is not\nencrypted.</para></caution> or something like that.\n\n\nI wonder if Section 33.7 Canceling Queries in Progress should be split\nin three subsections, and I propose the following order:\n\n33.7.1 PGcancelConn-based Cancellation API\n PQcancelConn\t\t-- we first document the basics\n PQcancelSend\n PQcancelFinish\n PQcancelPoll\t\t-- the nonblocking interface is documented next\n PQcancelReset\t\t-- reuse a cancelconn, later in docs because it's more advanced\n PQcancelStatus\t-- accessors go last\n PQcancelSocket\n PQcancelErrorMessage\n\n33.7.2 Obsolete interface\n PQgetCancel\n PQfreeCancel\n PQcancel\n\n33.7.3 Deprecated and Insecure Methods\n PQrequestCancel\n\nI have a hard time coming up with good subsection titles though.\n\nNow, looking at this list, I think it's surprising that the nonblocking\nrequest for a cancellation is called PQcancelPoll. PQcancelSend() is at\nodds with the asynchronous query API, which uses the verb \"send\" for the\nasynchronous variants. This would suggest that PQcancelPoll should\nactually be called PQcancelSend or maybe PQcancelStart (mimicking\nPQconnectStart). I'm not sure what's a good alternative name for the\nblocking one, which you have called PQcancelSend.\n\nI see upthread that the names of these functions were already quite\nheavily debated. Sorry to beat that dead horse some more ... 
I'm just\nnot sure it's a decided matter.\n\nLastly -- the doc blurbs that say simply \"a version of XYZ that can be\nused for cancellation connections\" are a bit underwhelming. Shouldn't\nwe document these more fully instead of making users go read the docs\nfor the other functions and wonder what the differences might be, if\nany?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Before you were born your parents weren't as boring as they are now. They\ngot that way paying your bills, cleaning up your room and listening to you\ntell them how idealistic you are.\" -- Charles J. Sykes' advice to teenagers\n\n\n",
"msg_date": "Fri, 2 Feb 2024 16:05:56 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, 2 Feb 2024 at 16:06, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Now, looking at this list, I think it's surprising that the nonblocking\n> request for a cancellation is called PQcancelPoll. PQcancelSend() is at\n> odds with the asynchronous query API, which uses the verb \"send\" for the\n> asynchronous variants. This would suggest that PQcancelPoll should\n> actually be called PQcancelSend or maybe PQcancelStart (mimicking\n> PQconnectStart). I'm not sure what's a good alternative name for the\n> blocking one, which you have called PQcancelSend.\n\nI agree that Send is an unfortunate suffix. I'd love to use PQcancel\nfor this, but obviously that one is already taken. Some other options\nthat I can think of are (from favorite to less favorite):\n- PQcancelBlocking\n- PQcancelAndWait\n- PQcancelGo\n- PQcancelNow\n\nFinally, another option would be to renome PQcancelConn to\nPQgetCancelConn and then rename PQcancelSend to PQcancelConn.\n\nRegarding PQcancelPoll, I think it's a good name for the polling\nfunction, but I agree it's a bit confusing to use it to also start\nsending the connection. Even the code of PQcancelPoll basically admits\nthat this is confusing behaviour:\n\n /*\n * Before we can call PQconnectPoll we first need to start the connection\n * using pqConnectDBStart. Non-cancel connections already do this whenever\n * the connection is initialized. But cancel connections wait until the\n * caller starts polling, because there might be a large delay between\n * creating a cancel connection and actually wanting to use it.\n */\n if (conn->status == CONNECTION_STARTING)\n {\n if (!pqConnectDBStart(&cancelConn->conn))\n {\n cancelConn->conn.status = CONNECTION_STARTED;\n return PGRES_POLLING_WRITING;\n }\n }\n\nThe only reasonable thing I can think of to make that situation better\nis to move that part of the function outside of PQcancelPoll and\ncreate a dedicated PQcancelStart function for it. 
It introduces an\nextra function, but it does seem more in line with how we do the\nregular connection establishment. Basically you would have code like\nthis then, which looks quite nice honestly:\n\n cancelConn = PQcancelConn(conn);\n if (!PQcancelStart(cancelConn))\n pg_fatal(\"bad cancel connection: %s\", PQcancelErrorMessage(cancelConn));\n while (true)\n {\n // polling using PQcancelPoll here\n }\n\n\n",
"msg_date": "Fri, 2 Feb 2024 23:53:16 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Feb-02, Jelte Fennema-Nio wrote:\n\n> The only reasonable thing I can think of to make that situation better\n> is to move that part of the function outside of PQcancelPoll and\n> create a dedicated PQcancelStart function for it. It introduces an\n> extra function, but it does seem more in line with how we do the\n> regular connection establishment. Basically you would have code like\n> this then, which looks quite nice honestly:\n> \n> cancelConn = PQcancelConn(conn);\n> if (!PQcancelStart(cancelConn))\n> pg_fatal(\"bad cancel connection: %s\", PQcancelErrorMessage(cancelConn));\n> while (true)\n> {\n> // polling using PQcancelPoll here\n> }\n\nMaybe this is okay? I'll have a look at the whole final situation more\ncarefully later; or if somebody else wants to share an opinion, please\ndo so.\n\nIn the meantime I pushed your 0002 and 0003 patches, so you can take\nthis as an opportunity to rebase the remaining ones.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The saddest aspect of life right now is that science gathers knowledge faster\n than society gathers wisdom.\" (Isaac Asimov)\n\n\n",
"msg_date": "Sun, 4 Feb 2024 16:39:21 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Sun, 4 Feb 2024 at 16:39, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Maybe this is okay? I'll have a look at the whole final situation more\n> carefully later; or if somebody else wants to share an opinion, please\n> do so.\n\nAttached is a new version of the final patches, with much improved\ndocs (imho) and the new function names: PQcancelStart and\nPQcancelBlocking.",
"msg_date": "Wed, 14 Feb 2024 18:20:44 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Feb-14, Jelte Fennema-Nio wrote:\n\n> Attached is a new version of the final patches, with much improved\n> docs (imho) and the new function names: PQcancelStart and\n> PQcancelBlocking.\n\nHmm, I think the changes to libpq_pipeline in 0005 should be in 0004.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 14 Feb 2024 18:41:37 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 14 Feb 2024 at 18:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hmm, I think the changes to libpq_pipeline in 0005 should be in 0004.\n\nYeah, you're correct. Fixed that now.",
"msg_date": "Wed, 14 Feb 2024 19:22:06 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "In patch 0004, I noticed a couple of typos in the documentation; please \nfind attached a fixup patch correcting these.\n\nStill in the documentation, same patch, the last paragraph documenting \nPQcancelPoll() ends as:\n\n+ indicate the current stage of the connection procedure and might \nbe useful\n+ to provide feedback to the user for example. These statuses are:\n+ </para>\n\nwhile not actually listing the \"statuses\". Should we list them? Adjust \nthe wording? Or refer to PQconnectPoll() documentation (since the \nparagraph is copied from there it seems)?\n\n\nOtherwise, the feature still works fine as far as I can tell.",
"msg_date": "Wed, 6 Mar 2024 15:03:20 +0100",
"msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 6 Mar 2024 at 15:03, Denis Laxalde <denis.laxalde@dalibo.com> wrote:\n>\n> In patch 0004, I noticed a couple of typos in the documentation; please\n> find attached a fixup patch correcting these.\n\nThanks, applied.\n\n> while not actually listing the \"statuses\". Should we list them?\n\nI listed the relevant statuses over now and updated the PQcancelStatus\ndocs to look more like the PQstatus one. I didn't list any statuses\nthat a cancel connection could never have (but a normal connection\ncan).\n\nWhile going over the list of statuses possible for a cancel connection\nI realized that the docs for PQconnectStart were not listing all\nrelevant statuses, so I fixed that in patch 0001.",
"msg_date": "Wed, 6 Mar 2024 19:09:35 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Docs: one bogus \"that that\".\n\nDid we consider having PQcancelConn() instead be called\nPQcancelCreate()? I think this better conveys that what we're doing is\ncreate an object that can be used to do something, and that nothing else\nis done with it by default. Also, the comment still says\n\"Asynchronously cancel a query on the given connection. This requires\npolling the returned PGcancelConn to actually complete the cancellation\nof the query.\" but this is no longer a good description of what this\nfunction does.\n\nWhy do we return a non-NULL pointer from PQcancelConn in the first three\ncases where we return errors? (original conn was NULL, original conn is\nPGINVALID_SOCKET, pqCopyPGconn returns failure) Wouldn't it make more\nsense to free the allocated object and return NULL? Actually, I wonder\nif there's any reason at all to return a valid pointer in any failure\ncases; I mean, do we really expect that application authors are going to\nread/report the error message from a PGcancelConn that failed to be fully\ncreated? Anyway, maybe there are reasons for this; but in any case we\nshould set ->cancelRequest in all cases, not only after the first tests\nfor errors.\n\nI think the extra PGconn inside pg_cancel_conn is useless; it would be\nsimpler to typedef PGcancelConn to PGconn in fe-cancel.c, and remove the\nindirection through the extra struct. You're actually dereferencing the\nobject in two ways in the new code, both by casting the outer object\nstraight to PGconn (taking advantage that the struct member is first in\nthe struct), and by using PGcancelConn->conn. This seems pointless. I\nmean, if we're going to cast to \"PGconn *\" in some places anyway, then\nwe may as well access all members directly. Perhaps, if you want, you\ncould add asserts that ->cancelRequest is set true in all the\nfe-cancel.c functions. Anyway, we'd still have compiler support to tell\nyou that you're passing the wrong struct to the function. 
(I didn't\nactually try to change the code this way, so I might be wrong.)\n\nWe could move the definition of struct pg_cancel to fe-cancel.c. Nobody\noutside that needs to know that definition anyway.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/\n\n\n",
"msg_date": "Wed, 6 Mar 2024 19:22:46 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 6 Mar 2024 at 19:22, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Docs: one bogus \"that that\".\n\nwill fix\n\n> Did we consider having PQcancelConn() instead be called\n> PQcancelCreate()?\n\nFine by me\n\n> Also, the comment still says\n> \"Asynchronously cancel a query on the given connection. This requires\n> polling the returned PGcancelConn to actually complete the cancellation\n> of the query.\" but this is no longer a good description of what this\n> function does.\n\nwill fix\n\n> Why do we return a non-NULL pointer from PQcancelConn in the first three\n> cases where we return errors? (original conn was NULL, original conn is\n> PGINVALID_SOCKET, pqCopyPGconn returns failure) Wouldn't it make more\n> sense to free the allocated object and return NULL? Actually, I wonder\n> if there's any reason at all to return a valid pointer in any failure\n> cases; I mean, do we really expect that application authors are going to\n> read/report the error message from a PGcancelConn that failed to be fully\n> created?\n\nI think having a useful error message when possible is quite nice. And\nI do think people will read/report this error message. Especially\nsince many people will simply pass it to PQcancelBlocking, whether\nit's NULL or not. And then check the status, and then report the error\nif the status was CONNECTION_BAD.\n\n> but in any case we\n> should set ->cancelRequest in all cases, not only after the first tests\n> for errors.\n\nmakes sense\n\n> I think the extra PGconn inside pg_cancel_conn is useless; it would be\n> simpler to typedef PGcancelConn to PGconn in fe-cancel.c, and remove the\n> indirection through the extra struct.\n\nThat sounds nice indeed. I'll try it out.\n\n> We could move the definition of struct pg_cancel to fe-cancel.c. Nobody\n> outside that needs to know that definition anyway.\n\nwill do\n\n\n",
"msg_date": "Wed, 6 Mar 2024 19:41:10 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Attached is a new patchset with various changes. I created a dedicated\n0002 patch to add tests for the already existing cancellation\nfunctions, because that seemed useful for another thread where changes\nto the cancellation protocol are being proposed[1].\n\n[1]: https://www.postgresql.org/message-id/flat/508d0505-8b7a-4864-a681-e7e5edfe32aa%40iki.fi\n\nOn Wed, 6 Mar 2024 at 19:22, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Docs: one bogus \"that that\".\n\nThis was already fixed by my previous doc changes in v32, I guess that\nemail got crossed with this one\n\n> Did we consider having PQcancelConn() instead be called\n> PQcancelCreate()?\n\nDone\n\n> \"Asynchronously cancel a query on the given connection. This requires\n> polling the returned PGcancelConn to actually complete the cancellation\n> of the query.\" but this is no longer a good description of what this\n> function does.\n\nFixed\n\n> Anyway, maybe there are reasons for this; but in any case we\n> should set ->cancelRequest in all cases, not only after the first tests\n> for errors.\n\nDone\n\n> I think the extra PGconn inside pg_cancel_conn is useless; it would be\n> simpler to typedef PGcancelConn to PGconn in fe-cancel.c, and remove the\n> indirection through the extra struct. You're actually dereferencing the\n> object in two ways in the new code, both by casting the outer object\n> straight to PGconn (taking advantage that the struct member is first in\n> the struct), and by using PGcancelConn->conn. This seems pointless. I\n> mean, if we're going to cast to \"PGconn *\" in some places anyway, then\n> we may as well access all members directly. Perhaps, if you want, you\n> could add asserts that ->cancelRequest is set true in all the\n> fe-cancel.c functions. Anyway, we'd still have compiler support to tell\n> you that you're passing the wrong struct to the function. 
(I didn't\n> actually try to change the code this way, so I might be wrong.)\n\nTurns out you were wrong about the compiler support to tell us we're\npassing the wrong struct: When both the PGconn and PGcancelConn\ntypedefs refer to the same struct, the compiler allows passing PGconn\nto PGcancelConn functions and vice versa without complaining. This\nseems enough reason for me to keep the indirection through the extra\nstruct.\n\nSo instead of adding this proposed typedef I chose to add a\ncomment to pg_cancel_conn explaining its purpose, as well as not\ncasting PGcancelConn to PGconn but always accessing the conn field for\nconsistency.\n\n> We could move the definition of struct pg_cancel to fe-cancel.c. Nobody\n> outside that needs to know that definition anyway.\n\nDone in 0003",
"msg_date": "Thu, 7 Mar 2024 11:11:32 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Here's a last one for the cfbot.\n\nI have a question about this one\n\nint\nPQcancelStart(PGcancelConn *cancelConn)\n{\n [...]\n if (cancelConn->conn.status != CONNECTION_ALLOCATED)\n {\n libpq_append_conn_error(&cancelConn->conn,\n \"cancel request is already being sent on this connection\");\n cancelConn->conn.status = CONNECTION_BAD;\n return 0;\n }\n\n\nIf we do this and we see conn.status is not ALLOCATED, meaning a cancel\nis already ongoing, shouldn't we leave conn.status alone instead of\nchanging to CONNECTION_BAD? I mean, we shouldn't be juggling the elbow\nof whoever's doing that, should we? Maybe just add the error message\nand return 0?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"If it is not right, do not do it.\nIf it is not true, do not say it.\" (Marcus Aurelius, Meditations)",
"msg_date": "Tue, 12 Mar 2024 10:19:27 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> If we do this and we see conn.status is not ALLOCATED, meaning a cancel\n> is already ongoing, shouldn't we leave conn.status alone instead of\n> changing to CONNECTION_BAD? I mean, we shouldn't be juggling the elbow\n> of whoever's doing that, should we? Maybe just add the error message\n> and return 0?\n\nI'd rather fail as hard as possible when someone is using the API\nwrongly. Not doing so is bound to cause confusion imho. e.g. if the\nstate is still CONNECTION_OK because the user forgot to call\nPQcancelReset then keeping the connection status \"as is\" might seem as\nif the cancel request succeeded even though nothing happened. So if\nthe user uses the API incorrectly then I'd rather use all the avenues\npossible to indicate that there was an error. Especially since in all\nother cases if PQcancelStart returns false CONNECTION_BAD is the\nstatus, and this in turn means that PQconnectPoll will return\nPGRES_POLLING_FAILED. So I doubt people will always check the actual\nreturn value of the function to check if an error happened. They might\ncheck PQcancelStatus or PQconnectPoll instead, because that integrates\nmore easily with the rest of their code.\n\n\n",
"msg_date": "Tue, 12 Mar 2024 10:53:24 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 10:53, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> I'd rather fail as hard as possible when someone is using the API\n> wrongly.\n\nTo be clear, this is my way of looking at it. If you feel strongly\nthat we should not change conn.status, I'm fine with making that\nchange to the patchset.\n\n\n",
"msg_date": "Tue, 12 Mar 2024 12:41:02 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Here's a last one for the cfbot.\n\nThanks for committing the first 3 patches btw. Attached a tiny change\nto 0001, which adds \"(backing struct for PGcancelConn)\" to the comment\non pg_cancel_conn.",
"msg_date": "Tue, 12 Mar 2024 15:04:21 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 15:04, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> Attached a tiny change to 0001\n\nOne more tiny comment change, stating that pg_cancel is used by the\ndeprecated PQcancel function.",
"msg_date": "Tue, 12 Mar 2024 16:45:41 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-12, Jelte Fennema-Nio wrote:\n\n> On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Here's a last one for the cfbot.\n> \n> Thanks for committing the first 3 patches btw. Attached a tiny change\n> to 0001, which adds \"(backing struct for PGcancelConn)\" to the comment\n> on pg_cancel_conn.\n\nThanks, I included it. I hope there were no other changes, because I\ndidn't verify :-) but if there were, please let me know to incorporate\nthem.\n\nI made a number of other small changes, mostly to the documentation,\nnothing fundamental. (Someday we should stop using <listentry> to\ndocument the libpq functions and use refentry's instead ... it'd be\nuseful to have manpages for these functions.)\n\nOne thing I don't like very much is release_conn_addrinfo(), which is\ncalled conditionally in two places but unconditionally in other places.\nMaybe it'd make more sense to put this conditionality inside the\nfunction itself, possibly with a \"bool force\" flag to suppress that in\nthe cases where it is not desired.\n\nIn pqConnectDBComplete, we cast the PGconn * to PGcancelConn * in order\nto call PQcancelPoll, which is a bit abusive, but I don't know how to do\nbetter. Maybe we just accept this ... but if PQcancelStart is the only\nway to have pqConnectDBStart called from a cancel connection, maybe it'd\nbe saner to duplicate pqConnectDBStart for cancel conns.\n\nThanks!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 12 Mar 2024 17:50:48 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hmm, buildfarm member kestrel (which uses\n-fsanitize=undefined,alignment) failed:\n\n# Running: libpq_pipeline -r 700 cancel port=49975 host=/tmp/dFh46H7YGc\ndbname='postgres'\ntest cancellations... \nlibpq_pipeline:260: query did not fail when it was expected\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-12%2016%3A41%3A27\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The saddest aspect of life right now is that science gathers knowledge faster\n than society gathers wisdom.\" (Isaac Asimov)\n\n\n",
"msg_date": "Tue, 12 Mar 2024 18:58:55 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-12, Alvaro Herrera wrote:\n\n> Hmm, buildfarm member kestrel (which uses\n> -fsanitize=undefined,alignment) failed:\n> \n> # Running: libpq_pipeline -r 700 cancel port=49975 host=/tmp/dFh46H7YGc\n> dbname='postgres'\n> test cancellations... \n> libpq_pipeline:260: query did not fail when it was expected\n\nHm, I tried using the same compile flags, couldn't reproduce.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n\n",
"msg_date": "Tue, 12 Mar 2024 19:28:40 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 12 Mar 2024 at 19:28, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Mar-12, Alvaro Herrera wrote:\n>\n> > Hmm, buildfarm member kestrel (which uses\n> > -fsanitize=undefined,alignment) failed:\n> >\n> > # Running: libpq_pipeline -r 700 cancel port=49975 host=/tmp/dFh46H7YGc\n> > dbname='postgres'\n> > test cancellations...\n> > libpq_pipeline:260: query did not fail when it was expected\n>\n> Hm, I tried using the same compile flags, couldn't reproduce.\n\nOkay, it seems it passed now, so I guess this test is flaky somehow.\nThe error message and the timing difference between the failed and\nsucceeded buildfarm runs clearly indicate that the pg_sleep ran its\n180 seconds to completion (so the cancel was never processed for some\nreason).\n\n**failed case**\n282/285 postgresql:libpq_pipeline / libpq_pipeline/001_libpq_pipeline\n ERROR 191.56s exit status 1\n\n**succeeded case**\n\n252/285 postgresql:libpq_pipeline / libpq_pipeline/001_libpq_pipeline\n OK 10.01s 21 subtests passed\n\nI don't see any obvious reason for how this test can be flaky, but\nI'll think a bit more about it tomorrow.\n\n\n",
"msg_date": "Tue, 12 Mar 2024 23:43:03 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> On Tue, 12 Mar 2024 at 19:28, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>> Hmm, buildfarm member kestrel (which uses\n>>> -fsanitize=undefined,alignment) failed:\n\n>> Hm, I tried using the same compile flags, couldn't reproduce.\n\n> Okay, it passed now it seems so I guess this test is flaky somehow.\n\nTwo more intermittent failures:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bushmaster&dt=2024-03-13%2003%3A15%3A09\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-03-13%2003%3A15%3A31\n\nThese animals all belong to Andres' flotilla, but other than that\nI'm not seeing a connection. I suspect it's basically just a\ntiming dependency. Have you thought about the fact that a cancel\nrequest is a no-op if it arrives after the query's done?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Mar 2024 23:53:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 04:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I suspect it's basically just a\n> timing dependency. Have you thought about the fact that a cancel\n> request is a no-op if it arrives after the query's done?\n\nI agree it's probably a timing issue. The cancel being received after\nthe query is done seems very unlikely, since the query takes 180\nseconds (assuming PG_TEST_TIMEOUT_DEFAULT is not lowered for these\nanimals). I think it's more likely that the cancel request arrives too\nearly, and is thus ignored because no query is running yet. The\ntest already had logic to wait until the query backend was in the\n\"active\" state, before sending a cancel to solve that issue. But my\nguess is that that somehow isn't enough.\n\nSadly I'm having a hard time reliably reproducing this race condition\nlocally. So it's hard to be sure what is happening here. Attached is a\npatch with a wild guess as to what the issue might be (i.e. seeing an\noutdated \"active\" state and thus passing the check even though the\nquery is not running yet)",
"msg_date": "Wed, 13 Mar 2024 11:04:43 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-13, Jelte Fennema-Nio wrote:\n\n> I agree it's probably a timing issue. The cancel being received after\n> the query is done seems very unlikely, since the query takes 180\n> seconds (assuming PG_TEST_TIMEOUT_DEFAULT is not lowered for these\n> animals). I think it's more likely that the cancel request arrives too\n> early, and thus being ignored because no query is running yet. The\n> test already had logic to wait until the query backend was in the\n> \"active\" state, before sending a cancel to solve that issue. But my\n> guess is that that somehow isn't enough.\n> \n> Sadly I'm having a hard time reliably reproducing this race condition\n> locally. So it's hard to be sure what is happening here. Attached is a\n> patch with a wild guess as to what the issue might be (i.e. seeing an\n> outdated \"active\" state and thus passing the check even though the\n> query is not running yet)\n\nI tried leaving the original running on my laptop to see if I could\nreproduce it, but got no hits ... and we didn't get any other failures\napart from the three already reported ... so it's not terribly high\nprobability. Anyway I pushed your patch now since the theory seems\nplausible; let's see if we still get the issue to reproduce. If it\ndoes, we could make the script more verbose to hunt for further clues.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 13 Mar 2024 20:00:52 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 12:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2024-Mar-13, Jelte Fennema-Nio wrote:\n> > Sadly I'm having a hard time reliably reproducing this race condition\n> > locally. So it's hard to be sure what is happening here. Attached is a\n> > patch with a wild guess as to what the issue might be (i.e. seeing an\n> > outdated \"active\" state and thus passing the check even though the\n> > query is not running yet)\n>\n> I tried leaving the original running in my laptop to see if I could\n> reproduce it, but got no hits ... and we didn't get any other failures\n> apart from the three ones already reported ... so it's not terribly high\n> probability. Anyway I pushed your patch now since the theory seems\n> plausible; let's see if we still get the issue to reproduce. If it\n> does, we could make the script more verbose to hunt for further clues.\n\nI hit this on my machine. With the attached diff I can reproduce\nconstantly (including with the most recent test patch); I think the\ncancel must be arriving between the bind/execute steps?\n\nThanks,\n--Jacob",
"msg_date": "Wed, 13 Mar 2024 12:08:30 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 20:08, Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> I hit this on my machine. With the attached diff I can reproduce\n> constantly (including with the most recent test patch); I think the\n> cancel must be arriving between the bind/execute steps?\n\nNice find! Your explanation makes total sense. Attached a patchset\nthat fixes/works around this issue by using the simple query protocol\nin the cancel test.",
"msg_date": "Thu, 14 Mar 2024 10:51:13 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-14, Jelte Fennema-Nio wrote:\n\n> On Wed, 13 Mar 2024 at 20:08, Jacob Champion\n> <jacob.champion@enterprisedb.com> wrote:\n> > I hit this on my machine. With the attached diff I can reproduce\n> > constantly (including with the most recent test patch); I think the\n> > cancel must be arriving between the bind/execute steps?\n> \n> Nice find! Your explanation makes total sense. Attached a patchset\n> that fixes/works around this issue by using the simple query protocol\n> in the cancel test.\n\nHmm, isn't this basically saying that we're giving up on reliably\ncanceling queries altogether? I mean, maybe we'd like to instead fix\nthe bug about canceling queries in extended query protocol ...\nIsn't that something you're worried about?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n",
"msg_date": "Thu, 14 Mar 2024 11:33:30 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 14 Mar 2024 at 11:33, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hmm, isn't this basically saying that we're giving up on reliably\n> canceling queries altogether? I mean, maybe we'd like to instead fix\n> the bug about canceling queries in extended query protocol ...\n> Isn't that something you're worried about?\n\nIn any case I think it's worth having (non-flaky) test coverage of our\nlibpq cancellation sending code. So I think it makes sense to commit\nthe patch I proposed, even if the backend code that handles it is\narguably buggy.\n\nRegarding the question of whether the backend code is actually buggy:\nthe way cancel requests are defined to work is a bit awkward. They\ncancel whatever operation is running on the session when they arrive.\nSo if the session is just in the middle of a Bind and Execute message\nthere is nothing to cancel. While surprising and probably not what\nsomeone would want, I don't think this behaviour is too horrible in\npractice in this case. Most of the time people cancel queries while\nthe Execute message is being processed. The new test really only runs\ninto this problem because it sends a cancel request immediately after\nsending the query.\n\nI definitely think it's worth rethinking the way we do query\ncancellations though. I think what we would probably want is a way to\ncancel a specific query/message on a session, instead of cancelling\nwhatever is running at the moment when the cancel request is processed\nby Postgres. Because this \"cancel whatever is running\" behaviour is\nfraught with issues, this Bind/Execute issue being only one of them.\nOne really annoying race condition, where a cancel request cancels\na different query than intended, can happen with this flow (that I spent\nlots of time addressing in PgBouncer):\n1. You send query A on session 1\n2. You send a cancel request for session 1 (intending to cancel query A)\n3. Query A completes by itself\n4. You now send query B\n5. The cancel request is now processed\n6. Query B is now cancelled\n\nBut solving that race condition would involve changing the postgres\nprotocol, which I'm trying to make possible with the first few commits\nin[1]. And while those first few commits might still land in PG17, I\ndon't think a large protocol change like adding query identifiers to\ncancel requests is feasible for PG17 anymore.\n\n\n",
"msg_date": "Thu, 14 Mar 2024 12:36:32 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "I enabled the test again and also pushed the changes to dblink,\nisolationtester and fe_utils (which AFAICS is used by pg_dump,\npg_amcheck, reindexdb and vacuumdb). I chickened out of committing the\npostgres_fdw changes though, so here they are again. Not sure I'll find\ncourage to get these done by tomorrow, or whether I should just leave them\nfor Fujita-san or Noah, who have been the last committers to touch this.\n\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"",
"msg_date": "Mon, 18 Mar 2024 19:40:10 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 07:40:10PM +0100, Alvaro Herrera wrote:\n> I enabled the test again and also pushed the changes to dblink,\n> isolationtester and fe_utils (which AFAICS is used by pg_dump,\n\nI recommend adding a libpqsrv_cancel() function to libpq-be-fe-helpers.h, to\nuse from dblink and postgres_fdw. pgxn modules calling PQcancel() from the\nbackend (citus pg_bulkload plproxy pmpp) then have a better chance to adopt\nthe new way.\n\n\n",
"msg_date": "Wed, 20 Mar 2024 19:54:38 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 21 Mar 2024 at 03:54, Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Mar 18, 2024 at 07:40:10PM +0100, Alvaro Herrera wrote:\n> > I enabled the test again and also pushed the changes to dblink,\n> > isolationtester and fe_utils (which AFAICS is used by pg_dump,\n>\n> I recommend adding a libpqsrv_cancel() function to libpq-be-fe-helpers.h, to\n> use from dblink and postgres_fdw. pgxn modules calling PQcancel() from the\n> backend (citus pg_bulkload plproxy pmpp) then have a better chance to adopt\n> the new way.\n\nDone",
"msg_date": "Fri, 22 Mar 2024 09:54:29 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-22, Jelte Fennema-Nio wrote:\n\n> On Thu, 21 Mar 2024 at 03:54, Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Mon, Mar 18, 2024 at 07:40:10PM +0100, Alvaro Herrera wrote:\n> > > I enabled the test again and also pushed the changes to dblink,\n> > > isolationtester and fe_utils (which AFAICS is used by pg_dump,\n> >\n> > I recommend adding a libpqsrv_cancel() function to libpq-be-fe-helpers.h, to\n> > use from dblink and postgres_fdw. pgxn modules calling PQcancel() from the\n> > backend (citus pg_bulkload plproxy pmpp) then have a better chance to adopt\n> > the new way.\n> \n> Done\n\nNice, thanks. I played with it a bit, mostly trying to figure out if\nthe chosen API is usable. I toyed with making it return boolean success\nand the error message as an output argument, because I was nervous about\nwhat'd happen on OOM. But since this is a backend environment, what\nactually happens is that we elog(ERROR) anyway, so we never return a\nNULL error message. So after the detour I think Jelte's API is okay.\n\nI changed it so that the error messages are returned as translated\nphrases, and was bothered by the fact that if errors happen repeatedly,\nthe memory for them might be leaked. Maybe this is fine depending on\nthe caller's memory context, but since it's only at most one string each\ntime, it's quite easy to just keep track of it so that we can release it\non the next call.\n\nI ended up reducing the two PG_TRY blocks to a single one. I see no\nreason to split them up, and this way it looks more legible.\n\nWhat do you think?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)",
"msg_date": "Wed, 27 Mar 2024 16:34:57 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-27, Alvaro Herrera wrote:\n\n> I changed it so that the error messages are returned as translated\n> phrases, and was bothered by the fact that if errors happen repeatedly,\n> the memory for them might be leaked. Maybe this is fine depending on\n> the caller's memory context, but since it's only at most one string each\n> time, it's quite easy to just keep track of it so that we can release it\n> on the next.\n\n(Actually this sounds clever but fails pretty obviously if the caller\ndoes free the string, such as in a memory context reset. So I guess we\nhave to just accept the potential leakage.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)\n\n\n",
"msg_date": "Wed, 27 Mar 2024 19:46:19 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 19:46, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Mar-27, Alvaro Herrera wrote:\n>\n> > I changed it so that the error messages are returned as translated\n> > phrases, and was bothered by the fact that if errors happen repeatedly,\n> > the memory for them might be leaked. Maybe this is fine depending on\n> > the caller's memory context, but since it's only at most one string each\n> > time, it's quite easy to just keep track of it so that we can release it\n> > on the next.\n>\n> (Actually this sounds clever but fails pretty obviously if the caller\n> does free the string, such as in a memory context reset. So I guess we\n> have to just accept the potential leakage.)\n\nYour changes look good, apart from the prverror stuff indeed. If you\nremove the prverror stuff again I think this is ready to commit.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 10:31:30 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, 27 Mar 2024 at 19:27, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I ended up reducing the two PG_TRY blocks to a single one. I see no\n> reason to split them up, and this way it looks more legible.\n\nI definitely agree this looks better. Not sure why I hadn't done that,\nmaybe it wasn't possible in one of the earlier iterations of the API.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 10:33:13 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-28, Jelte Fennema-Nio wrote:\n\n> Your changes look good, apart from the prverror stuff indeed. If you\n> remove the prverror stuff again I think this is ready to commit.\n\nGreat, thanks for looking. Pushed now, I'll be closing the commitfest\nentry shortly.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n\n\n",
"msg_date": "Thu, 28 Mar 2024 11:33:00 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hm, indri failed:\n\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O2 -fno-common -Werror -fvisibility=hidden -bundle -o dblink.dylib dblink.o -L../../src/port -L../../src/common -L../../src/interfaces/libpq -lpq -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.4.sdk -L/opt/local/libexec/llvm-15/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -Wl,-dead_strip_dylibs -Werror -fvisibility=hidden -bundle_loader ../../src/backend/postgres\n\nUndefined symbols for architecture arm64:\n \"_libintl_gettext\", referenced from:\n _libpqsrv_cancel in dblink.o\n _libpqsrv_cancel in dblink.o\nld: symbol(s) not found for architecture arm64\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\nmake[1]: *** [dblink.dylib] Error 1\nmake: *** [all-dblink-recurse] Error 2\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 28 Mar 2024 12:15:08 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-28, Alvaro Herrera wrote:\n\n> Undefined symbols for architecture arm64:\n> \"_libintl_gettext\", referenced from:\n> _libpqsrv_cancel in dblink.o\n> _libpqsrv_cancel in dblink.o\n> ld: symbol(s) not found for architecture arm64\n> clang: error: linker command failed with exit code 1 (use -v to see invocation)\n> make[1]: *** [dblink.dylib] Error 1\n> make: *** [all-dblink-recurse] Error 2\n\nI just removed the _() from the new function. There's not much point in\nwasting more time on this, given that contrib doesn't have translation\nsupport anyway, and we're not using this in libpqwalreceiver.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Crear es tan difícil como ser libre\" (Elsa Triolet)\n\n\n",
"msg_date": "Thu, 28 Mar 2024 13:19:14 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Eh, kestrel has also failed[1], apparently every query after the large\nJOIN that this commit added as test fails with a statement timeout error.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-28%2016%3A01%3A14\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n\n",
"msg_date": "Thu, 28 Mar 2024 17:34:08 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 28 Mar 2024 at 17:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Eh, kestrel has also failed[1], apparently every query after the large\n> JOIN that this commit added as test fails with a statement timeout error.\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-28%2016%3A01%3A14\n\nUgh that's annoying, the RESET is timing out too I guess. That can\nhopefully be easily fixed by changing the new test to:\n\nBEGIN;\nSET LOCAL statement_timeout = '10ms';\nselect count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;\n-- this takes very long\nROLLBACK;\n\n\n",
"msg_date": "Thu, 28 Mar 2024 17:37:50 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-28, Jelte Fennema-Nio wrote:\n\n> Ugh that's annoying, the RESET is timing out too I guess.\n\nHah, you're right, I can reproduce with a smaller timeout, and using SET\nLOCAL works as a fix. If we're doing that, why not reduce the timeout\nto 1ms? We don't need to wait extra 9ms ...\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n",
"msg_date": "Thu, 28 Mar 2024 17:43:29 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 28 Mar 2024 at 17:43, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hah, you're right, I can reproduce with a smaller timeout, and using SET\n> LOCAL works as a fix. If we're doing that, why not reduce the timeout\n> to 1ms? We don't need to wait extra 9ms ...\n\nI think we don't really want to make the timeout too short. Otherwise\nthe query might get cancelled before we push any query down to the\nFDW. I guess that means that for some slow machines even 10ms is not\nenough to make the test do the intended purpose. I'd keep it at 10ms,\nwhich seems long enough for normal systems, while still being pretty\nshort.\n\n\n",
"msg_date": "Thu, 28 Mar 2024 18:13:56 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> On Thu, 28 Mar 2024 at 17:43, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Hah, you're right, I can reproduce with a smaller timeout, and using SET\n>> LOCAL works as a fix. If we're doing that, why not reduce the timeout\n>> to 1ms? We don't need to wait extra 9ms ...\n\n> I think we don't really want to make the timeout too short. Otherwise\n> the query might get cancelled before we push any query down to the\n> FDW. I guess that means that for some slow machines even 10ms is not\n> enough to make the test do the intended purpose. I'd keep it at 10ms,\n> which seems long enough for normal systems, while still being pretty\n> short.\n\nIf the test fails both when the machine is too slow and when it's\ntoo fast, then there's zero hope of making it stable and we should\njust remove it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Mar 2024 13:35:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Mar-28, Tom Lane wrote:\n\n> Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> \n> > I think we don't really want to make the timeout too short. Otherwise\n> > the query might get cancelled before we push any query down to the\n> > FDW. I guess that means that for some slow machines even 10ms is not\n> > enough to make the test do the intended purpose. I'd keep it at 10ms,\n> > which seems long enough for normal systems, while still being pretty\n> > short.\n> \n> If the test fails both when the machine is too slow and when it's\n> too fast, then there's zero hope of making it stable and we should\n> just remove it.\n\nIt doesn't fail when it's too fast -- it's just that it doesn't cover\nthe case we want to cover.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Escucha y olvidarás; ve y recordarás; haz y entenderás\" (Confucio)\n\n\n",
"msg_date": "Thu, 28 Mar 2024 18:53:30 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2024-Mar-28, Tom Lane wrote:\n>> If the test fails both when the machine is too slow and when it's\n>> too fast, then there's zero hope of making it stable and we should\n>> just remove it.\n\n> It doesn't fail when it's too fast -- it's just that it doesn't cover\n> the case we want to cover.\n\nThat's hardly better, because then you think you have test\ncoverage but maybe you don't.\n\nCould we make this test bulletproof by using an injection point?\nIf not, I remain of the opinion that we're better off without it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Mar 2024 14:02:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 28 Mar 2024 at 19:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > It doesn't fail when it's too fast -- it's just that it doesn't cover\n> > the case we want to cover.\n>\n> That's hardly better, because then you think you have test\n> coverage but maybe you don't.\n\nHonestly, that seems quite a lot better. Instead of having randomly\nfailing builds, you have a test that creates coverage 80+% of the\ntime. And that also seems a lot better than having no coverage at all\n(which is what we had for the last 7 years since introduction of\ncancellations to postgres_fdw). It would be good to expand the comment\nin the test though saying that the test might not always cover the\nintended code path, due to timing problems.\n\n> Could we make this test bulletproof by using an injection point?\n> If not, I remain of the opinion that we're better off without it.\n\nPossibly, and if so, I agree that would be better than the currently\nadded test. But I honestly don't feel like spending the time on\ncreating such a test. And given 7 years have passed without someone\nadding any test for this codepath at all, I don't expect anyone else\nwill either.\n\nIf you both feel we're better off without the test, feel free to\nremove it. This was just some small missing test coverage that I\nnoticed while working on this patch, that I thought I'd quickly\naddress. I don't particularly care a lot about the specific test.\n\n\n",
"msg_date": "Fri, 29 Mar 2024 09:17:55 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 09:17:55AM +0100, Jelte Fennema-Nio wrote:\n> On Thu, 28 Mar 2024 at 19:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Could we make this test bulletproof by using an injection point?\n> > If not, I remain of the opinion that we're better off without it.\n> \n> Possibly, and if so, I agree that would be better than the currently\n> added test. But I honestly don't feel like spending the time on\n> creating such a test.\n\nThe SQL test is more representative of real applications, and it's way simpler\nto understand. In general, I prefer 6-line SQL tests that catch a problem 10%\nof the time over injection point tests that catch it 100% of the time. For\nlow detection rate to be exciting, it needs to be low enough to have a serious\nchance of all buildfarm members reporting green for the bad commit. With ~115\nbuildfarm members running in the last day, 0.1% detection rate would have been\nlow enough to bother improving, but 4% would be high enough to call it good.\n\n\n",
"msg_date": "Fri, 29 Mar 2024 15:17:24 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Great, thanks for looking. Pushed now, I'll be closing the commitfest\n> entry shortly.\n\nOn my machine, headerscheck does not like this:\n\n$ src/tools/pginclude/headerscheck --cplusplus\nIn file included from /tmp/headerscheck.4gTaW5/test.cpp:3:\n./src/include/libpq/libpq-be-fe-helpers.h: In function 'char* libpqsrv_cancel(PGconn*, TimestampTz)':\n./src/include/libpq/libpq-be-fe-helpers.h:393:10: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]\n return \"out of memory\";\n ^~~~~~~~~~~~~~~\n./src/include/libpq/libpq-be-fe-helpers.h:421:13: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]\n error = \"cancel request timed out\";\n ^~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe second part of that could easily be fixed by declaring \"error\" as\n\"const char *\". As for the first part, can we redefine the whole\nfunction as returning \"const char *\"? (If not, this coding is very\nquestionable anyway.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Apr 2024 17:29:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Apr-03, Tom Lane wrote:\n\n> On my machine, headerscheck does not like this:\n> \n> $ src/tools/pginclude/headerscheck --cplusplus\n> In file included from /tmp/headerscheck.4gTaW5/test.cpp:3:\n> ./src/include/libpq/libpq-be-fe-helpers.h: In function 'char* libpqsrv_cancel(PGconn*, TimestampTz)':\n> ./src/include/libpq/libpq-be-fe-helpers.h:393:10: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]\n> return \"out of memory\";\n> ^~~~~~~~~~~~~~~\n> ./src/include/libpq/libpq-be-fe-helpers.h:421:13: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]\n> error = \"cancel request timed out\";\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~\n> \n> The second part of that could easily be fixed by declaring \"error\" as\n> \"const char *\". As for the first part, can we redefine the whole\n> function as returning \"const char *\"? (If not, this coding is very\n> questionable anyway.)\n\nYeah, this seems to work and I no longer get that complaint from\nheaderscheck.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 4 Apr 2024 10:45:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, 4 Apr 2024 at 10:45, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Yeah, this seems to work and I no longer get that complaint from\n> headerscheck.\n\npatch LGTM\n\n\n",
"msg_date": "Thu, 4 Apr 2024 10:47:03 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "[ from a week ago ]\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hm, indri failed:\n> ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O2 -fno-common -Werror -fvisibility=hidden -bundle -o dblink.dylib dblink.o -L../../src/port -L../../src/common -L../../src/interfaces/libpq -lpq -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.4.sdk -L/opt/local/libexec/llvm-15/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -Wl,-dead_strip_dylibs -Werror -fvisibility=hidden -bundle_loader ../../src/backend/postgres\n\n> Undefined symbols for architecture arm64:\n> \"_libintl_gettext\", referenced from:\n> _libpqsrv_cancel in dblink.o\n> _libpqsrv_cancel in dblink.o\n> ld: symbol(s) not found for architecture arm64\n> clang: error: linker command failed with exit code 1 (use -v to see invocation)\n> make[1]: *** [dblink.dylib] Error 1\n> make: *** [all-dblink-recurse] Error 2\n\nHaving just fixed the same issue for test_json_parser, I now realize\nwhat's going on there: dblink's link command doesn't actually mention\nany of the external libraries that we might need, such as libintl.\nYou can get away with that on some platforms, but not macOS.\nIt would probably be possible to fix that if anyone cared to.\nI'm not sufficiently excited about it to do so right now --- as\nyou say, we don't support translation in contrib anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Apr 2024 14:06:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hello hackers,\n\n30.03.2024 01:17, Noah Misch wrote:\n> On Fri, Mar 29, 2024 at 09:17:55AM +0100, Jelte Fennema-Nio wrote:\n>> On Thu, 28 Mar 2024 at 19:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Could we make this test bulletproof by using an injection point?\n>>> If not, I remain of the opinion that we're better off without it.\n>> Possibly, and if so, I agree that would be better than the currently\n>> added test. But I honestly don't feel like spending the time on\n>> creating such a test.\n> The SQL test is more representative of real applications, and it's way simpler\n> to understand. In general, I prefer 6-line SQL tests that catch a problem 10%\n> of the time over injection point tests that catch it 100% of the time. For\n> low detection rate to be exciting, it needs to be low enough to have a serious\n> chance of all buildfarm members reporting green for the bad commit. With ~115\n> buildfarm members running in the last day, 0.1% detection rate would have been\n> low enough to bother improving, but 4% would be high enough to call it good.\n\nAs a recent buildfarm failure on olingo (which tests asan-enabled builds)\n[1] shows, that test can still fail:\n70/70 postgresql:postgres_fdw-running / postgres_fdw-running/regress ERROR 278.67s exit status 1\n\n@@ -2775,6 +2775,7 @@\n SET LOCAL statement_timeout = '10ms';\n select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long\n ERROR: canceling statement due to statement timeout\n+WARNING: could not get result of cancel request due to timeout\n COMMIT;\n\n(from the next run we can see normal duration:\n\"postgres_fdw-running/regress OK 6.30s \")\n\nI reproduced the failure with an asan-enabled build on a slowed-down VM\nand as far as I can see, it's caused by the following condition in\nProcessInterrupts():\n /*\n * If we are reading a command from the client, just ignore the cancel\n * request --- sending an extra error message won't accomplish\n * 
anything. Otherwise, go ahead and throw the error.\n */\n if (!DoingCommandRead)\n {\n LockErrorCleanup();\n ereport(ERROR,\n(errcode(ERRCODE_QUERY_CANCELED),\n errmsg(\"canceling statement due to user request\")));\n }\n\nI think this failure can be reproduced easily (without asan/slowing down)\nwith this modification:\n@@ -4630,6 +4630,7 @@ PostgresMain(const char *dbname, const char *username)\n idle_session_timeout_enabled = false;\n }\n\n+if (rand() % 10 == 0) pg_usleep(10000);\n /*\n * (5) disable async signal conditions again.\n *\n\nRunning this test in a loop (for ((i=1;i<=100;i++)); do \\\necho \"iteration $i\"; make -s check -C contrib/postgres_fdw/ || break; \\\ndone), I get:\n...\niteration 56\n# +++ regress check in contrib/postgres_fdw +++\n# initializing database system by copying initdb template\n# using temp instance on port 55312 with PID 991332\nok 1 - postgres_fdw 20093 ms\n1..1\n# All 1 tests passed.\niteration 57\n# +++ regress check in contrib/postgres_fdw +++\n# initializing database system by copying initdb template\n# using temp instance on port 55312 with PID 992152\nnot ok 1 - postgres_fdw 62064 ms\n1..1\n...\n--- .../contrib/postgres_fdw/expected/postgres_fdw.out 2024-06-22 02:52:42.991574907 +0000\n+++ .../contrib/postgres_fdw/results/postgres_fdw.out 2024-06-22 14:43:43.949552927 +0000\n@@ -2775,6 +2775,7 @@\n SET LOCAL statement_timeout = '10ms';\n select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long\n ERROR: canceling statement due to statement timeout\n+WARNING: could not get result of cancel request due to timeout\n COMMIT;\n\nI also came across another failure of the test:\n@@ -2774,7 +2774,7 @@\n BEGIN;\n SET LOCAL statement_timeout = '10ms';\n select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long\n-ERROR: canceling statement due to statement timeout\n+ERROR: canceling statement due to user request\n COMMIT;\n\nwhich is reproduced with a sleep 
added here:\n@@ -1065,6 +1065,7 @@ exec_simple_query(const char *query_string)\n */\n parsetree_list = pg_parse_query(query_string);\n+pg_usleep(11000);\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-06-20%2009%3A52%3A04\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 22 Jun 2024 18:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Sat, 22 Jun 2024 at 17:00, Alexander Lakhin <exclusion@gmail.com> wrote:\n> @@ -2775,6 +2775,7 @@\n> SET LOCAL statement_timeout = '10ms';\n> select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long\n> ERROR: canceling statement due to statement timeout\n> +WARNING: could not get result of cancel request due to timeout\n> COMMIT;\n\nAs you describe it, this problem occurs when the cancel request is\nprocessed by the foreign server, before the query is actually\nreceived. And postgres then (rightly) ignores the cancel request. I'm\nnot sure if the existing test is easily changeable to fix this. The\nonly thing that I can imagine works in practice is increasing the\nstatement_timeout, e.g. to 100ms.\n\n> I also came across another failure of the test:\n> @@ -2774,7 +2774,7 @@\n> BEGIN;\n> SET LOCAL statement_timeout = '10ms';\n> select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long\n> -ERROR: canceling statement due to statement timeout\n> +ERROR: canceling statement due to user request\n> COMMIT;\n>\n> which is reproduced with a sleep added here:\n> @@ -1065,6 +1065,7 @@ exec_simple_query(const char *query_string)\n> */\n> parsetree_list = pg_parse_query(query_string);\n> +pg_usleep(11000);\n\nAfter investigating, I realized this actually exposes a bug in our\nstatement timeout logic. It has nothing to do with postgres_fdw and\nreproduces with any regular postgres query too. Attached is a patch\nthat fixes this issue. This one should probably be backported.",
"msg_date": "Mon, 24 Jun 2024 00:59:48 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "24.06.2024 01:59, Jelte Fennema-Nio wrote:\n> On Sat, 22 Jun 2024 at 17:00, Alexander Lakhin <exclusion@gmail.com> wrote:\n>> @@ -2775,6 +2775,7 @@\n>> SET LOCAL statement_timeout = '10ms';\n>> select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long\n>> ERROR: canceling statement due to statement timeout\n>> +WARNING: could not get result of cancel request due to timeout\n>> COMMIT;\n> As you describe it, this problem occurs when the cancel request is\n> processed by the foreign server, before the query is actually\n> received. And postgres then (rightly) ignores the cancel request. I'm\n> not sure if the existing test is easily changeable to fix this. The\n> only thing that I can imagine works in practice is increasing the\n> statement_timeout, e.g. to 100ms.\n\nI'd just like to add that that one original query assumes several \"remote\"\nqueries (see the attached excerpt from postmaster.log with verbose logging\nenabled).\n\nBest regards,\nAlexander",
"msg_date": "Tue, 25 Jun 2024 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, 25 Jun 2024 at 07:00, Alexander Lakhin <exclusion@gmail.com> wrote:\n> I'd just like to add that that one original query assumes several \"remote\"\n> queries (see the attached excerpt from postmaster.log with verbose logging\n> enabled).\n\nNice catch! All those EXPLAIN queries are definitely not intentional,\nand likely to greatly increase the likelihood of this flakiness.\n\nAttached is a patch that fixes that by moving the test before enabling\nuse_remote_estimate on any of the foreign tables, as well as\nincreasing the statement_timeout to 100ms.\n\nMy expectation is that that should remove all failure cases. If it\ndoesn't, I think our best bet is removing the test again.",
"msg_date": "Tue, 25 Jun 2024 10:24:37 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 05:50:48PM +0100, Alvaro Herrera wrote:\n> On 2024-Mar-12, Jelte Fennema-Nio wrote:\n> > On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > Here's a last one for the cfbot.\n> > \n> > Thanks for committing the first 3 patches btw.\n> \n> Thanks, I included it.\n\nPGcancelConn *\nPQcancelCreate(PGconn *conn)\n{\n...\noom_error:\n\tconn->status = CONNECTION_BAD;\n\tlibpq_append_conn_error(cancelConn, \"out of memory\");\n\treturn (PGcancelConn *) cancelConn;\n}\n\nShouldn't that be s/conn->status/cancelConn->status/?\n\n\n",
"msg_date": "Sun, 30 Jun 2024 12:00:40 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Sun, 30 Jun 2024 at 21:00, Noah Misch <noah@leadboat.com> wrote:\n> Shouldn't that be s/conn->status/cancelConn->status/?\n\nUgh yes, I think this was a copy paste error. See attached patch 0003\nto fix this (rest of the patches are untouched from previous\nrevision).",
"msg_date": "Mon, 1 Jul 2024 00:38:46 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Mon, 1 Jul 2024 at 00:38, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> Ugh yes, I think this was a copy paste error. See attached patch 0003\n> to fix this (rest of the patches are untouched from previous\n> revision).\n\nAlvaro committed 0003, which caused cfbot to think a rebase is\nnecessary. Attached should solve that.",
"msg_date": "Wed, 10 Jul 2024 14:10:55 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hello,\n\n25.06.2024 11:24, Jelte Fennema-Nio wrote:\n> My expectation is that that should remove all failure cases. If it\n> doesn't, I think our best bet is removing the test again.\n\nIt looks like that test eventually showed what could be called a virtue.\nPlease take a look at a recent BF failure [1]:\ntimed out after 10800 secs\n...\n# +++ regress install-check in contrib/postgres_fdw +++\n# using postmaster on /home/andrew/bf/root/tmp/buildfarm-e2ahpQ, port 5878\n\nSo the postgres_fdw test hanged for several hours while running on the\nCygwin animal lorikeet.\n\nI've managed to reproduce this issue in my Cygwin environment by running\nthe postgres_fdw test in a loop (10 iterations are enough to get the\ndescribed effect). And what I'm seeing is that a query-cancelling backend\nis stuck inside pgfdw_xact_callback() -> pgfdw_abort_cleanup() ->\npgfdw_cancel_query() -> pgfdw_cancel_query_begin() -> libpqsrv_cancel() ->\nWaitLatchOrSocket() -> WaitEventSetWait() -> WaitEventSetWaitBlock() ->\npoll().\n\nThe timeout value (approximately 30 seconds), which is passed to poll(),\nis effectively ignored by this call — the waiting lasts for unlimited time.\n\nThis definitely is caused by 2466d6654. 
(I applied the test change from that\ncommit to 2466d6654~1 and saw no issue when running the same test in a\nloop.)\n\nWith gdb attached to a hanging backend, I see the following stack trace:\n#0 0x00007ffb7f70d5e4 in ntdll!ZwWaitForSingleObject () from /cygdrive/c/Windows/SYSTEM32/ntdll.dll\n#1 0x00007ffb7d2e920e in WaitForSingleObjectEx () from /cygdrive/c/Windows/System32/KERNELBASE.dll\n#2 0x00007ffb5ce78862 in fhandler_socket_wsock::evaluate_events (this=0x800126968, event_mask=50, events=@0x7ffffb208: \n0, erase=erase@entry=false)\n at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/fhandler/socket_inet.cc:268\n#3 0x00007ffb5cdef0f5 in peek_socket (me=0xa001a43c0) at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:1771\n#4 0x00007ffb5cdf211e in select_stuff::poll (this=this@entry=0x7ffffb300, readfds=0x7ffffb570, \nreadfds@entry=0x800000000, writefds=0x7ffffb560, writefds@entry=0x7ffffb5c0, exceptfds=0x7ffffb550,\n exceptfds@entry=0x7ffb5cdf2c97 <cygwin_select(int, fd_set*, fd_set*, fd_set*, timeval*)+71>) at \n/usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:554\n#5 0x00007ffb5cdf257e in select (maxfds=maxfds@entry=45, readfds=0x800000000, writefds=0x7ffffb5c0, \nexceptfds=0x7ffb5cdf2c97 <cygwin_select(int, fd_set*, fd_set*, fd_set*, timeval*)+71>, us=4308570016,\n us@entry=29973000) at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:204\n#6 0x00007ffb5cdf2927 in pselect (maxfds=45, readfds=0x7ffffb570, writefds=0x7ffffb560, exceptfds=0x7ffffb550, \nto=<optimized out>, to@entry=0x7ffffb500, set=<optimized out>, set@entry=0x0)\n at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:120\n#7 0x00007ffb5cdf2c97 in cygwin_select (maxfds=<optimized out>, readfds=<optimized out>, writefds=<optimized out>, \nexceptfds=<optimized out>, to=0x7ffffb5b0)\n at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:147\n#8 0x00007ffb5cddc112 in poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at 
\n/usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/poll.cc:83\n...\nand socket_inet.c:268 ([2]) indeed contains an infinite wait call\n(LOCK_EVENTS; / WaitForSingleObject (wsock_mtx, INFINITE)).\n\nSo it looks like a Cygwin bug, but maybe something should be done on our side\ntoo, at least to prevent such lorikeet failures.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-07-12%2010%3A05%3A27\n[2] https://www.cygwin.com/cgit/newlib-cygwin/tree/winsup/cygwin/fhandler/socket_inet.cc?h=cygwin-3.5.3\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 16 Jul 2024 14:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Jul-16, Alexander Lakhin wrote:\n\n> I've managed to reproduce this issue in my Cygwin environment by running\n> the postgres_fdw test in a loop (10 iterations are enough to get the\n> described effect). And what I'm seeing is that a query-cancelling backend\n> is stuck inside pgfdw_xact_callback() -> pgfdw_abort_cleanup() ->\n> pgfdw_cancel_query() -> pgfdw_cancel_query_begin() -> libpqsrv_cancel() ->\n> WaitLatchOrSocket() -> WaitEventSetWait() -> WaitEventSetWaitBlock() ->\n> poll().\n> \n> The timeout value (approximately 30 seconds), which is passed to poll(),\n> is effectively ignored by this call — the waiting lasts for unlimited time.\n\nUgh. I tried to follow what's going on in that cygwin code, but I gave\nup pretty quickly. It depends on a mutex, but I didn't see the mutex\nbeing defined or initialized anywhere.\n\n> So it looks like a Cygwin bug, but maybe something should be done on our side\n> too, at least to prevent such lorikeet failures.\n\nI don't know what else we can do other than remove the test.\n\nMaybe we can disable this test specifically on Cygwin. We could do that\nby creating a postgres_fdw_cancel.sql file, with the current output for\nall platforms, and a \"SELECT version() ~ 'cygwin' AS skip_test\" query,\nas we do for encoding tests and such.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n",
"msg_date": "Tue, 16 Jul 2024 17:08:52 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Jul-16, Alvaro Herrera wrote:\n\n> Maybe we can disable this test specifically on Cygwin. We could do that\n> by creating a postgres_fdw_cancel.sql file, with the current output for\n> all platforms, and a \"SELECT version() ~ 'cygwin' AS skip_test\" query,\n> as we do for encoding tests and such.\n\nSomething like this.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 16 Jul 2024 17:22:25 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Wed, Jul 17, 2024 at 3:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Ugh. I tried to follow what's going on in that cygwin code, but I gave\n> up pretty quickly. It depends on a mutex, but I didn't see the mutex\n> being defined or initialized anywhere.\n\nhttps://github.com/cygwin/cygwin/blob/cygwin-3.5.3/winsup/cygwin/fhandler/socket_inet.cc#L217C1-L217C77\n\nNot obvious how it'd be deadlocking (?), though... it's hard to see\nhow anything between LOCK_EVENTS and UNLOCK_EVENTS could escape/return\nearly. (Something weird going on with signal handlers? I can't\nimagine where one would call poll() though).\n\n\n",
"msg_date": "Wed, 17 Jul 2024 12:05:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hello Thomas,\n\n17.07.2024 03:05, Thomas Munro wrote:\n> On Wed, Jul 17, 2024 at 3:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Ugh. I tried to follow what's going on in that cygwin code, but I gave\n>> up pretty quickly. It depends on a mutex, but I didn't see the mutex\n>> being defined or initialized anywhere.\n> https://github.com/cygwin/cygwin/blob/cygwin-3.5.3/winsup/cygwin/fhandler/socket_inet.cc#L217C1-L217C77\n>\n> Not obvious how it'd be deadlocking (?), though... it's hard to see\n> how anything between LOCK_EVENTS and UNLOCK_EVENTS could escape/return\n> early. (Something weird going on with signal handlers? I can't\n> imagine where one would call poll() though).\n\nI've simplified the repro to the following:\necho \"\n-- setup foreign server \"loopback\" --\n\nCREATE TABLE t1(i int);\nCREATE FOREIGN TABLE ft1 (i int) SERVER loopback OPTIONS (table_name 't1');\nCREATE FOREIGN TABLE ft2 (i int) SERVER loopback OPTIONS (table_name 't1');\n\nINSERT INTO t1 SELECT i FROM generate_series(1, 100000) g(i);\n\" | psql\n\ncat << 'EOF' | psql\nSelect pg_sleep(10);\nSET statement_timeout = '10ms';\nSELECT 'SELECT count(*) FROM ft1 CROSS JOIN ft2;' FROM generate_series(1, 100)\n\\gexec\nEOF\n\nI've attached strace (with --mask=0x251, per [1]) to the query-cancelling\nbackend and got strace.log (see in attachment), while observing:\nERROR: canceling statement due to statement timeout\n...\nERROR: canceling statement due to statement timeout\n-- total 14 lines, then the process hanged --\n-- I interrupted it several seconds later --\n\nAs far as I can see (having analyzed a number of runs), the hanging occurs\nwhen some itimer-related activity happens before \"peek_socket\" in this\nevent sequence:\n[main] postgres {pid} select_stuff::wait: res after verify 0\n[main] postgres {pid} select_stuff::wait: returning 0\n[main] postgres {pid} select: sel.wait returns 0\n[main] postgres {pid} peek_socket: read_ready: 0, write_ready: 1, 
except_ready: 0\n\n(See the last occurrence of the sequence in the log.)\n\n[1] https://cygwin.com/cygwin-ug-net/strace.html\n\nBest regards,\nAlexander",
"msg_date": "Wed, 17 Jul 2024 22:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 7:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> As far as I can see (having analyzed a number of runs), the hanging occurs\n> when some itimer-related activity happens before \"peek_socket\" in this\n> event sequence:\n> [main] postgres {pid} select_stuff::wait: res after verify 0\n> [main] postgres {pid} select_stuff::wait: returning 0\n> [main] postgres {pid} select: sel.wait returns 0\n> [main] postgres {pid} peek_socket: read_ready: 0, write_ready: 1, except_ready: 0\n>\n> (See the last occurrence of the sequence in the log.)\n\nYeah, right, there's a lot going on between those two lines from the\n[main] thread. There are messages from helper threads [itimer], [sig]\nand [socksel]. At a guess, [socksel] might be doing extra secret\ncommunication over our socket in order to exchange SO_PEERCRED\ninformation, huh, is that always there? Seems worth filing a bug\nreport.\n\nFor the record, I know of one other occasional test failure on Cygwin:\nit randomly panics in SnapBuildSerialize(). While I don't expect\nthere to be any users of PostgreSQL on Cygwin (it was unusably broken\nbefore we refactored the postmaster in v16), that one is interesting\nbecause (1) it also happen on native Windows builds, and (2) at least\none candidate fix[1] sounds like it would speed up logical replication\non all operating systems.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BJ4jSFk%3D-hdoZdcx%2Bp7ru6xuipzCZY-kiKoDc2FjsV7g%40mail.gmail.com#afb5dc4208cc0776a060145f9571dec2\n\n\n",
"msg_date": "Thu, 18 Jul 2024 13:06:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On 2024-Jul-16, Alvaro Herrera wrote:\n\n> On 2024-Jul-16, Alvaro Herrera wrote:\n> \n> > Maybe we can disable this test specifically on Cygwin. We could do that\n> > by creating a postgres_fdw_cancel.sql file, with the current output for\n> > all platforms, and a \"SELECT version() ~ 'cygwin' AS skip_test\" query,\n> > as we do for encoding tests and such.\n> \n> Something like this.\n\nI have pushed this \"fix\", so we shouldn't see this failure anymore.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 22 Jul 2024 13:26:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hello Alvaro,\n\nLet me show you another related anomaly, which drongo kindly discovered\nrecently: [1]. That test failed with:\n SELECT dblink_cancel_query('dtest1');\n- dblink_cancel_query\n----------------------\n- OK\n+ dblink_cancel_query\n+--------------------------\n+ cancel request timed out\n (1 row)\n\nI've managed to reproduce this when running 20 dblink tests in parallel,\nand with extra logging added (see attached) I've got:\n...\n2024-08-28 10:17:12.949 PDT [8236:204] pg_regress/dblink LOG: statement: SELECT dblink_cancel_query('dtest1');\n!!!PQcancelPoll|8236| conn->status: 2\n!!!PQcancelPoll|8236| conn->status: 3\n!!!PQconnectPoll|8236| before pqPacketSend(..., &cancelpacket, ...)\n!!!pqPacketSend|8236| before pqFlush\n!!!pqsecure_raw_write|8236| could not send data to server: Socket is not connected (0x00002749/10057)\n!!!pqPacketSend|8236| after pqFlush, STATUS_OK\n!!!PQconnectPoll|8236| after pqPacketSend, STATUS_OK\n2024-08-28 10:17:12.950 PDT [5548:7] pg_regress LOG: statement: select * from foo where f1 < 3\n2024-08-28 10:17:12.951 PDT [8692:157] DEBUG: forked new backend, pid=4644 socket=5160\n2024-08-28 10:17:12.973 PDT [4644:1] [unknown] LOG: connection received: host=::1 port=55073\n2024-08-28 10:17:12.973 PDT [4644:2] [unknown] LOG: !!!BackendInitialize| before ProcessSSLStartup()\n!!!PQcancelPoll|8236| conn->status: 4\n!!!PQcancelPoll|8236| conn->status: 4\n2024-08-28 10:17:24.060 PDT [1436:1] DEBUG: snapshot of 0+0 running transaction ids (lsn 0/194C4E0 oldest xid 780 \nlatest complete 779 next xid 780)\n!!!PQcancelPoll|8236| conn->status: 4\n2024-08-28 10:17:42.951 PDT [4644:3] [unknown] LOG: !!!BackendInitialize| ProcessSSLStartup() returned -1\n2024-08-28 10:17:42.951 PDT [4644:4] [unknown] DEBUG: shmem_exit(0): 0 before_shmem_exit callbacks to make\n...\n\nThus, pqsecure_raw_write(), called via PQcancelPoll() -> PQconnectPoll() ->\npqPacketSend() -> pqFlush) -> pqSendSome() -> pqsecure_write(), returned\nthe 
WSAENOTCONN error, but it wasn't noticed at upper levels.\nConsequently, the cancelling backend waited for the cancel packet that was\nnever sent.\n\nThe first commit, that I could reproduce this test failure on, is 2466d6654.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-26%2021%3A35%3A04\n\nBest regards,\nAlexander",
"msg_date": "Wed, 28 Aug 2024 21:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> Let me show you another related anomaly, which drongo kindly discovered\n> recently: [1]. That test failed with:\n> SELECT dblink_cancel_query('dtest1');\n> - dblink_cancel_query\n> ----------------------\n> - OK\n> + dblink_cancel_query\n> +--------------------------\n> + cancel request timed out\n> (1 row)\n\nWhile we're piling on, has anyone noticed that *non* Windows buildfarm\nanimals are also failing this test pretty frequently? The most recent\noccurrence is at [1], and it looks like this:\n\ndiff -U3 /home/bf/bf-build/mylodon/HEAD/pgsql/contrib/postgres_fdw/expected/query_cancel.out /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/postgres_fdw/regress/results/query_cancel.out\n--- /home/bf/bf-build/mylodon/HEAD/pgsql/contrib/postgres_fdw/expected/query_cancel.out\t2024-07-22 11:09:50.638133878 +0000\n+++ /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/postgres_fdw/regress/results/query_cancel.out\t2024-08-30 06:28:01.971083945 +0000\n@@ -17,4 +17,5 @@\n SET LOCAL statement_timeout = '10ms';\n select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long\n ERROR: canceling statement due to statement timeout\n+WARNING: could not get result of cancel request due to timeout\n COMMIT;\n\nI trawled the buildfarm database for other occurrences of \"could not\nget result of cancel request\" since this test went in. I found 34\nof them (see attachment), and none that weren't the timeout flavor.\n\nMost of the failing machines are not especially slow, so even though\nthe hard-wired 30 second timeout that's being used here feels a little\nunder-engineered, I'm not sure that arranging to raise it would help.\nMy spidey sense feels that there's some actual bug here, but it's hard\nto say where. 
mylodon's postmaster log confirms that the 30 seconds\ndid elapse, and that there wasn't anything much else going on:\n\n2024-08-30 06:27:31.926 UTC client backend[3668381] pg_regress/query_cancel ERROR: canceling statement due to statement timeout\n2024-08-30 06:27:31.926 UTC client backend[3668381] pg_regress/query_cancel STATEMENT: select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;\n2024-08-30 06:28:01.946 UTC client backend[3668381] pg_regress/query_cancel WARNING: could not get result of cancel request due to timeout\n\nAny thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-30%2006%3A25%3A46",
"msg_date": "Fri, 30 Aug 2024 15:21:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, Aug 30, 2024, 21:21 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> While we're piling on, has anyone noticed that *non* Windows buildfarm\n> animals are also failing this test pretty frequently?\n>\n<snip>\n\nAny thoughts?\n>\n\nYes. Fixes are here (see the ~10 emails above in the thread for details):\nhttps://www.postgresql.org/message-id/CAGECzQQO8Cn2Rw45xUYmvzXeSSsst7-bcruuzUfMbGQc3ueSdw@mail.gmail.com\n\nThey don't apply anymore after the change to move this test to a dedicated\nfile. It shouldn't be too hard to update those patches though. I'll try to\ndo that in a few weeks when I'm back behind my computer. But feel free to\ncommit something earlier.",
"msg_date": "Fri, 30 Aug 2024 21:49:44 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> On Fri, Aug 30, 2024, 21:21 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> While we're piling on, has anyone noticed that *non* Windows buildfarm\n>> animals are also failing this test pretty frequently?\n\n> Yes. Fixes are here (see the ~10 emails above in the thread for details):\n> https://www.postgresql.org/message-id/CAGECzQQO8Cn2Rw45xUYmvzXeSSsst7-bcruuzUfMbGQc3ueSdw@mail.gmail.com\n\nHmm. I'm not convinced that 0001 is an actual *fix*, but it should\nat least reduce the frequency of occurrence a lot, which'd help.\n\nI don't want to move the test case to where you propose, because\nthat's basically not sensible. But can't we avoid remote estimates\nby just cross-joining ft1 to itself, and not using the tables for\nwhich remote estimate is enabled?\n\nI think 0002 is probably outright wrong, or at least the change to\ndisable_statement_timeout is. Once we get to that, we don't want\nto throw a timeout error any more, even if an interrupt was received\njust before it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Aug 2024 16:11:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "I wrote:\n> Hmm. I'm not convinced that 0001 is an actual *fix*, but it should\n> at least reduce the frequency of occurrence a lot, which'd help.\n\nAfter enabling log_statement = all to verify what commands are being\nsent to the remote, I realized that there's a third thing this patch\ncan do to stabilize matters: issue a regular remote query inside the\ntest transaction, before we enable the timeout. This will ensure\nthat we've dealt with configure_remote_session() and started a\nremote transaction, so that there aren't extra round trips happening\nfor that while the clock is running.\n\nPushed with that addition and some comment-tweaking. We'll see\nwhether that actually makes things more stable, but I don't think\nit could make it worse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Aug 2024 16:55:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Fri, 30 Aug 2024 at 22:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> > On Fri, Aug 30, 2024, 21:21 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> While we're piling on, has anyone noticed that *non* Windows buildfarm\n> >> animals are also failing this test pretty frequently?\n>\n> > Yes. Fixes are here (see the ~10 emails above in the thread for details):\n> > https://www.postgresql.org/message-id/CAGECzQQO8Cn2Rw45xUYmvzXeSSsst7-bcruuzUfMbGQc3ueSdw@mail.gmail.com\n>\n> Hmm. I'm not convinced that 0001 is an actual *fix*, but it should\n> at least reduce the frequency of occurrence a lot, which'd help.\n\nI also don't think it's an actual fix, but I couldn't think of a way\nto fix this. And since this only happens if you cancel right at the\nstart of a postgres_fdw query, I don't think it's worth investing too\nmuch time on a fix.\n\n> I don't want to move the test case to where you propose, because\n> that's basically not sensible. But can't we avoid remote estimates\n> by just cross-joining ft1 to itself, and not using the tables for\n> which remote estimate is enabled?\n\nYeah that should work too (I just saw your next email, where you said\nit's committed like this).\n\n> I think 0002 is probably outright wrong, or at least the change to\n> disable_statement_timeout is. Once we get to that, we don't want\n> to throw a timeout error any more, even if an interrupt was received\n> just before it.\n\nThe disable_statement_timeout change was not the part of that patch\nthat was necessary for stable output, only the change in the first\nbranch of enable_statement_timeout was necessary. The reason being\nthat enable_statement_timeout is called multiple times for a query,\nbecause start_xact_command is called multiple times in\nexec_simple_query. 
The change to disable_statement_timeout just seemed\nlike the logical extension of that change, especially since there was\nbasically a verbatim copy of disable_statement_timeout in the second\nbranch of enable_statement_timeout.\n\nTo make sure I understand your suggestion correctly: Are you saying\nyou would want to completely remove the outstanding interrupt if it\nwas caused by the statement_timeout when disable_statement_timeout is\ncalled? Because I agree that would probably make sense, but that\nsounds like a more impactful change. But the current behaviour seems\nstrictly worse than the behaviour proposed in the patch to me, because\ncurrently the backend would still be interrupted, but the error would\nindicate a reason for the interrupt that is simply incorrect i.e. it\nwill say it was cancelled due to a user request, which never happened.\n\n\n",
"msg_date": "Fri, 30 Aug 2024 23:24:58 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "Hello Tom,\n\n30.08.2024 23:55, Tom Lane wrote:\n> Pushed with that addition and some comment-tweaking. We'll see\n> whether that actually makes things more stable, but I don't think\n> it could make it worse.\n\nThank you for fixing that issue!\n\nI've tested your fix with the modification I proposed upthread:\n idle_session_timeout_enabled = false;\n }\n+if (rand() % 10 == 0) pg_usleep(10000);\n /*\n * (5) disable async signal conditions again.\n\nand can confirm that the issue is gone. On 8749d850f~1, the test failed\non iterations 3, 3. 12 for me, but on current REL_17_STABLE, 100 test\niterations succeeded.\n\nAt the same time, mylodon confirmed my other finding at [1] and failed [2] with:\n-ERROR: canceling statement due to statement timeout\n+ERROR: canceling statement due to user request\n\n[1] https://www.postgresql.org/message-id/4db099c8-4a52-3cc4-e970-14539a319466%40gmail.com\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-30%2023%3A03%3A31\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 31 Aug 2024 07:04:04 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
},
{
"msg_contents": "On Sat, 31 Aug 2024 at 06:04, Alexander Lakhin <exclusion@gmail.com> wrote:\n> At the same time, mylodon confirmed my other finding at [1] and failed [2] with:\n> -ERROR:  canceling statement due to statement timeout\n> +ERROR:  canceling statement due to user request\n>\n> [1] https://www.postgresql.org/message-id/4db099c8-4a52-3cc4-e970-14539a319466%40gmail.com\n> [2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-30%2023%3A03%3A31\n\nInterestingly that's a different test that failed, but it looks like\nit failed for the same reason that my 0002 patch fixes.\n\nI also took a quick look at the code again, and completely removing\nthe outstanding interrupt seems hard to do. Because there's no way to\nknow if there were multiple causes for the interrupt, i.e. someone\ncould have pressed ctrl+c as well and we wouldn't want to undo that.\n\nSo I think the solution in 0002, while debatable if strictly correct,\nis the only fix that we can easily do. Also I personally believe the\nbehaviour resulting from 0002 is totally correct: The new behaviour\nwould be that if a timeout occurred, right before it was disabled or\nreset, but the interrupt was not processed yet, then we process that\ntimeout as normal. That seems totally reasonable behaviour to me from\nthe perspective of an end user: You get a timeout error when the\ntimeout occurred before the timeout was disabled/reset.\n\n\n",
"msg_date": "Sat, 31 Aug 2024 09:08:51 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add non-blocking version of PQcancel"
}
] |
[
{
"msg_contents": "\nWhile testing a buildfarm module to automate running headerscheck and\ncpluspluscheck, I encountered a bunch of errors like this:\n\nJan 12 09:35:57 In file included from\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/port/atomics.h:70,\nJan 12 09:35:57 from\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/storage/lwlock.h:21,\nJan 12 09:35:57 from\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/storage/lock.h:23,\nJan 12 09:35:57 from\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/storage/proc.h:21,\nJan 12 09:35:57 from\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/storage/shm_mq.h:18,\nJan 12 09:35:57 from\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/libpq/pqmq.h:17,\nJan 12 09:35:57 from /tmp/cpluspluscheck.16q7jo/test.cpp:3:\nJan 12 09:35:57\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/port/atomics/arch-x86.h:\nIn function ‘bool pg_atomic_test_set_flag_impl(volatile pg_atomic_flag*)’:\nJan 12 09:35:57\n/home/andrew/bf/root/HEAD/pgsql.build/../pgsql/src/include/port/atomics/arch-x86.h:143:23:\nwarning: ISO C++17 does not allow ‘register’ storage class specifier\n[-Wregister]\nJan 12 09:35:57 143 | register char _res = 1;\nJan 12 09:35:57 | ^~~~\n\n\nthere are similar complaints about s_lock.h.\n\n\nThe compiler is: g++ (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)\n\n\nDo we need to add -Wnoregister to the CXXFLAGS?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 12 Jan 2022 10:24:34 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "cpluspluscheck failure"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI've adapted the work that Konstantina did for pl/julia as part of her\nGSOC project to add an example of handling triggers to plsample. Which\nwas based from pl/tcl and pl/perl.\n\nOne aspect that I'm not sure about is whether the example should be\nduplicating code (as it is now) for keeping an example contained within\na single function. The only reason I can come up with is to try to read\nthrough an example with minimal jumping around.\n\nHoping this is a good start.\n\nRegards,\nMark",
"msg_date": "Wed, 12 Jan 2022 08:33:19 -0800",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": true,
"msg_subject": "trigger example for plsample"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world:  tested, passed\nImplements feature:       tested, passed\nSpec compliant:           not tested\nDocumentation:            not tested\n\nThis patch is straightforward, does what it says, and passes the tests.\n\nRegarding the duplication of code between plsample_func_handler and\nplsample_trigger_handler, perhaps that's for the best for now, as 3554 in\nthe same commitfest also touches plsample, so merge conflicts may be\nminimized by not doing more invasive refactoring.\n\nThat would leave low-hanging fruit for a later patch that could refactor\nplsample to reduce the duplication (maybe adding a validator at the same\ntime? That would also duplicate some of the checks in the existing handlers.)\n\nI am not sure that structuring the trigger handler with separate compile and\nexecute steps is worth the effort for a simple example like plsample. The main\nplsample_func_handler is not so structured.\n\nIt's likely that many real PLs will have some notion of compilation separate from\nexecution. But those will also have logic to do the compilation only once, and\nsomewhere to cache the result of that for reuse across calls, and those kinds of\ndetails might make plsample's basic skeleton more complex than needed.\n\nI know that in just looking at expected/plsample.out, I was a little distracted by\nseeing multiple \"compile\" messages for the same trigger function in the same\nsession and wondering why that was.\n\nSo maybe it would be simpler and less distracting to assume that the PL targeted\nby plsample is one that just has a simple interpreter that works from the source text.\n\nRegards,\n-Chap\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Fri, 25 Feb 2022 18:39:39 +0000",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "On Fri, Feb 25, 2022 at 06:39:39PM +0000, Chapman Flack wrote:\n> This patch is straightforward, does what it says, and passes the tests.\n> \n> Regarding the duplication of code between plsample_func_handler and\n> plsample_trigger_handler, perhaps that's for the best for now, as 3554 in\n> the same commitfest also touches plsample, so merge conflicts may be\n> minimized by not doing more invasive refactoring.\n> \n> That would leave low-hanging fruit for a later patch that could refactor\n> plsample to reduce the duplication (maybe adding a validator at the same\n> time? That would also duplicate some of the checks in the existing handlers.)\n> \n> I am not sure that structuring the trigger handler with separate compile and\n> execute steps is worth the effort for a simple example like plsample. The main\n> plsample_func_handler is not so structured.\n> \n> It's likely that many real PLs will have some notion of compilation separate from\n> execution. But those will also have logic to do the compilation only once, and\n> somewhere to cache the result of that for reuse across calls, and those kinds of\n> details might make plsample's basic skeleton more complex than needed.\n> \n> I know that in just looking at expected/plsample.out, I was a little distracted by\n> seeing multiple \"compile\" messages for the same trigger function in the same\n> session and wondering why that was.\n> \n> So maybe it would be simpler and less distracting to assume that the PL targeted\n> by plsample is one that just has a simple interpreter that works from the source text.\n\nI've attached v2, which reduces the output:\n\n* Removing the notices for the text body, and the \"compile\" message.\n* Replaced the notice for \"compile\" message with a comment as a\n placeholder for where a compiling code or checking a cache may go.\n* Reducing the number of rows inserted into the table, thus reducing\n the number of notice messages about which code path is taken.\n\n\nI 
think that reduces the repetitiveness of the output...\n\nRegards,\nMark",
"msg_date": "Wed, 2 Mar 2022 12:12:01 -0800",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "On 03/02/22 15:12, Mark Wong wrote:\n\n> I've attached v2, which reduces the output:\n> \n> * Removing the notices for the text body, and the \"compile\" message.\n> * Replaced the notice for \"compile\" message with a comment as a\n> placeholder for where a compiling code or checking a cache may go.\n> * Reducing the number of rows inserted into the table, thus reducing\n> the number of notice messages about which code path is taken.\n\nI think the simplifying assumption of a simple interpreted language allows\na lot more of this code to go away. I mean more or less that whole first\nPG_TRY...PG_END_TRY block, which could be replaced with a comment saying\nsomething like\n\n The source text may be augmented here, such as by wrapping it as the\n body of a function in the target language, prefixing a parameter list\n with names like TD_name, TD_relid, TD_table_name, TD_table_schema,\n TD_event, TD_when, TD_level, TD_NEW, TD_OLD, and args, using whatever\n types in the target language are convenient. The augmented text can be\n cached in a longer-lived memory context, or, if the target language uses\n a compilation step, that can be done here, caching the result of the\n compilation.\n\nThat would leave only the later PG_TRY block where the function gets\n\"executed\". That could stay largely as is, but should probably also have\na comment within it, something like\n\n Here the function (the possibly-augmented source text, or the result\n of compilation if the target language uses such a step) should be\n executed, after binding these values from the TriggerData struct to\n the expected parameters.\n\nThat should make the example shorter and clearer, and preserve the output\ntested by the regression test. 
Trying to show much more than that involves\nassumptions about what the PL's target language syntax looks like, how its\nexecution engine works and parameters are bound, and so on, and that is\nlikely to just be distracting, IMV.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 10 Mar 2022 18:36:44 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 06:36:44PM -0500, Chapman Flack wrote:\n> On 03/02/22 15:12, Mark Wong wrote:\n> \n> > I've attached v2, which reduces the output:\n> > \n> > * Removing the notices for the text body, and the \"compile\" message.\n> > * Replaced the notice for \"compile\" message with a comment as a\n> > placeholder for where a compiling code or checking a cache may go.\n> > * Reducing the number of rows inserted into the table, thus reducing\n> > the number of notice messages about which code path is taken.\n> \n> I think the simplifying assumption of a simple interpreted language allows\n> a lot more of this code to go away. I mean more or less that whole first\n> PG_TRY...PG_END_TRY block, which could be replaced with a comment saying\n> something like\n> \n> The source text may be augmented here, such as by wrapping it as the\n> body of a function in the target language, prefixing a parameter list\n> with names like TD_name, TD_relid, TD_table_name, TD_table_schema,\n> TD_event, TD_when, TD_level, TD_NEW, TD_OLD, and args, using whatever\n> types in the target language are convenient. The augmented text can be\n> cached in a longer-lived memory context, or, if the target language uses\n> a compilation step, that can be done here, caching the result of the\n> compilation.\n> \n> That would leave only the later PG_TRY block where the function gets\n> \"executed\". That could stay largely as is, but should probably also have\n> a comment within it, something like\n> \n> Here the function (the possibly-augmented source text, or the result\n> of compilation if the target language uses such a step) should be\n> executed, after binding these values from the TriggerData struct to\n> the expected parameters.\n> \n> That should make the example shorter and clearer, and preserve the output\n> tested by the regression test. 
Trying to show much more than that involves\n> assumptions about what the PL's target language syntax looks like, how its\n> execution engine works and parameters are bound, and so on, and that is\n> likely to just be distracting, IMV.\n\nI think I've applied all of these suggestions and attached a new patch.\n\nRegards,\nMark",
"msg_date": "Wed, 6 Apr 2022 13:44:28 -0700",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 5:44 PM Mark Wong <markwkm@gmail.com> wrote:\n>\n> On Thu, Mar 10, 2022 at 06:36:44PM -0500, Chapman Flack wrote:\n> > On 03/02/22 15:12, Mark Wong wrote:\n> >\n> > > I've attached v2, which reduces the output:\n> > >\n> > > * Removing the notices for the text body, and the \"compile\" message.\n> > > * Replaced the notice for \"compile\" message with a comment as a\n> > > placeholder for where a compiling code or checking a cache may go.\n> > > * Reducing the number of rows inserted into the table, thus reducing\n> > > the number of notice messages about which code path is taken.\n> >\n> > I think the simplifying assumption of a simple interpreted language\nallows\n> > a lot more of this code to go away. I mean more or less that whole first\n> > PG_TRY...PG_END_TRY block, which could be replaced with a comment saying\n> > something like\n> >\n> > The source text may be augmented here, such as by wrapping it as the\n> > body of a function in the target language, prefixing a parameter list\n> > with names like TD_name, TD_relid, TD_table_name, TD_table_schema,\n> > TD_event, TD_when, TD_level, TD_NEW, TD_OLD, and args, using whatever\n> > types in the target language are convenient. The augmented text can be\n> > cached in a longer-lived memory context, or, if the target language\nuses\n> > a compilation step, that can be done here, caching the result of the\n> > compilation.\n> >\n> > That would leave only the later PG_TRY block where the function gets\n> > \"executed\". 
That could stay largely as is, but should probably also have\n> > a comment within it, something like\n> >\n> >   Here the function (the possibly-augmented source text, or the result\n> >   of compilation if the target language uses such a step) should be\n> >   executed, after binding these values from the TriggerData struct to\n> >   the expected parameters.\n> >\n> > That should make the example shorter and clearer, and preserve the\noutput\n> > tested by the regression test. Trying to show much more than that\ninvolves\n> > assumptions about what the PL's target language syntax looks like, how\nits\n> > execution engine works and parameters are bound, and so on, and that is\n> > likely to just be distracting, IMV.\n>\n> I think I've applied all of these suggestions and attached a new patch.\n>\n\nCool... I also have a look into the code. To me everything is also ok!!!\n\nRegards,\n\n--\nFabrízio de Royes Mello",
"msg_date": "Wed, 6 Apr 2022 18:12:08 -0300",
"msg_from": "Fabrízio de Royes Mello <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "On 2022-04-06 16:44, Mark Wong wrote:\n> I think I've applied all of these suggestions and attached a new patch.\n\nThat looks good to me, though I wonder about the pfree(source).\nIn the simplest case of a PL that uses no advance compilation or\naugmentation step, the Code Execution block might naturally refer\nto source, so perhaps the example boilerplate shouldn't include\na pfree that needs to be removed in that case.\n\nIn fact, I wonder about the need for any retail pfree()s here. Those\nadded in this patch are the only ones in plsample.c. They are small\nallocations, and maybe it would both streamline the example to leave\nout the pfree calls, and be an illustration of best practice in letting\nthe memory context machinery handle all the deallocation at once, where\nthere isn't a special need to free something, like an especially large\nallocation, at retail.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 07 Apr 2022 10:30:13 -0400",
"msg_from": "chap@anastigmatix.net",
"msg_from_op": false,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "On Thu, Apr 07, 2022 at 10:30:13AM -0400, chap@anastigmatix.net wrote:\n> On 2022-04-06 16:44, Mark Wong wrote:\n> > I think I've applied all of these suggestions and attached a new patch.\n> \n> That looks good to me, though I wonder about the pfree(source).\n> In the simplest case of a PL that uses no advance compilation or\n> augmentation step, the Code Execution block might naturally refer\n> to source, so perhaps the example boilerplate shouldn't include\n> a pfree that needs to be removed in that case.\n> \n> In fact, I wonder about the need for any retail pfree()s here. Those\n> added in this patch are the only ones in plsample.c. They are small\n> allocations, and maybe it would both streamline the example to leave\n> out the pfree calls, and be an illustration of best practice in letting\n> the memory context machinery handle all the deallocation at once, where\n> there isn't a special need to free something, like an especially large\n> allocation, at retail.\n\nThanks, I've attached v4.\n\nI've removed all of the pfree()'s and added an elog(DEBUG1) for source\nto quiet a compiler warning about source's lack of use. :) (Was that a\ngood way?)\n\nRegards,\nMark",
"msg_date": "Thu, 7 Apr 2022 09:15:18 -0700",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nv4 looks good to me.\r\n\r\nI don't think this requires any documentation change.\r\nThe patch simply adds trigger handling example code to plsample.c,\r\nand plsample is already mentioned in the documentation on writing\r\na PL handler.\r\n\r\nRegards,\r\n-Chap\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 07 Apr 2022 21:37:38 +0000",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> v4 looks good to me.\n\nPushed with very minor editorialization. Mainly, I undid the\ndecision to stop printing the function source text, on the\ngrounds that (1) it falsified the comment immediately above,\nand (2) if you have to print it anyway to avoid compiler warnings,\nyou're just creating confusing inconsistency between the two\nhandler functions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Apr 2022 18:29:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: trigger example for plsample"
},
{
"msg_contents": "On Thu, Apr 07, 2022 at 06:29:53PM -0400, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n> > v4 looks good to me.\n> \n> Pushed with very minor editorialization. Mainly, I undid the\n> decision to stop printing the function source text, on the\n> grounds that (1) it falsified the comment immediately above,\n> and (2) if you have to print it anyway to avoid compiler warnings,\n> you're just creating confusing inconsistency between the two\n> handler functions.\n\nSounds good to me, thanks!\n\nRegards,\nMark\n\n\n",
"msg_date": "Fri, 8 Apr 2022 17:34:00 +0000",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: trigger example for plsample"
}
] |
[
{
"msg_contents": "Hello Postgres friends,\n\nI've got a question about the wire protocol; the relevant text in the docs\nseems a bit ambiguous to me. If the processing of a Sync message fails\n(e.g. because the commit of the current transaction fails), is the backend\nallowed to respond with an ErrorResponse, in addition to the ReadyForQuery\nmessage? Or, does the backend swallow the error, and return only the\nReadyForQuery (I hope not).\n\nThe docs\n<https://www.postgresql.org/docs/14/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY>\n say:\n\"\"\"\nAt completion of each series of extended-query messages, the frontend\nshould issue a Sync message. This parameterless message causes the backend\nto close the current transaction if it's not inside a BEGIN/COMMIT\ntransaction block (“close” meaning to commit if no error, or roll back if\nerror). Then a ReadyForQuery response is issued. The purpose of Sync is to\nprovide a resynchronization point for error recovery. When an error is\ndetected while processing any extended-query message, the backend issues\nErrorResponse, then reads and discards messages until a Sync is reached,\nthen issues ReadyForQuery and returns to normal message processing. (But\nnote that no skipping occurs if an error is detected while processing Sync\n— this ensures that there is one and only one ReadyForQuery sent for each\nSync.)\n\"\"\"\n\nThis paragraph acknowledges that an error can be \"detected\" while\nprocessing a Sync, but one reading of it might suggest that the only\nresponse from a Sync is a single ReadyForQueryMessage.\n\nThanks!\n\n- Andrei",
"msg_date": "Wed, 12 Jan 2022 12:52:50 -0500",
"msg_from": "Andrei Matei <andreimatei1@gmail.com>",
"msg_from_op": true,
"msg_subject": "is ErrorResponse possible on Sync?"
},
{
"msg_contents": ">Or, does the backend swallow the error, and return only the ReadyForQuery\n(I hope not).\n\nWhat is your backend version?\n\nHere's a well-known case when the backend did swallow the error:\n\"Error on failed COMMIT\"\nhttps://www.postgresql.org/message-id/b9fb50dc-0f6e-15fb-6555-8ddb86f4aa71%40postgresfriends.org\n\nI don't remember if the behavior has been fixed or not.\nThe expected behavior was \"commit\" returned \"rollback\" status without any\nerror.\n\nVladimir",
"msg_date": "Wed, 12 Jan 2022 21:03:17 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is ErrorResponse possible on Sync?"
},
{
"msg_contents": "Andrei Matei <andreimatei1@gmail.com> writes:\n> I've got a question about the wire protocol; the relevant text in the docs\n> seems a bit ambiguous to me. If the processing of a Sync message fails\n> (e.g. because the commit of the current transaction fails), is the backend\n> allowed to respond with an ErrorResponse, in addition to the ReadyForQuery\n> message? Or, does the backend swallow the error, and return only the\n> ReadyForQuery (I hope not).\n\nUh ... I don't think Sync itself can fail. Any ErrorResponse you see\nthere is really from failure of some prior command. The Sync is really\ndelimiting how much stuff you'd like to skip in case of a failure.\nBasically this is to allow pipelining of commands, with the ability to\ndiscard later commands if an earlier one fails.\n\nBut in any case, no, Sync would not suppress an error message if\none is needed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jan 2022 13:05:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: is ErrorResponse possible on Sync?"
},
{
"msg_contents": "Thanks!\n\nI work on CockroachDB - which is wire-compatible with Postgres - so I'm\ninterested in what the server can and cannot do.\n\n\n> Uh ... I don't think Sync itself can fail. Any ErrorResponse you see\n> there is really from failure of some prior command.\n\n\nHmm, this got me curious. If Sync itself cannot fail, then what is this\nsentence really saying: \"This parameterless message (ed. Sync) causes the\nbackend to close the current transaction if it's not inside a BEGIN/COMMIT\ntransaction block (“close” meaning to commit if no error, or roll back if\nerror).\" ?\nThis seems to say that, outside of BEGIN/END, the transaction is committed\nat Sync time (i.e. if the Sync is never sent, nothing is committed).\nPresumably, committing a transaction can fail even if no\nprevious command/statement failed, right?\n\n\n\n> The Sync is really\n> delimiting how much stuff you'd like to skip in case of a failure.\n> Basically this is to allow pipelining of commands, with the ability to\n> discard later commands if an earlier one fails.\n>\n> But in any case, no, Sync would not suppress an error message if\n> one is needed.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 12 Jan 2022 13:18:53 -0500",
"msg_from": "Andrei Matei <andreimatei1@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: is ErrorResponse possible on Sync?"
},
{
"msg_contents": "On 2022-Jan-12, Andrei Matei wrote:\n\n> If Sync itself cannot fail, then what is this\n> sentence really saying: \"This parameterless message (ed. Sync) causes the\n> backend to close the current transaction if it's not inside a BEGIN/COMMIT\n> transaction block (“close” meaning to commit if no error, or roll back if\n> error).\" ?\n> This seems to say that, outside of BEGIN/END, the transaction is committed\n> at Sync time (i.e. if the Sync is never sent, nothing is committed).\n> Presumably, committing a transaction can fail even if no\n> previous command/statement failed, right?\n\nA deferred trigger can cause a failure at COMMIT time for which no\nprevious error was reported.\n\nalvherre=# create table t (a int unique deferrable initially deferred);\nCREATE TABLE\nalvherre=# insert into t values (1);\nINSERT 0 1\nalvherre=# begin;\nBEGIN\nalvherre=*# insert into t values (1);\nINSERT 0 1\nalvherre=*# commit;\nERROR: duplicate key value violates unique constraint \"t_a_key\"\nDETALLE: Key (a)=(1) already exists.\n\nI'm not sure if you can cause this to explode with just a Sync message, though.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 12 Jan 2022 16:39:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: is ErrorResponse possible on Sync?"
},
{
"msg_contents": "> Hmm, this got me curious. If Sync itself cannot fail, then what is this\n> sentence really saying: \"This parameterless message (ed. Sync) causes the\n> backend to close the current transaction if it's not inside a BEGIN/COMMIT\n> transaction block (“close” meaning to commit if no error, or roll back if\n> error).\" ?\n> This seems to say that, outside of BEGIN/END, the transaction is committed\n> at Sync time (i.e. if the Sync is never sent, nothing is committed).\n\nYes, if you do not send Sync and terminate the session, then the\ntransaction will not be committed.\n\nFE=> Parse(stmt=\"\", query=\"INSERT INTO t1 VALUES(2)\")\nFE=> Bind(stmt=\"\", portal=\"\")\nFE=> Execute(portal=\"\")\nFE=> Terminate\n\nAfter this, I don't see the row (2) in table t1.\n\n> Presumably, committing a transaction can fail even if no\n> previous command/statement failed, right?\n\nRight. Alvaro gave an excellent example.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 13 Jan 2022 08:51:01 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: is ErrorResponse possible on Sync?"
},
{
"msg_contents": "I used your example and tried it with prepared statements. I captured the\ntraffic with\nWireshark. My client sent Bind/Execute/Sync messages, and PostgreSQL 14 sent\nback BindComplete/CommandComplete/ErrorResponse messages, followed by\nReadyForQuery after that.\n\nSo yes, it looks like ErrorResponse is a valid response for Sync.\n\nOn Tue, Jan 18, 2022 at 6:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Jan-12, Andrei Matei wrote:\n>\n> > If Sync itself cannot fail, then what is this\n> > sentence really saying: \"This parameterless message (ed. Sync) causes the\n> > backend to close the current transaction if it's not inside a\n> BEGIN/COMMIT\n> > transaction block (“close” meaning to commit if no error, or roll back if\n> > error).\" ?\n> > This seems to say that, outside of BEGIN/END, the transaction is\n> committed\n> > at Sync time (i.e. if the Sync is never sent, nothing is committed).\n> > Presumably, committing a transaction can fail even if no\n> > previous command/statement failed, right?\n>\n> A deferred trigger can cause a failure at COMMIT time for which no\n> previous error was reported.\n>\n> alvherre=# create table t (a int unique deferrable initially deferred);\n> CREATE TABLE\n> alvherre=# insert into t values (1);\n> INSERT 0 1\n> alvherre=# begin;\n> BEGIN\n> alvherre=*# insert into t values (1);\n> INSERT 0 1\n> alvherre=*# commit;\n> ERROR: duplicate key value violates unique constraint \"t_a_key\"\n> DETALLE: Key (a)=(1) already exists.\n>\n> I'm not sure if you can cause this to explode with just a Sync message,\n> though.\n>\n> --\n> Álvaro Herrera Valdivia, Chile —\n> https://www.EnterpriseDB.com/\n>\n>\n>\n>",
"msg_date": "Tue, 18 Jan 2022 18:33:20 -0500",
"msg_from": "Rafi Shamim <rafiss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is ErrorResponse possible on Sync?"
}
] |
[
{
"msg_contents": "\nFor some considerable time the recovery tests have been at best flaky on\nWindows, and at worst disastrous (i.e. they can hang rather than just\nfail). It's a problem I worked around on my buildfarm animals by\ndisabling the tests, hoping to find time to get back to analysing the\nproblem. But now we are seeing failures on the cfbot too (e.g.\nhttps://cirrus-ci.com/task/5860692694663168 and\nhttps://cirrus-ci.com/task/5316745152954368 ) so I think we need to\nspend some effort on finding out what's going on here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 12 Jan 2022 14:34:00 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Windows vs recovery tests"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> For some considerable time the recovery tests have been at best flaky on\n> Windows, and at worst disastrous (i.e. they can hang rather than just\n> fail).\n\nHow long is \"some considerable time\"? I'm wondering if this isn't\nthe same issue under discussion in\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jan 2022 16:15:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Windows vs recovery tests"
},
{
"msg_contents": "Hello.\n\nIt is also could be related -\nhttps://www.postgresql.org/message-id/flat/20220112112425.pgzymqcgdy62e7m3%40jrouhaud#097b54a539ac3091ca4e4ed8ce9ab89c\n(both Windows and Linux cases.\n\nBest regards,\nMichail.",
"msg_date": "Thu, 13 Jan 2022 00:37:44 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows vs recovery tests"
},
{
"msg_contents": "\nOn 1/12/22 16:15, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> For some considerable time the recovery tests have been at best flaky on\n>> Windows, and at worst disastrous (i.e. they can hang rather than just\n>> fail).\n> How long is \"some considerable time\"? I'm wondering if this isn't\n> the same issue under discussion in\n>\n> https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n>\n> \t\t\t\n\n\n\nmany months - this isn't a new thing.\n\n\nI'm going to set up a system where I run the test in a fairly tight loop\nand see if I can find out more.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 12 Jan 2022 17:22:37 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Windows vs recovery tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-12 14:34:00 -0500, Andrew Dunstan wrote:\n> For some considerable time the recovery tests have been at best flaky on\n> Windows, and at worst disastrous (i.e. they can hang rather than just\n> fail). It's a problem I worked around on my buildfarm animals by\n> disabling the tests, hoping to find time to get back to analysing the\n> problem. But now we are seeing failures on the cfbot too (e.g.\n> https://cirrus-ci.com/task/5860692694663168 and\n> https://cirrus-ci.com/task/5316745152954368 ) so I think we need to\n> spend some effort on finding out what's going on here.\n\nI'm somewhat certain that this is caused by assertions or aborts hanging with\na GUI popup, e.g. due to a check in the CRT.\n\nI saw these kind of hangs a lot in the aio development tree before I merged\nthe changes to change error/abort handling on windows. Before the recent CI\nchanges cfbot ran windows tests without assertions, which - besides just\nrunning fewer tests - explains having fewer such hang before, because there's\nmore sources of such error popups in the debug CRT.\n\nIt'd be nice if somebody could look at the patch and discussion in\nhttps://www.postgresql.org/message-id/20211005193033.tg4pqswgvu3hcolm%40alap3.anarazel.de\n\n\nThe debugging information for the cirrus-ci tasks has a list of\nprocesses. E.g. for https://cirrus-ci.com/task/5860692694663168 there's\n\n 1 agent.exe\n 1 CExecSvc.exe\n 1 csrss.exe\n 1 fontdrvhost.exe\n 1 lsass.exe\n 1 msdtc.exe\n 1 psql.exe\n 1 services.exe\n 1 wininit.exe\n 9 cmd.exe\n 9 perl.exe\n 9 svchost.exe\n 49 postgres.exe\nprocesses.\n\nSo we know that some tests were actually still in progress... It's\nparticularly interesting that there's a psql process still hanging around...\n\n\nBefore I \"simplified\" that away, the CI patch ran all tests with a shorter\nindividual timeout than the overall CI timeout, so we'd see error logs\netc. Perhaps that was a mistake to remove. 
IIRC I did something like\n\n\"C:\\Program Files\\Git\\usr\\bin\\timeout.exe\" -v --kill-after=35m 30m perl path/to/vcregress.pl ...\n\nPerhaps worth re-adding?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jan 2022 15:58:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Windows vs recovery tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-12 15:58:26 -0800, Andres Freund wrote:\n> On 2022-01-12 14:34:00 -0500, Andrew Dunstan wrote:\n> > For some considerable time the recovery tests have been at best flaky on\n> > Windows, and at worst disastrous (i.e. they can hang rather than just\n> > fail). It's a problem I worked around on my buildfarm animals by\n> > disabling the tests, hoping to find time to get back to analysing the\n> > problem. But now we are seeing failures on the cfbot too (e.g.\n> > https://cirrus-ci.com/task/5860692694663168 and\n> > https://cirrus-ci.com/task/5316745152954368 ) so I think we need to\n> > spend some effort on finding out what's going on here.\n> \n> I'm somewhat certain that this is caused by assertions or aborts hanging with\n> a GUI popup, e.g. due to a check in the CRT.\n\nOh, that was only about https://cirrus-ci.com/task/5860692694663168 not\nhttps://cirrus-ci.com/task/5316745152954368\n\nLooking through the recent recovery failures that were just on windows, I see\nthree different \"classes\" of recovery test failures:\n\n1) Tests sometimes never finish, resulting in CI timing out\n2) Tests sometimes finish, but t/001_stream_rep.pl fails\n3) Tests fail with patch specific issues (e.g. 36/2096, 36/3461, 36/3459)\n\n From the cases I looked the failures in 1) always have a successful\nt/001_stream_rep.pl. This makes me think that we're likely at two separate\ntypes of problems?\n\n\nOne might think that\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/3464\nconflicts with the above grouping. But all but the currently last failure were\ndue a compiler warning in an earlier version of the patch.\n\n\nThere's one interesting patch that also times out just on windows, albeit in\nanother test group:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/2096\n\nThis IMO looks likely to be a bug in psql introduced by that patch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jan 2022 18:25:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Windows vs recovery tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-12 18:25:26 -0800, Andres Freund wrote:\n> There's one interesting patch that also times out just on windows, albeit in\n> another test group:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/36/2096\n> \n> This IMO looks likely to be a bug in psql introduced by that patch.\n\nvcregress doesn't say which tests it's about to run unfortunately, but\ncomparing a successful run (on another branch) says that the test running\nafter pgbench are the psql tests.\n\n\nI pushed a branch to my github repository containing cfbot's commit and one\nthat runs the psql tests in isolation, under a timeout... Which predictably\nfailed. But at least we see the logs...\n\nhttps://cirrus-ci.com/task/6723083204558848?logs=psql_test_tcp#L15\n\nbased on the log files it looks like psql's 001_basic test did run\n\n# Test clean handling of unsupported replication command responses\npsql_like(\n\t$node,\n\t'handling of unexpected PQresultStatus',\n\t'START_REPLICATION 0/0',\n\tundef, qr/unexpected PQresultStatus: 8$/);\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6723083204558848/log/src/bin/psql/tmp_check/log/001_basic_main.log\n2022-01-13 03:28:45.973 GMT [604][walsender] [001_basic.pl][3/0:0] STATEMENT: START_REPLICATION 0/0\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6723083204558848/tap/src/bin/psql/tmp_check/log/regress_log_001_basic\n\nthe last log entry in tap log is\n\nok 23 - \\help with argument: stdout matches\n\n\nSo it looks like psql is hanging somewhere after that. I assume with an error\npopup that nobody can click on :/.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jan 2022 20:03:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Windows vs recovery tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-12 20:03:14 -0800, Andres Freund wrote:\n> So it looks like psql is hanging somewhere after that. I assume with an error\n> popup that nobody can click on :/.\n\nNot quite. Psql is actually just logging output in an endless loop. I\nconnected with cdb.exe.\n\nkP:\n\n00000000`007fd3c8 00007ffc`0d00f13a ntdll!NtWriteFile+0x14\n00000000`007fd3d0 00007ffc`03978ec3 KERNELBASE!WriteFile+0x7a\n00000000`007fd440 00007ffc`03979d21 ucrtbased!write_text_ansi_nolock(\n int fh = 0n2,\n char * buffer = 0x00000000`007febb0 \" ???\",\n unsigned int buffer_size = 1)+0x183\n00000000`007fe8f0 00007ffc`039798a7 ucrtbased!_write_nolock(\n int fh = 0n2,\n void * buffer = 0x00000000`007febb0,\n unsigned int buffer_size = 1,\n class __crt_cached_ptd_host * ptd = 0x00000000`007fef40)+0x451\n00000000`007fea80 00007ffc`03920e1d ucrtbased!_write_internal(\n int fh = 0n2,\n void * buffer = 0x00000000`007febb0,\n unsigned int size = 1,\n class __crt_cached_ptd_host * ptd = 0x00000000`007fef40)+0x377\n00000000`007feb20 00007ffc`0392090e ucrtbased!write_buffer_nolock<char>(\n char c = 0n32 ' ',\n class __crt_stdio_stream stream = class __crt_stdio_stream,\n class __crt_cached_ptd_host * ptd = 0x00000000`007fef40)+0x27d\n00000000`007febb0 00007ffc`03921242 ucrtbased!common_flush_and_write_nolock<char>(\n int c = 0n32,\n class __crt_stdio_stream stream = class __crt_stdio_stream,\n class __crt_cached_ptd_host * ptd = 0x00000000`007fef40)+0x22e\n00000000`007fec20 00007ffc`038ddf5a ucrtbased!__acrt_stdio_flush_and_write_narrow_nolock(\n int c = 0n32,\n struct _iobuf * stream = 0x00007ffc`03a27ce0,\n class __crt_cached_ptd_host * ptd = 0x00000000`007fef40)+0x32\n00000000`007fec60 00007ffc`038dd5a3 ucrtbased!_fwrite_nolock_internal(\n void * buffer = 0x00000000`007ff020,\n unsigned int64 element_size = 1,\n unsigned int64 element_count = 7,\n struct _iobuf * public_stream = 0x00007ffc`03a27ce0,\n class __crt_cached_ptd_host * ptd = 
0x00000000`007fef40)+0x79a\n00000000`007fed60 00007ffc`038dd426 ucrtbased!<lambda_26974eb511f701c600fccfa2a97a8e1b>::operator()(void)+0x73\n00000000`007fedd0 00007ffc`038dd4a8 ucrtbased!__crt_seh_guarded_call<unsigned __int64>::operator()<<lambda_a2589f19c\n515cac03caf6db9c38355e9>,<lambda_26974eb511f701c600fccfa2a97a8e1b> &,<lambda_ad9ce2f38261e34e8a422b9cc35dfe8d> >(\n class __acrt_lock_stream_and_call::__l2::<lambda_a2589f19c515cac03caf6db9c38355e9> * setup = 0x0\n0000000`007fee58,\n class _fwrite_internal::__l2::<lambda_26974eb511f701c600fccfa2a97a8e1b> * action = 0x00000000`00\n7feec0,\n class __acrt_lock_stream_and_call::__l2::<lambda_ad9ce2f38261e34e8a422b9cc35dfe8d> * cleanup = 0\nx00000000`007fee50)+0x36\n00000000`007fee10 00007ffc`038dd72d ucrtbased!__acrt_lock_stream_and_call<<lambda_26974eb511f701c600fccfa2a97a8e1b>\n>(\n struct _iobuf * stream = 0x00007ffc`03a27ce0,\n class _fwrite_internal::__l2::<lambda_26974eb511f701c600fccfa2a97a8e1b> * action = 0x00000000`00\n7feec0)+0x58\n00000000`007fee70 00007ffc`038de046 ucrtbased!_fwrite_internal(\n void * buffer = 0x00000000`007ff020,\n unsigned int64 size = 1,\n unsigned int64 count = 7,\n struct _iobuf * stream = 0x00007ffc`03a27ce0,\n class __crt_cached_ptd_host * ptd = 0x00000000`007fef40)+0x15d\n00000000`007fef00 00000001`4004a639 ucrtbased!fwrite(\n void * buffer = 0x00000000`007ff020,\n unsigned int64 size = 1,\n unsigned int64 count = 7,\n struct _iobuf * stream = 0x00007ffc`03a27ce0)+0x56\n00000000`007fef90 00000001`4004a165 psql!flushbuffer(\n struct PrintfTarget * target = 0x00000000`007feff8)+0x59\n00000000`007fefd0 00000001`4004a1e6 psql!pg_vfprintf(\n struct _iobuf * stream = 0x00007ffc`03a27ce0,\n char * fmt = 0x00000001`40094268 \"error: \",\n char * args = 0x00000000`007ff4a0 \"@???\")+0xa5\n00000000`007ff450 00000001`40045962 psql!pg_fprintf(\n struct _iobuf * stream = 0x00007ffc`03a27ce0,\n char * fmt = 0x00000001`40094268 \"error: \")+0x36\n00000000`007ff490 00000001`40045644 
psql!pg_log_generic_v(\n pg_log_level level = PG_LOG_ERROR (0n4),\n char * fmt = 0x00000001`40062e90 \"unexpected PQresultStatus: %d\",\n char * ap = 0x00000000`007ff540 \"???\")+0x302\n00000000`007ff4f0 00000001`4000ef1f psql!pg_log_generic(\n pg_log_level level = PG_LOG_ERROR (0n4),\n char * fmt = 0x00000001`40062e90 \"unexpected PQresultStatus: %d\")+0x34\n00000000`007ff530 00000001`4000e794 psql!AcceptResult(\n struct pg_result * result = 0x00000000`0015af90,\n bool show_error = false)+0x9f\n00000000`007ff580 00000001`4000c8fe psql!SendQueryAndProcessResults(\n char * query = 0x00000000`00107570 \"START_REPLICATION 0/0\",\n double * pelapsed_msec = 0x00000000`007ff718,\n bool is_watch = false,\n struct printQueryOpt * opt = 0x00000000`00000000,\n struct _iobuf * printQueryFout = 0x00000000`00000000,\n bool * tx_ended = 0x00000000`007ff6b3)+0x1a4\n00000000`007ff680 00000001`40024104 psql!SendQuery(\n char * query = 0x00000000`00107570 \"START_REPLICATION 0/0\")+0x42e\n00000000`007ff750 00000001`40001524 psql!MainLoop(\n struct _iobuf * source = 0x00007ffc`03a27c30)+0xf84\n00000000`007ff890 00000001`40032903 psql!process_file(\n char * filename = 0x00000001`400618e0 \"<stdin>\",\n bool use_relative_path = false)+0x274\n00000000`007ffcf0 00000001`400503f9 psql!main(\n int argc = 0n8,\n char ** argv = 0x00000000`0012a750)+0xc43\n00000000`007ffe10 00000001`4005034e psql!invoke_main(void)+0x39\n00000000`007ffe60 00000001`4005020e psql!__scrt_common_main_seh(void)+0x12e\n00000000`007ffed0 00000001`4005046e psql!__scrt_common_main(void)+0xe\n00000000`007fff00 00007ffc`0d2a7974 psql!mainCRTStartup(\n void * __formal = 0x00000000`002e6000)+0xe\n00000000`007fff30 00007ffc`0fe1a2f1 KERNEL32!BaseThreadInitThunk+0x14\n00000000`007fff60 00000000`00000000 ntdll!RtlUserThreadStart+0x21\n\nbp psql!pg_log_generic\n\n0:000> k3\nChild-SP RetAddr Call Site\n00000000`007ff528 00000001`4000ef1f psql!pg_log_generic [C:\\cirrus\\src\\common\\logging.c @ 198]\n00000000`007ff530 
00000001`4000e794 psql!AcceptResult+0x9f [C:\\cirrus\\src\\bin\\psql\\common.c @ 385]\n00000000`007ff580 00000001`4000c8fe psql!SendQueryAndProcessResults+0x1a4 [C:\\cirrus\\src\\bin\\psql\\common.c @ 1163]\n0:000> g\nBreakpoint 0 hit\npsql!pg_log_generic:\n00000001`40045610 4889542410 mov qword ptr [rsp+10h],rdx ss:00000000`007ff538=0000000100000006\n0:000> k3\nChild-SP RetAddr Call Site\n00000000`007ff528 00000001`4000ef1f psql!pg_log_generic [C:\\cirrus\\src\\common\\logging.c @ 198]\n00000000`007ff530 00000001`4000e794 psql!AcceptResult+0x9f [C:\\cirrus\\src\\bin\\psql\\common.c @ 385]\n00000000`007ff580 00000001`4000c8fe psql!SendQueryAndProcessResults+0x1a4 [C:\\cirrus\\src\\bin\\psql\\common.c @ 1163]\n\n\nAh, I see the bug. It's a use-after-free introduced in the patch:\n\nSendQueryAndProcessResults(const char *query, double *pelapsed_msec,\n\tbool is_watch, const printQueryOpt *opt, FILE *printQueryFout, bool *tx_ended)\n\n...\n\t/* first result */\n\tresult = PQgetResult(pset.db);\n\n\twhile (result != NULL)\n\n...\n\t\tif (!AcceptResult(result, false))\n\t\t{\n...\n\t\t\tClearOrSaveResult(result);\n\t\t\tsuccess = false;\n\n\t\t\t/* and switch to next result */\n\t\t\tresult_status = PQresultStatus(result);\n\t\t\tif (result_status == PGRES_COPY_BOTH ||\n\t\t\t\tresult_status == PGRES_COPY_OUT ||\n\t\t\t\tresult_status == PGRES_COPY_IN)\n\nSo we called ClearOrSaveResult() with did a PQclear(), and then we go and call\nPQresultStatus().\n\n\nSo this really is unrelated to CI. I'll mention this message in the other\nthread.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jan 2022 21:41:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Windows vs recovery tests"
}
] |
[
{
"msg_contents": "After attempting to use gin and gist indexes for our queries that run against citext columns, our team has come up with the following to make our queries run from 2 mins to 25ms:\n\nCREATE EXTENSION pg_trgm;\nCREATE EXTENSION btree_gin; --may not be needed, checking\n\nCREATE OPERATOR CLASS gin_trgm_ops_ci_new\nFOR TYPE citext USING gin\nAS\nOPERATOR 1 % (text, text),\nFUNCTION 1 btint4cmp (int4, int4),\nFUNCTION 2 gin_extract_value_trgm (text, internal),\nFUNCTION 3 gin_extract_query_trgm (text, internal, int2, internal, internal, internal, internal),\nFUNCTION 4 gin_trgm_consistent (internal, int2, text, int4, internal, internal, internal, internal),\nSTORAGE int4;\n\nALTER OPERATOR FAMILY gin_trgm_ops_ci_new USING gin ADD\nOPERATOR 3 ~~ (citext, citext),\nOPERATOR 4 ~~* (citext, citext);\n\nALTER OPERATOR FAMILY gin_trgm_ops_ci_new USING gin ADD\nOPERATOR 7 %> (text, text),\nFUNCTION 6 (text,text) gin_trgm_triconsistent (internal, int2, text, int4, internal, internal, internal);\n\nOur question is, does anyone see any flaw in this? 
\nAlso, could this not be incorporated into postgres natively?\nI'm posting the old and new explain plans;\nNew explain;\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=874327.76..874327.77 rows=1 width=8) (actual time=21.952..21.954 rows=1 loops=1)-> Nested Loop (cost=1620.95..874284.13 rows=17449 width=0) (actual time=6.259..21.948 rows=9 loops=1)-> Bitmap Heap Scan on t775 b1 (cost=1620.39..525029.25 rows=45632 width=35) (actual time=6.212..8.189 rows=13 loops=1)Recheck Cond: ((c240001002 ~~ 'smp%'::citext) OR (c200000020 ~~ 'smp%'::citext) OR (c200000001 ~~ 'smp%'::citext))Rows Removed by Index Recheck: 259Filter: ((c400079600 <> 'ABC_BUSINESSSERVICE'::citext) AND (c400127400 = 'ABC.ASSET'::citext) AND ((c1000000001 = 'Mrictton Global'::citext) OR (c1000000001 = 'ABCOpsMonitoring'::citext) OR (c1000000001 = 'Mrictton'::citext) OR (c1000000001 = 'Mrictton EITTE'::citext) OR (c1000000001 = 'Mrictton Finance'::citext) OR (c1000000001 = 'Mrictton Generic Services and Support'::citext) OR (c1000000001 = 'Mrictton Global'::citext) OR (c1000000001 = 'Mrictton Global Demo Solutions'::citext) OR (c1000000001 = 'Mrictton HR Direct'::citext) OR (c1000000001 = 'Mrictton Marketing and Communications'::citext) OR (c1000000001 = 'Ericsson Master Data Management'::citext) OR (c1000000001 = 'Mrictton OHS'::citext) OR (c1000000001 = 'Mrictton Patents and Licensing'::citext) OR (c1000000001 = 'Mrictton Sales'::citext) OR (c1000000001 = 'MricttonSecurity'::citext) OR (c1000000001 = 'Mrictton Shared Services'::citext) OR (c1000000001 = 'Mrictton Sourcing'::citext) OR (c1000000001 = 'Mrictton Supply ROD'::citext) OR 
(c1000000001 = 'Mrictton SW Supply Operations'::citext) OR (c1000000001 = 'Remedy,a ABC Software Company'::citext)) AND (c400079600 = ANY ('{ABC_DATABASE,ABC_ACCOUNT,ABC_MEDIA,ABC.CORE:ABC_CONCRETECOLLECTION,ABC_PACKAGE,ABC_BIOS,ABC_SYSTEMSOFTWARE,ABC_KEYBOARD,ABC_LAN,ABC_LOGICALSYSTEMCOMPONENT,ABC_LNSGROUP,ABC_PHYSICALLOCATION,ABC_FLOPPYDRIVE,ABC_DOCUMENT,ABC_BUSINESSSERVICE,ABC_DATABASESTORAGE,ABC_NETWORKPORT,ABC_VIRTUALSYSTEMENABLER,ABC_POINTINGDEVICE,ABC_PRINTER,ABC_SYSTEMRESOURCE,ABC_CONNECTIVITYSEGMENT,ABC.CORE:ABC_BUSINESSPROCESS,ABC_PROTOCOLENDPOINT,ABC_TRANSACTION,ABC_APPLICATIONINFRASTRUCTURE,ABC_SOFTWARESERVER,ABC_UPS,ABC_ACTIVITY,ABC_CDROMDRIVE,ABC.CORE:ABC_RASD,ABC_PRODUCT,ABC_REMOTEFILESYSTEM,ABC_IPENDPOINT,ABC_LOCALFILESYSTEM,ABC_APPLICATION,ABC_IPCONNECTIVITYSUBNET,ABC_CLUSTER,ABC_CHASSIS,ABC_WAN,ABC_PATCH,ABC_ADMINDOMAIN,ABC.CORE:ABC_RESOURCEPOOL,ABC_IPXCONNECTIVITYNETWORK,ABC_HARDWARESYSTEMCOMPONENT,ABC_FILESYSTEM,ABC_MONITOR,ABC_CONNECTIVITYGROUP,ABC_EQUIPMENT,ABC_MAINFRAME,ABC_RACK,ABC_OPERATINGSYSTEM,ABC_PROCESSOR,ABC_SHARE,ABC_LANENDPOINT,ABC_HARDWAREPACKAGE,ABC_TAPEDRIVE,ABC_COMMUNICATIONENDPOINT,ABC_APPLICATIONSYSTEM,ABC_CARD,ABC_DISKPARTITION,ABC.CORE:ABC_VIRTUALSYSTEMSETTINGDATA,ABC_MEMORY,ABC_NTDOMAIN,ABC_COMPUTERSYSTEM,ABC_DISKDRIVE,ABC_SERVICEOFFERINGINSTANCE,ABC_ROLE,ABC_APPLICATIONSERVICE}'::citext[])))Rows Removed by Filter: 62Heap Blocks: exact=313-> BitmapOr (cost=1620.39..1620.39 rows=163489 width=0) (actual time=5.703..5.704 rows=0 loops=1)-> Bitmap Index Scan on oto2 (cost=0.00..528.72 rows=54496 width=0) (actual time=0.724..0.724 rows=41 loops=1)Index Cond: (c240001002 ~~ 'smp%'::citext)-> Bitmap Index Scan on oto3 (cost=0.00..528.72 rows=54496 width=0) (actual time=4.852..4.852 rows=331 loops=1)Index Cond: (c200000020 ~~ 'smp%'::citext)-> Bitmap Index Scan on oto4 (cost=0.00..528.72 rows=54496 width=0) (actual time=0.127..0.127 rows=0 loops=1)Index Cond: (c200000001 ~~ 'smp%'::citext)-> Index Scan using i1279_0_400129200_t1279 
on t1279 b2 (cost=0.56..7.64 rows=1 width=35) (actual time=1.057..1.058 rows=1 loops=13)Index Cond: (c400129200 = b1.c400129200)Filter: ((c7 <> 6) AND (c7 <> 8))Rows Removed by Filter: 0Planning Time: 2.478 msExecution Time: 22.059 ms(21 rows)\nTime: 26.510 ms\nOld explain with slow plan;\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=1926420.44..1926420.70 rows=102 width=1199) (actual time=16396.091..16569.194 rows=9 loops=1)-> Sort (cost=1926420.44..1926458.76 rows=15326 width=1199) (actual time=16396.089..16569.190 rows=9 loops=1)Sort Key: b1.c200000020 NULLS FIRST, ((concat((concat(b1.c1, '|'))::citext, COALESCE(b2.c1, ''::citext)))::citext)Sort Method: quicksort Memory: 29kB-> WindowAgg (cost=1000.56..1925832.51 rows=15326 width=1199) (actual time=16396.025..16569.138 rows=9 loops=1)-> Gather (cost=1000.56..1925564.30 rows=15326 width=1191) (actual time=4288.742..16569.068 rows=9 loops=1)Workers Planned: 6Workers Launched: 6-> Nested Loop (cost=0.56..1923031.70 rows=2554 width=1191) (actual time=9430.362..16387.794 rows=1 loops=7)-> Parallel Seq Scan on t1279 b2 (cost=0.00..530806.15 rows=416134 width=910) (actual time=0.016..575.311 rows=353200 loops=7)Filter: ((c7 <> 6) AND (c7 <> 8))Rows Removed by Filter: 574840-> Index Scan using efrain_test_ix_t775_2 on t775 b1 (cost=0.56..3.34 rows=1 width=316) (actual time=0.044..0.044 rows=0 loops=2472402)Index Cond: ((c400129200 = b2.c400129200) AND (c400127400 = 'ABC.ASSET'::citext))Filter: ((c400079600 <> 'ABC_BUSINESSSERVICE'::citext) AND ((c240001002 ~~ 'smp%'::citext) OR 
(c200000020 ~~ 'smp%'::citext) OR (c200000001 ~~ 'smp%'::citext)) AND ((c1000000001 ='Mrictton Global'::citext) OR (c1000000001 = 'ABCOpsMonitoring'::citext) OR (c1000000001 = 'Mrictton'::citext) OR (c1000000001 = 'Mrictton EITTE'::citext) OR (c1000000001 = 'Mrictton Finance'::citext) OR (c1000000001 = 'Mrictton Generic Services and Support'::citext) OR (c1000000001 = 'Mrictton Global'::citext) OR (c1000000001 = 'Mrictton Global Demo Solutions'::citext) OR (c1000000001 = 'Mrictton HR Direct'::citext) OR (c1000000001 = 'Mrictton Marketing and Communications'::citext) OR (c1000000001 = 'Mrictton Master Data Management'::citext) OR (c1000000001 = 'Mrictton OHS'::citext) OR (c1000000001 = 'Mrictton Patents and Licensing'::citext) OR (c1000000001 = 'Mrictton Sales'::citext) OR (c1000000001 = 'Mrictton Security'::citext) OR (c1000000001 = 'Mrictton Shared Services'::citext) OR (c1000000001 = 'Mrictton Sourcing'::citext) OR (c1000000001 = 'Mrictton Supply ROD'::citext) OR (c1000000001 = 'Mrictton SW Supply Operations'::citext) OR (c1000000001 = 'Remedy,a ABC Software Company'::citext)) AND (c400079600 = ANY 
('{ABC_DATABASE,ABC_ACCOUNT,ABC_MEDIA,ABC.CORE:ABC_CONCRETECOLLECTION,ABC_PACKAGE,ABC_BIOS,ABC_SYSTEMSOFTWARE,ABC_KEYBOARD,ABC_LAN,ABC_LOGICALSYSTEMCOMPONENT,ABC_LNSGROUP,ABC_PHYSICALLOCATION,ABC_FLOPPYDRIVE,ABC_DOCUMENT,ABC_BUSINESSSERVICE,ABC_DATABASESTORAGE,ABC_NETWORKPORT,ABC_VIRTUALSYSTEMENABLER,ABC_POINTINGDEVICE,ABC_PRINTER,ABC_SYSTEMRESOURCE,ABC_CONNECTIVITYSEGMENT,ABC.CORE:ABC_BUSINESSPROCESS,ABC_PROTOCOLENDPOINT,ABC_TRANSACTION,ABC_APPLICATIONINFRASTRUCTURE,ABC_SOFTWARESERVER,ABC_UPS,ABC_ACTIVITY,ABC_CDROMDRIVE,ABC.CORE:ABC_RASD,ABC_PRODUCT,ABC_REMOTEFILESYSTEM,ABC_IPENDPOINT,ABC_LOCALFILESYSTEM,ABC_APPLICATION,ABC_IPCONNECTIVITYSUBNET,ABC_CLUSTER,ABC_CHASSIS,ABC_WAN,ABC_PATCH,ABC_ADMINDOMAIN,ABC.CORE:ABC_RESOURCEPOOL,ABC_IPXCONNECTIVITYNETWORK,ABC_HARDWARESYSTEMCOMPONENT,ABC_FILESYSTEM,ABC_MONITOR,ABC_CONNECTIVITYGROUP,ABC_EQUIPMENT,ABC_MAINFRAME,ABC_RACK,ABC_OPERATINGSYSTEM,ABC_PROCESSOR,ABC_SHARE,ABC_LANENDPOINT,ABC_HARDWAREPACKAGE,ABC_TAPEDRIVE,ABC_COMMUNICATIONENDPOINT,ABC_APPLICATIONSYSTEM,ABC_CARD,ABC_DISKPARTITION,ABC.CORE:ABC_VIRTUALSYSTEMSETTINGDATA,ABC_MEMORY,ABC_NTDOMAIN,ABC_COMPUTERSYSTEM,ABC_DISKDRIVE,ABC_SERVICEOFFERINGINSTANCE,ABC_ROLE,ABC_APPLICATIONSERVICE}'::citext[])))Rows Removed by Filter: 1Planning Time: 3.205 msExecution Time: 16569.351 ms(18 rows)\nTime: 16577.806 ms (00:16.578)\nProductsPostgreSQL Community EditionProduct VersionPostgreSQL 12\n\nThanks.",
"msg_date": "Thu, 13 Jan 2022 04:51:56 +0000 (UTC)",
"msg_from": "\"Efrain J. Berdecia\" <ejberdecia@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Custom Operator for citext LIKE predicates question"
},
{
"msg_contents": "\"Efrain J. Berdecia\" <ejberdecia@yahoo.com> writes:\n> After attempting to use gin and gist indexes for our queries that run against citext columns, our team has come up with the following to make our queries run from 2 mins to 25ms;CREATE EXTENSION pg_trgmCREATE EXTENSION btree_gin --may not be needed, checking\n> CREATE OPERATOR CLASS gin_trgm_ops_ci_newFOR TYPE citext USING ginASOPERATOR 1 % (text, text),FUNCTION 1 btint4cmp (int4, int4),FUNCTION 2 gin_extract_value_trgm (text, internal),FUNCTION 3 gin_extract_query_trgm (text, internal, int2, internal, internal, internal, internal),FUNCTION 4 gin_trgm_consistent (internal,int2, text, int4, internal, internal, internal, internal),STORAGE int4;\n> ALTER OPERATOR FAMILY gin_trgm_ops_ci_new USING gin ADDOPERATOR 3 ~~ (citext, citext),OPERATOR 4 ~~* (citext, citext);ALTER OPERATOR FAMILY gin_trgm_ops_ci_new USING gin ADDOPERATOR 7 %> (text, text),FUNCTION 6 (text,text) gin_trgm_triconsistent (internal, int2, text, int4, internal, internal, internal);\n\n> Our question is, does anyone see any flaw on this? \n\nUmm ... does it actually work? I'd expect that you get case-sensitive\ncomparison behavior in such an index, because those support functions\nare for plain text and they're not going to know that you'd like\ncase-insensitive behavior.\n\nYou generally can't make a new gin or gist opclass without actually\nwriting some C code, because the support functions embody all\nthe semantics of the operators.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jan 2022 00:58:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Custom Operator for citext LIKE predicates question"
},
{
"msg_contents": "Thank you for the feedback.\nIn our setup it has actually worked per the explains provided making the query run in milliseconds instead of seconds.\nWe weren't sure if this should be something that could be added natively with future Postgres deployments.\nThanks,Efrain J. Berdecia \n\n On Thursday, January 13, 2022, 12:58:27 AM EST, Tom Lane <tgl@sss.pgh.pa.us> wrote: \n \n \"Efrain J. Berdecia\" <ejberdecia@yahoo.com> writes:\n> After attempting to use gin and gist indexes for our queries that run against citext columns, our team has come up with the following to make our queries run from 2 mins to 25ms;CREATE EXTENSION pg_trgmCREATE EXTENSION btree_gin --may not be needed, checking\n> CREATE OPERATOR CLASS gin_trgm_ops_ci_newFOR TYPE citext USING ginASOPERATOR 1 % (text, text),FUNCTION 1 btint4cmp (int4, int4),FUNCTION 2 gin_extract_value_trgm (text, internal),FUNCTION 3 gin_extract_query_trgm (text, internal, int2, internal, internal, internal, internal),FUNCTION 4 gin_trgm_consistent (internal,int2, text, int4, internal, internal, internal, internal),STORAGE int4;\n> ALTER OPERATOR FAMILY gin_trgm_ops_ci_new USING gin ADDOPERATOR 3 ~~ (citext, citext),OPERATOR 4 ~~* (citext, citext);ALTER OPERATOR FAMILY gin_trgm_ops_ci_new USING gin ADDOPERATOR 7 %> (text, text),FUNCTION 6 (text,text) gin_trgm_triconsistent (internal, int2, text, int4, internal, internal, internal);\n\n> Our question is, does anyone see any flaw on this? \n\nUmm ... does it actually work? I'd expect that you get case-sensitive\ncomparison behavior in such an index, because those support functions\nare for plain text and they're not going to know that you'd like\ncase-insensitive behavior.\n\nYou generally can't make a new gin or gist opclass without actually\nwriting some C code, because the support functions embody all\nthe semantics of the operators.\n\n regards, tom lane",
"msg_date": "Thu, 13 Jan 2022 12:38:33 +0000 (UTC)",
"msg_from": "\"Efrain J. Berdecia\" <ejberdecia@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Custom Operator for citext LIKE predicates question"
},
{
"msg_contents": "\"Efrain J. Berdecia\" <ejberdecia@yahoo.com> writes:\n> In our setup it has actually worked per the explains provided making the query run in milliseconds instead of seconds.\n\nTo me, \"work\" includes \"get the right answer\". I do not think you\nare getting the same answers that citext would normally provide.\nIf you don't care about case-insensitivity, why don't you just\nuse plain text?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jan 2022 10:10:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Custom Operator for citext LIKE predicates question"
},
{
"msg_contents": "Good points. At least on the limited testing we did, we were able to get the same answer back with both executions; at least for the use cases we tested. \nWe are still doing more thorough testing.\nThis is an application that has been ported from MS SQL server to postgres and apparently the migration dba team determined citext was the way to go to maintain MSSQL existing usage of the data in the columns.\n\nThanks,Efrain J. Berdecia \n\n On Thursday, January 13, 2022, 10:10:38 AM EST, Tom Lane <tgl@sss.pgh.pa.us> wrote: \n \n \"Efrain J. Berdecia\" <ejberdecia@yahoo.com> writes:\n> In our setup it has actually worked per the explains provided making the query run in milliseconds instead of seconds.\n\nTo me, \"work\" includes \"get the right answer\". I do not think you\nare getting the same answers that citext would normally provide.\nIf you don't care about case-insensitivity, why don't you just\nuse plain text?\n\n regards, tom lane",
"msg_date": "Thu, 13 Jan 2022 17:20:36 +0000 (UTC)",
"msg_from": "\"Efrain J. Berdecia\" <ejberdecia@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: Custom Operator for citext LIKE predicates question"
}
] |
[
{
"msg_contents": "Hello All,\n\nIt looks like we could have different isolation levels on primary and\nstandby servers in the context of replication. If the primary crashes\nand a standby server is made as primary, there could be change in\nquery results because of isolation levels. Is that expected?\n\nThanks,\nRKN\n\n\n",
"msg_date": "Thu, 13 Jan 2022 16:46:57 +0530",
"msg_from": "RKN Sai Krishna <rknsaiforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Isolation levels on primary and standby"
},
{
"msg_contents": "\n\n> On 13 Jan 2022, at 16:16, RKN Sai Krishna <rknsaiforpostgres@gmail.com> wrote:\n> \n> It looks like we could have different isolation levels on primary and\n> standby servers in the context of replication. If the primary crashes\n> and a standby server is made as primary, there could be change in\n> query results because of isolation levels. Is that expected?\n\nHi, RKN!\n\nTransaction isolation level can be set at the beginning of each individual transaction.\nYou can read more about transaction isolation levels at [0].\n\nThere are many settings which can be configured differently on a standby. E.g. default_transaction_isolation [1]. This is expected behaviour, AFAIK.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://www.postgresql.org/docs/current/transaction-iso.html\n[1] https://www.postgresql.org/docs/current/runtime-config-client.html\n\n",
"msg_date": "Thu, 13 Jan 2022 16:38:24 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Isolation levels on primary and standby"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 4:47 PM RKN Sai Krishna\n<rknsaiforpostgres@gmail.com> wrote:\n>\n> Hello All,\n>\n> It looks like we could have different isolation levels on primary and\n> standby servers in the context of replication. If the primary crashes\n> and a standby server is made as primary, there could be change in\n> query results because of isolation levels. Is that expected?\n\nI think it is possible because the standbys are free to use their own\nisolation levels for different purposes. During the failover onto the\nstandby, the code/tool that's triggering the failover will have to\ntake care of resetting the isolation level back to the crashed\nprimary. Presently, the standby requires the max_connections,\nmax_worker_processes, max_wal_senders, max_prepared_transactions and\nmax_locks_per_transaction (see the code in\nCheckRequiredParameterValues) parameters to be the same as with the\nprimary, otherwise the standby doesn't start. The postgres doesn't\nenforce the standby's isolation level with the primary, though.\n\nIIUC, the WAL that gets generated on the primary doesn't depend on its\nisolation level, in other words, the WAL records have no information\nof the isolation level. It is the MVCC snapshot, that is taken at the\nstart of the txn, doing the trick for different isolation levels.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 13 Jan 2022 20:17:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Isolation levels on primary and standby"
}
] |
[
{
"msg_contents": "This doesn't work anymore:\n\ncreate type e2 as enum ('foo', 'bar');\nalter type e2 rename value 'b<TAB>\n\nThis now results in\n\nalter type e2 rename value 'b'\n\nBisecting blames\n\ncommit cd69ec66c88633c09bc9a984a7f0930e09c7c96e\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Jan 23 11:07:12 2020 -0500\n\n Improve psql's tab completion for filenames.\n\nwhich did deal with quoting of things to be completed, so it seems very \nplausible to be some collateral damage.\n\nThe queries issued by the completion engine are\n\nbad\n\nLOG: statement: SELECT pg_catalog.quote_literal(enumlabel) FROM \npg_catalog.pg_enum e, pg_catalog.pg_type t WHERE t.oid = e.enumtypid \nAND substring(pg_catalog.quote_literal(enumlabel),1,1)='b' AND \n(pg_catalog.quote_ident(typname)='e2' OR '\"' || typname || \n'\"'='e2') AND pg_catalog.pg_type_is_visible(t.oid)\tLIMIT 1000\n\ngood\n\nLOG: statement: SELECT pg_catalog.quote_literal(enumlabel) FROM \npg_catalog.pg_enum e, pg_catalog.pg_type t WHERE t.oid = e.enumtypid \nAND substring(pg_catalog.quote_literal(enumlabel),1,2)='''b' AND \n(pg_catalog.quote_ident(typname)='e2' OR '\"' || typname || \n'\"'='e2') AND pg_catalog.pg_type_is_visible(t.oid)\tLIMIT 1000\n\nI tried quickly fiddling with the substring() call to correct for the \nchanged offset somehow, but didn't succeed. Needs more analysis.\n\n\n",
"msg_date": "Thu, 13 Jan 2022 12:23:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "tab completion of enum values is broken"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This doesn't work anymore:\n> create type e2 as enum ('foo', 'bar');\n> alter type e2 rename value 'b<TAB>\n> This now results in\n> alter type e2 rename value 'b'\n\nUgh. I'll take a look.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jan 2022 10:39:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tab completion of enum values is broken"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This doesn't work anymore:\n> create type e2 as enum ('foo', 'bar');\n> alter type e2 rename value 'b<TAB>\n> This now results in\n> alter type e2 rename value 'b'\n\nThe main issue here is that Query_for_list_of_enum_values[_with_schema]\nis designed to match against a pre-quoted list of enum values,\nwhich was appropriate when it was written because we hadn't configured\nReadline to do anything special with quotes. Now that we have, the\nstring that is being supplied to match against lacks the leading quote,\nso that we need to remove the quote_literal() calls from those queries.\n\nA secondary problem is that if we fail to identify any match, Readline\nnonetheless decides to append a trailing quote. That's why you get the\nadded quote above, and it still happens with the query fix. I'm not\nsure why Readline thinks it should do that. I worked around it in the\nattached draft patch by setting rl_completion_suppress_quote = 1 in our\nfailure-to-match case, but that feels like using a big hammer rather\nthan a proper solution.\n\nI'm not totally satisfied with this patch for a couple of reasons:\n\n1. It'll allow the user to enter a non-quoted enum value,\n\tfor example alter type e2 rename value b<TAB>\nproduces\n\talter type e2 rename value bar \nIt's not clear to me that there's any way around that, though.\nI tried returning pre-quoted values as we did before (ie,\nchanging only the WHERE clauses in the queries) but then\nReadline fails to match anything. We do have code to force\nquoting of actual filenames, but I think that's dependent on\ngoing through rl_filename_completion_function(), which of course\nwe can't do here.\n\n2. 
It doesn't seem like there's any nice way to deal with enum\nvalues that contain single quotes (which need to be doubled).\nAdmittedly the use-case for that is probably epsilon, but\nit annoys me that it doesn't work.\n\nIn the end, it seems like the value of this specific completion\nrule is not large enough to justify doing a ton of work to\neliminate #1 or #2. So I propose doing the attached and calling\nit good. Maybe we could add a test case.\n\nOh ... experimenting on macOS (with the system-provided libedit)\nshows no bug here. So I guess we'll need to make this conditional\nsomehow, perhaps on USE_FILENAME_QUOTING_FUNCTIONS. That's another\nreason for not going overboard.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 13 Jan 2022 14:41:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tab completion of enum values is broken"
},
{
"msg_contents": "I wrote:\n> Oh ... experimenting on macOS (with the system-provided libedit)\n> shows no bug here. So I guess we'll need to make this conditional\n> somehow, perhaps on USE_FILENAME_QUOTING_FUNCTIONS. That's another\n> reason for not going overboard.\n\nAfter further fooling with that, I concluded that the only workable\nsolution is a run-time check for whether the readline library included\nthe leading quote in what it hands us. A big advantage of doing it this\nway is that it mostly fixes my complaint #1: by crafting the check\nproperly, we will include quotes if the user hits TAB without having typed\nanything, and we won't complete an incorrectly non-quoted identifier.\nThere's still nothing to be done about single quotes inside an enum\nlabel, but I'm okay with blowing that case off.\n\nSo I think the attached is committable. I've tried it on readline\n7.0 (RHEL8) as well as whatever libedit Apple is currently shipping.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 14 Jan 2022 17:48:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tab completion of enum values is broken"
}
] |
[
{
"msg_contents": "Hi,\n\nI was re-reviewing the proposed batch of GUCs for controlling the SLRU\ncache sizes[1], and I couldn't resist sketching out $SUBJECT as an\nobvious alternative. This patch is highly experimental and full of\nunresolved bits and pieces (see below for some), but it passes basic\ntests and is enough to start trying the idea out and figuring out\nwhere the real problems lie. The hypothesis here is that CLOG,\nmultixact, etc data should compete for space with relation data in one\nunified buffer pool so you don't have to tune them, and they can\nbenefit from the better common implementation (mapping, locking,\nreplacement, bgwriter, checksums, etc and eventually new things like\nAIO, TDE, ...).\n\nI know that many people have talked about doing this and maybe they\nalready have patches along these lines too; I'd love to know what\nothers imagined differently/better.\n\nIn the attached sketch, the SLRU caches are pseudo-relations in\npseudo-database 9. Yeah. That's a straw-man idea stolen from the\nZheap/undo project[2] (I also stole DiscardBuffer() from there);\nbetter ideas for identifying these buffers without making BufferTag\nbigger are very welcome. You can list SLRU buffers with:\n\n WITH slru(relfilenode, path) AS (VALUES (0, 'pg_xact'),\n (1, 'pg_multixact/offsets'),\n (2, 'pg_multixact/members'),\n (3, 'pg_subtrans'),\n (4, 'pg_serial'),\n (5, 'pg_commit_ts'),\n (6, 'pg_notify'))\n SELECT bufferid, path, relblocknumber, isdirty, usagecount, pinning_backends\n FROM pg_buffercache NATURAL JOIN slru\n WHERE reldatabase = 9\n ORDER BY path, relblocknumber;\n\nHere are some per-cache starter hypotheses about locking that might be\ncompletely wrong and obviously need real analysis and testing.\n\npg_xact:\n\nI couldn't easily get rid of XactSLRULock, because it didn't just\nprotect buffers, it's also used to negotiate \"group CLOG updates\". 
(I\nthink it'd be nice to replace that system with an atomic page update\nscheme so that concurrent committers stay on CPU, something like [3],\nbut that's another topic.) I decided to try a model where readers\nonly have to pin the page (the reads are sub-byte values that we can\nread atomically, and you'll see a value as least as fresh as the time\nyou took the pin, right?), but writers have to take an exclusive\ncontent lock because otherwise they'd clobber each other at byte\nlevel, and because they need to maintain the page LSN consistently.\nWriting back is done with a share lock as usual and log flushing can\nbe done consistently. I also wanted to try avoiding the extra cost of\nlocking and accessing the buffer mapping table in common cases, so I\nuse ReadRecentBuffer() for repeat access to the same page (this\napplies to the other SLRUs too).\n\npg_subtrans:\n\nI got rid of SubtransSLRULock because it only protected page contents.\nCan be read with only a pin. Exclusive page content lock to write.\n\npg_multixact:\n\nI got rid of the MultiXact{Offset,Members}SLRULock locks. Can be read\nwith only a pin. Writers take exclusive page content lock. The\nmultixact.c module still has its own MultiXactGenLock.\n\npg_commit_ts:\n\nI got rid of CommitTsSLRULock since it only protected buffers, but\nhere I had to take shared content locks to read pages, since the\nvalues can't be read atomically. Exclusive content lock to write.\n\npg_serial:\n\nI could not easily get rid of SerialSLRULock, because it protects the\nSLRU + also some variables in serialControl. Shared and exclusive\npage content locks.\n\npg_notify:\n\nI got rid of NotifySLRULock. Shared and exclusive page content locks\nare used for reading and writing. The module still has a separate\nlock NotifyQueueLock to coordinate queue positions.\n\nSome problems tackled incompletely:\n\n* I needed to disable checksums and in-page LSNs, since SLRU pages\nhold raw data with no header. 
We'd probably eventually want regular\n(standard? formatted?) pages (the real work here may be implementing\nFPI for SLRUs so that checksums don't break your database on torn\nwrites). In the meantime, suppressing those things is done by the\nkludge of recognising database 9 as raw data, but there should be\nsomething better than this. A separate array of size NBuffer holds\n\"external\" page LSNs, to drive WAL flushing.\n\n* The CLOG SLRU also tracks groups of async commit LSNs in a fixed\nsized array. The obvious translation would be very wasteful (an array\nbig enough for NBuffers * groups per page), but I hope that there is a\nbetter way to do this... in the sketch patch I changed it to use the\nsingle per-page LSN for simplicity (basically group size is 32k\ninstead of 32...), which is certainly not good enough.\n\nSome stupid problems not tackled yet:\n\n* It holds onto the virtual file descriptor for the last segment\naccessed, but there is no invalidation for when segment files are\nrecycled; that could be fixed with a cycle counter or something like\nthat.\n\n* It needs to pin buffers during the critical section in commit\nprocessing, but that crashes into the ban on allocating memory while\ndealing with resowner.c book-keeping. It's also hard to know how many\nbuffers you'll need to pin in advance. For now, I just commented out\nthe assertions...\n\n* While hacking on the pg_stat_slru view I realised that there is\nsupport for \"other\" SLRUs, presumably for extensions to define their\nown. Does anyone actually do that? I, erm, didn't support that in\nthis sketch (not too hard though, I guess).\n\n* For some reason this is failing on Windows CI, but I haven't looked\ninto that yet.\n\nThoughts on the general concept, technical details? Existing patches\nfor this that are further ahead/better?\n\n[1] https://commitfest.postgresql.org/36/2627/\n[2] https://commitfest.postgresql.org/36/3228/\n[3] http://www.vldb.org/pvldb/vol13/p3195-kodandaramaih.pdf",
"msg_date": "Fri, 14 Jan 2022 02:59:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 9:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I was re-reviewing the proposed batch of GUCs for controlling the SLRU\n> cache sizes[1], and I couldn't resist sketching out $SUBJECT as an\n> obvious alternative. This patch is highly experimental and full of\n> unresolved bits and pieces (see below for some), but it passes basic\n> tests and is enough to start trying the idea out and figuring out\n> where the real problems lie. The hypothesis here is that CLOG,\n> multixact, etc data should compete for space with relation data in one\n> unified buffer pool so you don't have to tune them, and they can\n> benefit from the better common implementation (mapping, locking,\n> replacement, bgwriter, checksums, etc and eventually new things like\n> AIO, TDE, ...).\n\nI endorse this hypothesis. The performance cliff when the XID range\nyou're regularly querying exceeds the hardcoded constant is quite\nsteep, and yet we can't just keep pushing that constant up. Linear\nsearch does not scale well to infinitely large arrays.[citation\nneeded]\n\n> [ long list of dumpster-fire level problems with the patch ]\n\nHonestly, none of this sounds that bad. I mean, it sounds bad in the\nsense that you're going to have to fix all of this somehow and I'm\ngoing to unhelpfully give you no advice whatsoever about how to do\nthat, but my guess is that a moderate amount of persistence will be\nsufficient for you to get the job done. None of it sounds hopeless.\n\nBefore fixing all of that, one thing you might want to consider is\nwhether it uh, works. And by \"work\" I don't mean \"get the right\nanswer\" even though I agree with my esteemed fellow hacker that this\nis an important thing to do.[1] What I mean is that it would be good\nto see some evidence that the number of buffers that end up being used\nto cache any particular SLRU is somewhat sensible, and varies by\nworkload. For example, consider a pgbench workload. 
As you increase\nthe scale factor, the age of the oldest XIDs that you regularly\nencounter will also increase, because on the average, the row you're\nnow updating will not have been updated for a larger number of\ntransactions. So it would be interesting to know whether all of the\nCLOG buffers that are regularly being accessed do in fact remain in\ncache - and maybe even whether buffers that stop being regularly\naccessed get evicted in the face of cache pressure.\n\nAlso, the existing clog code contains a guard that absolutely prevents\nthe latest CLOG buffer from being evicted. Because - at least in a\npgbench test like the one postulated above, and probably in general -\nthe frequency of access to older CLOG buffers decays exponentially,\nevicting the newest or even the second or third newest CLOG buffer is\nreally bad. At present, there's a hard-coded guard to prevent the\nnewest buffer from being evicted, which is a band-aid, but an\neffective one. Even with that band-aid, evicting any of the most\nrecent few can produce a system-wide stall, where every backend ends\nup waiting for the evicted buffer to be retrieved. It would be\ninteresting to know whether the same problem can be recreated with\nyour patch, because the buffer eviction algorithm for shared buffers\nis only a little bit less dumb than the one for SLRUs, and can pretty\ncommonly devolve into little more than evict-at-random.\nEvict-at-random is very bad here, because evicting a hot CLOG page is\nprobably even worse than evicting, say, a btree root page.\n\nAnother interesting test might be one that puts pressure on some other\nSLRU, like pg_multixact or pg_subtrans. In general SLRU pages that are\nactually being used are hot enough that we should keep them in cache\nalmost no matter what else is competing for cache space ... 
but the\nnumber of such pages can be different from one SLRU to another, and\ncan change over time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] http://postgr.es/m/3151122.1642086632@sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 13 Jan 2022 12:46:21 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On 13/01/2022 15:59, Thomas Munro wrote:\n> Hi,\n> \n> I was re-reviewing the proposed batch of GUCs for controlling the SLRU\n> cache sizes[1], and I couldn't resist sketching out $SUBJECT as an\n> obvious alternative. This patch is highly experimental and full of\n> unresolved bits and pieces (see below for some), but it passes basic\n> tests and is enough to start trying the idea out and figuring out\n> where the real problems lie. The hypothesis here is that CLOG,\n> multixact, etc data should compete for space with relation data in one\n> unified buffer pool so you don't have to tune them, and they can\n> benefit from the better common implementation (mapping, locking,\n> replacement, bgwriter, checksums, etc and eventually new things like\n> AIO, TDE, ...).\n\n+1\n\n> I know that many people have talked about doing this and maybe they\n> already have patches along these lines too; I'd love to know what\n> others imagined differently/better.\n\nIIRC one issue with this has been performance. When an SLRU is working \nwell, a cache hit in the SLRU is very cheap. Linearly scanning the SLRU \narray is cheap, compared to computing the hash and looking up a buffer \nin the buffer cache. Would be good to do some benchmarking of that.\n\nI wanted to do this with the CSN (Commit Sequence Number) work a long \ntime ago. That would benefit from something like an SLRU with very fast \naccess, to look up CSNs. Even without CSNs, if the CLOG was very fast to \naccess we might not need hint bits anymore. I guess trying to make SLRUs \nfaster than they already are is moving the goalposts, but at a minimum \nlet's make sure they don't get any slower.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 17 Jan 2022 12:23:57 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 11:23 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> IIRC one issue with this has been performance. When an SLRU is working\n> well, a cache hit in the SLRU is very cheap. Linearly scanning the SLRU\n> array is cheap, compared to computing the hash and looking up a buffer\n> in the buffer cache. Would be good to do some benchmarking of that.\n\nOne trick I want to experiment with is trying to avoid the mapping\ntable lookup by using ReadRecentBuffer(). In the prototype patch I do\nthat for one buffer per SLRU cache (so that'll often point to the head\nCLOG page), but it could be extended to a small array of recently\naccessed buffers to scan linearly, much like the current SLRU happy\ncase except that it's not shared and doesn't need a lock so it's even\nhappier. I'm half joking here, but that would let us keep calling\nthis subsystem SLRU :-)\n\n> I wanted to do this with the CSN (Commit Sequence Number) work a long\n> time ago. That would benefit from something like an SLRU with very fast\n> access, to look up CSNs. Even without CSNs, if the CLOG was very fast to\n> access we might not need hint bits anymore. I guess trying to make SLRUs\n> faster than they already are is moving the goalposts, but at a minimum\n> let's make sure they don't get any slower.\n\nOne idea I've toyed with is putting a bitmap into shared memory where\neach bit corresponds to a range of (say) 256 xids, accessed purely\nwith atomics. If a bit is set we know they're all committed, and\notherwise you have to do more pushups. I've attached a quick and\ndirty experimental patch along those lines, but I don't think it's\nquite right, especially with people waving 64 bit xid patches around.\nPerhaps it'd need to be a smaller sliding window, somehow, and perhaps\npeople will balk at the wild and unsubstantiated assumption that\ntransactions generally commit.",
"msg_date": "Tue, 18 Jan 2022 09:05:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "Rebased, debugged and fleshed out a tiny bit more, but still with\nplenty of TODO notes and questions. I will talk about this idea at\nPGCon, so I figured it'd help to have a patch that actually applies,\neven if it doesn't work quite right yet. It's quite a large patch but\nthat's partly because it removes a lot of lines...",
"msg_date": "Fri, 27 May 2022 23:24:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On Fri, May 27, 2022 at 11:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Rebased, debugged and fleshed out a tiny bit more, but still with\n> plenty of TODO notes and questions. I will talk about this idea at\n> PGCon, so I figured it'd help to have a patch that actually applies,\n> even if it doesn't work quite right yet. It's quite a large patch but\n> that's partly because it removes a lot of lines...\n\nFWIW, here are my PGCon slides about this:\nhttps://speakerdeck.com/macdice/improving-the-slru-subsystem\n\nThere was a little bit of discussion on #pgcon-stream2 which I could\nsummarise as: can we figure out a way to keep parts of the CLOG pinned\nso that backends don't have to do that for each lookup? Then CLOG\nchecks become simple reads. There may be some relation to the idea of\n'nailing' btree root pages that I've heard of from a couple of people\nnow (with ProcSignalBarrier or something more fine grained along those\nlines if you need to unnail anything). Something to think about.\n\nI'm also wondering if it would be possible to do \"optimistic\" pinning\ninstead for reads that normally need only a pin, using some kind of\ncounter scheme with read barriers to tell you if the page might have\nbeen evicted after you read the data...\n\n\n",
"msg_date": "Sat, 28 May 2022 13:13:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-28 13:13:20 +1200, Thomas Munro wrote:\n> There was a little bit of discussion on #pgcon-stream2 which I could\n> summarise as: can we figure out a way to keep parts of the CLOG pinned\n> so that backends don't have to do that for each lookup? Then CLOG\n> checks become simple reads.\n\nIncluded in that is not needing to re-check that the identity of the buffer\nchanged since the last use and to not need a PrivateRefCountEntry. Neither is\ncheap...\n\nI'd structure it so that there's a small list of slru buffers that's pinned in\na \"shared\" mode. Entering the buffer into that increases the BufferDesc's\nrefcount, but is *not* memorialized in the backend's refcount structures,\nbecause it's \"owned by the SLRU\".\n\n\n> There may be some relation to the idea of\n> 'nailing' btree root pages that I've heard of from a couple of people\n> now (with ProcSignalBarrier or something more fine grained along those\n> lines if you need to unnail anything). Something to think about.\n\nI'm very doubtful it's a good idea to combine those things - I think it's\nquite different to come up with a design for SLRUs, of which there's a\nconstant number and shared memory ownership datastructures, and btree root\npages etc, of which there are arbitrary many.\n\nFor the nbtree (and similar) cases, I think it'd make sense to give backends a\nsize-limited number of pages they can keep pinned, but in a backend local\nway. With, as you suggest, a procsignal barrier or such to force release.\n\n\n> I'm also wondering if it would be possible to do \"optimistic\" pinning\n> instead for reads that normally need only a pin, using some kind of\n> counter scheme with read barriers to tell you if the page might have\n> been evicted after you read the data...\n\n-many\n\nThat seems fragile and complicated, without, at least to me, a clear need.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 29 May 2022 13:57:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On 28.05.2022 04:13, Thomas Munro wrote:\n> On Fri, May 27, 2022 at 11:24 PM Thomas Munro<thomas.munro@gmail.com> wrote:\n>> Rebased, debugged and fleshed out a tiny bit more, but still with\n>> plenty of TODO notes and questions. I will talk about this idea at\n>> PGCon, so I figured it'd help to have a patch that actually applies,\n>> even if it doesn't work quite right yet. It's quite a large patch but\n>> that's partly because it removes a lot of lines...\n> FWIW, here are my PGCon slides about this:\n> https://speakerdeck.com/macdice/improving-the-slru-subsystem\n>\n> There was a little bit of discussion on #pgcon-stream2 which I could\n> summarise as: can we figure out a way to keep parts of the CLOG pinned\n> so that backends don't have to do that for each lookup? Then CLOG\n> checks become simple reads. There may be some relation to the idea of\n> 'nailing' btree root pages that I've heard of from a couple of people\n> now (with ProcSignalBarrier or something more fine grained along those\n> lines if you need to unnail anything). Something to think about.\n>\n> I'm also wondering if it would be possible to do \"optimistic\" pinning\n> instead for reads that normally need only a pin, using some kind of\n> counter scheme with read barriers to tell you if the page might have\n> been evicted after you read the data...\n>\n>\n\n\nI wonder if there are some tests which can illustrate advantages of \nstoring SLRU pages in shared buffers?\nIn PgPro we had a customer which run PL-PgSql code with recursively \ncalled function containing exception handling code. 
Each exception block \ncreates subtransaction\nand subxids SLRU becomes bottleneck.\nI have simulated this workload with large number subxids using the \nfollowing function:\n\ncreate or replace function do_update(id integer, level integer) returns \nvoid as $$\nbegin\n begin\n if level > 0 then\n perform do_update(id, level-1);\n else\n update pgbench_accounts SET abalance = abalance + 1 WHERE \naid = id;\n end if;\n exception WHEN OTHERS THEN\n raise notice '% %', SQLERRM, SQLSTATE;\n end;\nend; $$ language plpgsql;\n\nWith the following test script:\n\n \\set aid random(1, 1000)\n select do_update(:aid,100)\n\nI got the following results:\n\nknizhnik@xps:~/db$ pgbench postgres -f update.sql -c 10 -T 100 -P 1 -M \nprepared\npgbench (15beta1)\nstarting vacuum...end.\nprogress: 1.0 s, 3030.8 tps, lat 3.238 ms stddev 1.110, 0 failed\nprogress: 2.0 s, 3018.0 tps, lat 3.303 ms stddev 1.088, 0 failed\nprogress: 3.0 s, 3000.4 tps, lat 3.329 ms stddev 1.063, 0 failed\nprogress: 4.0 s, 2855.6 tps, lat 3.494 ms stddev 1.152, 0 failed\nprogress: 5.0 s, 2747.0 tps, lat 3.631 ms stddev 1.306, 0 failed\nprogress: 6.0 s, 2664.0 tps, lat 3.743 ms stddev 1.410, 0 failed\nprogress: 7.0 s, 2498.0 tps, lat 3.992 ms stddev 1.659, 0 failed\n...\nprogress: 93.0 s, 670.0 tps, lat 14.964 ms stddev 10.555, 0 failed\nprogress: 94.0 s, 615.0 tps, lat 16.222 ms stddev 11.419, 0 failed\nprogress: 95.0 s, 580.0 tps, lat 17.251 ms stddev 11.622, 0 failed\nprogress: 96.0 s, 568.0 tps, lat 17.582 ms stddev 11.679, 0 failed\nprogress: 97.0 s, 573.0 tps, lat 17.389 ms stddev 11.771, 0 failed\nprogress: 98.0 s, 611.0 tps, lat 16.428 ms stddev 11.768, 0 failed\nprogress: 99.0 s, 568.0 tps, lat 17.622 ms stddev 11.912, 0 failed\nprogress: 100.0 s, 568.0 tps, lat 17.631 ms stddev 11.672, 0 failed\ntps = 1035.566054 (without initial connection time)\n\nWith Thomas patch results are the following:\n\nprogress: 1.0 s, 2949.8 tps, lat 3.332 ms stddev 1.285, 0 failed\nprogress: 2.0 s, 3009.1 tps, lat 3.317 
ms stddev 1.077, 0 failed\nprogress: 3.0 s, 2993.6 tps, lat 3.338 ms stddev 1.099, 0 failed\nprogress: 4.0 s, 3034.4 tps, lat 3.291 ms stddev 1.056, 0 failed\n...\nprogress: 97.0 s, 1113.0 tps, lat 8.972 ms stddev 3.885, 0 failed\nprogress: 98.0 s, 1138.0 tps, lat 8.803 ms stddev 3.496, 0 failed\nprogress: 99.0 s, 1174.8 tps, lat 8.471 ms stddev 3.875, 0 failed\nprogress: 100.0 s, 1094.1 tps, lat 9.123 ms stddev 3.842, 0 failed\ntps = 2133.240094 (without initial connection time)\n\nSo there is still degrade of performance but smaller than in case of \nvanilla and total TPS are almost two times higher.\n\nAnd this is another example demonstrating degrade of performance from \npresentation by Alexander Korotkov:\npgbench script:\n\n\\set aid random(1, 100000 * :scale)\n\\set bid random(1, 1 * :scale)\n\\set tid random(1, 10 * :scale)\n\\set delta random(-5000, 5000)\nBEGIN;\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime)\nVALUES(:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\nSAVEPOINT s1;\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime)\nVALUES(:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n....\nSAVEPOINT sN;\nINSERT INTO pgbench_history (tid, bid, aid, delta, mtime)\nVALUES(:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\nSELECT pg_sleep(1.0);\nEND;\n\n\n\n\n\nI wonder which workload can cause CLOG to become a bottleneck?\nUsually Postgres uses hint bits to avoid clog access. So standard \npgbench doesn't demonstrate any degrade of performance even in case of \npresence of long living transactions,\nwhich keeps XMIN horizon.",
"msg_date": "Thu, 16 Jun 2022 20:13:11 +0300",
"msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 1:13 PM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n> I wonder which workload can cause CLOG to become a bottleneck?\n> Usually Postgres uses hint bits to avoid clog access. So standard pgbench doesn't demonstrate any degrade of performance even in case of presence of long living transactions,\n> which keeps XMIN horizon.\n\nI haven't done research on this in a number of years and a bunch of\nother improvements have been made since then, but I remember\ndiscovering that CLOG could become a bottleneck on standard pgbench\ntests especially when using unlogged tables. The higher you raised the\nscale factor, the more of a bottleneck CLOG became. That makes sense:\nno matter what the scale factor is, you're constantly updating rows\nthat have not previously been hinted, but as the scale factor gets\nbigger, those rows are likely to be older (in terms of XID age) on\naverage, and so you need to cache more CLOG buffers to maintain\nperformance. But the SLRU system provides no such flexibility: it\ndoesn't scale to large numbers of buffers the way the code is written,\nand it certainly can't vary the number of buffers devoted to this\npurpose at runtime. So I think that the approach we're talking about\nhere has potential in that sense.\n\nHowever, another problem with the SLRU code is that it's old and\ncrufty and hard to work with. It's hard to imagine anyone being able\nto improve things very much as long as that's the basis. I don't know\nthat whatever code Thomas has written or will write is better, but if\nit is, that would be good, because I don't see a lot of improvement in\nthis area being possible otherwise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Jun 2022 12:21:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "Good day, Thomas\n\nВ Пт, 27/05/2022 в 23:24 +1200, Thomas Munro пишет:\n> Rebased, debugged and fleshed out a tiny bit more, but still with\n> plenty of TODO notes and questions. I will talk about this idea at\n> PGCon, so I figured it'd help to have a patch that actually applies,\n> even if it doesn't work quite right yet. It's quite a large patch but\n> that's partly because it removes a lot of lines...\n\nLooks like it have to be rebased again.\n\n\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:23:11 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On 21/07/2022 16:23, Yura Sokolov wrote:\n> Good day, Thomas\n> \n> В Пт, 27/05/2022 в 23:24 +1200, Thomas Munro пишет:\n>> Rebased, debugged and fleshed out a tiny bit more, but still with\n>> plenty of TODO notes and questions. I will talk about this idea at\n>> PGCon, so I figured it'd help to have a patch that actually applies,\n>> even if it doesn't work quite right yet. It's quite a large patch but\n>> that's partly because it removes a lot of lines...\n> \n> Looks like it have to be rebased again.\n\nHere's a rebase.\n\nI'll write a separate post with my thoughts on the high-level design of \nthis, but first a couple of more detailed issues:\n\nIn RecordTransactionCommit(), we enter a critical section, and then call \nTransactionIdCommitTree() to update the CLOG pages. That now involves a \ncall to ReadBuffer_common(), which in turn calls \nResourceOwnerEnlargeBuffers(). That can fail, because it might require \nallocating memory, which is forbidden in a critical section. I ran into \nan assertion about that with \"make check\" when I was playing around with \na heavily modified version of this patch. Haven't seen it with your \noriginal one, but I believe that's just luck.\n\nCalling ResourceOwnerEnlargeBuffers() before entering the critical \nsection would probably fix that, although I'm a bit worried about having \nthe Enlarge call so far away from the point where it's needed.\n\n> +void\n> +CheckPointSLRU(void)\n> +{\n> + /* Ensure that directory entries for new files are on disk. */\n> + for (int i = 0; i < lengthof(defs); ++i)\n> + {\n> + if (defs[i].synchronize)\n> + fsync_fname(defs[i].path, true);\n> + }\n> +}\n> +\n\nIs it really necessary to fsync() the directories? We don't do that \ntoday for the SLRUs, and we don't do it for the tablespace/db \ndirectories holding relations.\n\n- Heikki",
"msg_date": "Mon, 25 Jul 2022 09:54:25 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On 25/07/2022 09:54, Heikki Linnakangas wrote:\n> In RecordTransactionCommit(), we enter a critical section, and then call\n> TransactionIdCommitTree() to update the CLOG pages. That now involves a\n> call to ReadBuffer_common(), which in turn calls\n> ResourceOwnerEnlargeBuffers(). That can fail, because it might require\n> allocating memory, which is forbidden in a critical section. I ran into\n> an assertion about that with \"make check\" when I was playing around with\n> a heavily modified version of this patch. Haven't seen it with your\n> original one, but I believe that's just luck.\n> \n> Calling ResourceOwnerEnlargeBuffers() before entering the critical\n> section would probably fix that, although I'm a bit worried about having\n> the Enlarge call so far away from the point where it's needed.\n\nOh I just saw that you had a comment about that in the patch and had \nhacked around it. Anyway, calling ResourceOwnerEnlargeBuffers() might be \na solution. Or switch to a separate \"CriticalResourceOwner\" that's \nguaranteed to have enough pre-allocated space, before entering the \ncritical section.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:54:36 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On 25/07/2022 09:54, Heikki Linnakangas wrote:\n> I'll write a separate post with my thoughts on the high-level design of\n> this, ...\n\nThis patch represents each SLRU as a relation. The CLOG is one relation, \npg_subtrans is another relations, and so forth. The SLRU relations use a \ndifferent SMGR implementation, which is implemented in slru.c.\n\nAs you know, I'd like to make the SMGR implementation replaceable by \nextensions. We need that for Neon, and I'd imagine it to be useful for \nmany other things, too, like compression, encryption, or restoring data \nfrom a backup on-demand. I'd like all file operations to go through the \nsmgr API as much as possible, so that an extension can intercept SLRU \nfile operations too. If we introduce another internal SMGR \nimplementation, then an extension would need to replace both \nimplementations separately. I'd prefer to use the current md.c \nimplementation for SLRUs too, instead.\n\nThus I propose:\n\nLet's represent each SLRU *segment* as a separate relation, giving each \nSLRU segment a separate relNumber. Then we can use md.c for SLRUs, too. \nDropping an SLRU segment can be done by calling smgrunlink(). You won't \nneed to deal with missing segments in md.c, because each individual SLRU \nfile is a complete file, with no holes. Dropping buffers for one SLRU \nsegment can be done with DropRelationBuffers(), instead of introducing \nthe new DiscardBuffer() function. You can let md.c handle the caching of \nthe file descriptors, you won't need to reimplement that with \n'slru_file_segment'.\n\nSLRUs won't need the segmentation into 1 GB segments that md.c does, \nbecause each SLRU file is just 256 kB in size. That's OK. (BTW, I \npropose that we bump the SLRU segment size up to a whopping 1 MB or even \nmore, while we're at it. But one step at a time.)\n\nSLRUs also won't need the concept of relation forks. That's fine, we can \njust use MAIN_FORKNUM. 
Related to that, I'm somewhat bothered by the way \nthat SMgrRelation currently bundles all the relation forks together. A \ncomment in smgr.h says:\n\n> smgr.c maintains a table of SMgrRelation objects, which are essentially\n> cached file handles.\n\nBut when we introduced relation forks, that got a bit muddled. Each \nSMgrRelation object is now a file handle for a bunch of related relation \nforks, and each fork is a separate file that can be created and \ntruncated separately.\n\nThat means that an SMGR implementation, like md.c, needs to track the \nfile handles for each fork. I think things would be more clear if we \nunbundled the forks at the SMGR level, so that we would have a separate \nSMgrRelation struct for each fork. And let's rename it to SMgrFile to \nmake the role more clear. I think that would reduce the confusion when \nwe start using it for SLRUs; an SLRU is not a relation, after all. md.c \nwould still segment each logical file into 1 GB segments, but it would \nnot need to deal with forks.\n\nAttached is a draft patch to refactor it that way, and a refactored \nversion of your SLRU patch over that.\n\nThe relation cache now needs to hold a separate reference to the \nSMgrFile of each fork of a relation. And smgr cache invalidation still \nworks at relation granularity. Doing it per SmgrFile would be more clean \nin smgr.c, but in practice all the forks of a relation are unlinked and \ntruncated together, so sending a separate invalidation event for each \nSMgrFile would increase the cache invalidation traffic.\n\nIn the passing, I moved the DropRelationBuffers() calls from smgr.c to \nthe callers. smgr.c doesn't otherwise make any effort to keep the buffer \nmanager in sync with the state on-disk, that responsibility is normally \nwith the code that *uses* the smgr functions, so I think that's more \nlogical.\n\nThe first patch currently causes the '018_wal_optimize.pl' test to fail. 
\nI guess I messed up something in the relation truncation code, but I \nhaven't investigated it yet. I wanted to post this to get comments on \nthe design, before spending more time on that.\n\nWhat do you think?\n\n- Heikki",
"msg_date": "Mon, 25 Jul 2022 12:59:25 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "Hi Thomas,\r\n\r\nWhile I was working on adding the page headers to SLRU pages on your patch, I came across this code where it seems like \"MultiXactIdToMemberPage\" is mistakenly being used instead of MultiXactIdToOffsetPage in the TrimMultiXact function.\r\n\r\nBelow is the area of concern in the patch:\r\n\r\n@@ -2045,14 +1977,7 @@ TrimMultiXact(void)\r\n \toldestMXactDB = MultiXactState->oldestMultiXactDB;\r\n \tLWLockRelease(MultiXactGenLock);\r\n \r\n-\t/* Clean up offsets state */\r\n-\tLWLockAcquire(MultiXactOffsetSLRULock, LW_EXCLUSIVE);\r\n-\r\n-\t/*\r\n-\t * (Re-)Initialize our idea of the latest page number for offsets.\r\n-\t */\r\n-\tpageno = MultiXactIdToOffsetPage(nextMXact);\r\n-\tMultiXactOffsetCtl->shared->latest_page_number = pageno;\r\n+\tpageno = MXOffsetToMemberPage(offset);\r\n\r\n\r\nLet us know if I am missing something here or if it is an error.\r\n\r\nSincerely,\r\n\r\nRishu Bagga (Amazon Web Services)\r\n\r\nOn 9/16/22, 5:37 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n Rebased, debugged and fleshed out a tiny bit more, but still with\r\n plenty of TODO notes and questions. I will talk about this idea at\r\n PGCon, so I figured it'd help to have a patch that actually applies,\r\n even if it doesn't work quite right yet. It's quite a large patch but\r\n that's partly because it removes a lot of lines...\r\n\r\n",
"msg_date": "Sat, 17 Sep 2022 00:41:14 +0000",
"msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On Sat, Sep 17, 2022 at 12:41 PM Bagga, Rishu <bagrishu@amazon.com> wrote:\n> While I was working on adding the page headers to SLRU pages on your patch, I came across this code where it seems like \"MultiXactIdToMemberPage\" is mistakenly being used instead of MultiXactIdToOffsetPage in the TrimMultiXact function.\n\nThanks Rishu. Right. Will fix soon in the next version, along with\nmy long overdue replies to Heikki and Konstantin.\n\n\n",
"msg_date": "Wed, 21 Sep 2022 21:32:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 11:54:36AM +0300, Heikki Linnakangas wrote:\n\n> Oh I just saw that you had a comment about that in the patch and had hacked\n> around it. Anyway, calling ResourceOwnerEnlargeBuffers() might be a\n> solution. Or switch to a separate \"CriticalResourceOwner\" that's guaranteed\n> to have enough pre-allocated space, before entering the critical section.\n\nWanted to bump up this thread. Rishu in my team posted a patch in the other \nSLRU thread [1] with the latest updates and fixes and looks like performance \nnumbers do not show any regression. This change is currently in the \nJanuary commitfest [2] as well. Any feedback would be appreciated!\n\n[1]\nhttps://www.postgresql.org/message-id/A09EAE0D-0D3F-4A34-ADE9-8AC1DCBE7D57%40amazon.com\n[2] https://commitfest.postgresql.org/41/3514/\n\nShawn \nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Fri, 20 Jan 2023 17:00:18 +0000",
"msg_from": "Shawn Debnath <clocksweep@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On 20/01/2023 19:00, Shawn Debnath wrote:\n> On Mon, Jul 25, 2022 at 11:54:36AM +0300, Heikki Linnakangas wrote:\n> \n>> Oh I just saw that you had a comment about that in the patch and had hacked\n>> around it. Anyway, calling ResourceOwnerEnlargeBuffers() might be a\n>> solution. Or switch to a separate \"CriticalResourceOwner\" that's guaranteed\n>> to have enough pre-allocated space, before entering the critical section.\n> \n> Wanted to bump up this thread. Rishu in my team posted a patch in the other\n> SLRU thread [1] with the latest updates and fixes and looks like performance\n> numbers do not show any regression. This change is currently in the\n> January commitfest [2] as well. Any feedback would be appreciated!\n\nHere's a rebased set of patches.\n\nThe second patch is failing the pg_upgrade tests. Before I dig into \nthat, I'd love to get some feedback on this general approach.\n\n- Heikki",
"msg_date": "Mon, 27 Feb 2023 15:31:55 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "On 27/02/2023 15:31, Heikki Linnakangas wrote:\n> On 20/01/2023 19:00, Shawn Debnath wrote:\n>> On Mon, Jul 25, 2022 at 11:54:36AM +0300, Heikki Linnakangas wrote:\n>>\n>>> Oh I just saw that you had a comment about that in the patch and had hacked\n>>> around it. Anyway, calling ResourceOwnerEnlargeBuffers() might be a\n>>> solution. Or switch to a separate \"CriticalResourceOwner\" that's guaranteed\n>>> to have enough pre-allocated space, before entering the critical section.\n>>\n>> Wanted to bump up this thread. Rishu in my team posted a patch in the other\n>> SLRU thread [1] with the latest updates and fixes and looks like performance\n>> numbers do not show any regression. This change is currently in the\n>> January commitfest [2] as well. Any feedback would be appreciated!\n> \n> Here's a rebased set of patches.\n> \n> The second patch is failing the pg_upgrade tests. Before I dig into\n> that, I'd love to get some feedback on this general approach.\n\nForgot to include the new \"slrulist.h\" file in the previous patch, fixed \nhere.\n\n- Heikki",
"msg_date": "Mon, 27 Feb 2023 15:36:30 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "Hi,\n\n> > Here's a rebased set of patches.\n> >\n> > The second patch is failing the pg_upgrade tests. Before I dig into\n> > that, I'd love to get some feedback on this general approach.\n>\n> Forgot to include the new \"slrulist.h\" file in the previous patch, fixed\n> here.\n\nUnfortunately the patchset rotted quite a bit since February and needs a rebase.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 11 Jul 2023 14:52:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "Hi,\n\n> Unfortunately the patchset rotted quite a bit since February and needs a rebase.\n\nA consensus was reached [1] to mark this patch as RwF for now. There\nare many patches to be reviewed and this one doesn't seem to be in the\nbest shape, so we have to prioritise. Please feel free re-submitting\nthe patch for the next commitfest.\n\n[1]: https://postgr.es/m/0737f444-59bb-ac1d-2753-873c40da0840%40eisentraut.org\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 15:31:02 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
},
{
"msg_contents": "Hi,\n\n> > Unfortunately the patchset rotted quite a bit since February and needs a rebase.\n>\n> A consensus was reached to mark this patch as RwF for now. There\n> are many patches to be reviewed and this one doesn't seem to be in the\n> best shape, so we have to prioritise. Please feel free re-submitting\n> the patch for the next commitfest.\n\nSee also [1]\n\n\"\"\"\n[...]\nAlso, please consider joining the efforts and having one thread\nwith a single patchset rather than different threads with different\ncompeting patches. This will simplify the work of the reviewers a lot.\n\nPersonally I would suggest taking one step back and agree on a\nparticular RFC first and then continue working on a single patchset\naccording to this RFC. We did it in the past in similar cases and this\napproach proved to be productive.\n[...]\n\"\"\"\n\n[1]: https://postgr.es/m/CAJ7c6TME5Z8k4undYUmKavD_dQFL0ujA%2BzFCK1eTH0_pzM%3DXrA%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 19:02:26 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: SLRUs in the main buffer pool, redux"
}
] |
[
{
"msg_contents": "Hello,\n\nI am using the libpq to consume a replication slot.\nVery rarely, i would get a very strange error that i can't find any\ninformation on.\nIt is not mentioned in any documentation and i don't know under what\nconditions it triggers.\nAny help is appreciated.\n\nThe function i am calling is : `PQgetCopyData`\nthe error code is `-1` and the error text is `invalid ordering of\nspeculative insertion changes`\n\nHello,I am using the libpq to consume a replication slot.Very rarely, i would get a very strange error that i can't find any information on.It is not mentioned in any documentation and i don't know under what conditions it triggers.Any help is appreciated.The function i am calling is : `PQgetCopyData`the error code is `-1` and the error text is `invalid ordering of speculative insertion changes`",
"msg_date": "Fri, 14 Jan 2022 00:02:57 +0100",
"msg_from": "Petar Dambovaliev <petar.dambovaliev@nextroll.com>",
"msg_from_op": true,
"msg_subject": "Undocumented error"
},
{
"msg_contents": "Hi,\n\nOn 1/14/22 00:02, Petar Dambovaliev wrote:\n> Hello,\n> \n> I am using the libpq to consume a replication slot.\n> Very rarely, i would get a very strange error that i can't find any \n> information on.\n> It is not mentioned in any documentation and i don't know under what \n> conditions it triggers.\n> Any help is appreciated.\n> \n> The function i am calling is : `PQgetCopyData`\n> the error code is `-1` and the error text is `invalid ordering of \n> speculative insertion changes`\n\nWell, that's strange. I see the error message in the source code, but it \nkinda implies it's something that should not happen. So either there's a \nbug in how we WAL log this stuff, or maybe the decoding is wrong. In any \ncase it has to be very rare issue, because the code is like this since \n2018 and there have been 0 complaints so far.\n\nWhich Postgres version is this, exactly? Was the WAL generated by that \nsame version, or did you update/upgrade recently?\n\nAre you able to reproduce the issue? Do you know what did the \ntransaction that generated this WAL?\n\nIt'd be helpful to see the WAL that trigger this issue - presumably the \nerror message includes the LSN of the record at which this fails, so use \npg_waldump to dump that segment and show us a sufficiently large chunk \nfrom before that LSN. Not sure how much, because I don't know if you use \nsubtransactions, how long the transactions are, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 14 Jan 2022 15:30:31 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented error"
},
{
"msg_contents": "On Friday, January 14, 2022, Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> ,\n>\n> On 1/14/22 00:02, Petar Dambovaliev wrote:\n>\n>>\n>> the error code is `-1` and the error text is `invalid ordering of\n>> speculative insertion changes`\n>>\n>\n> Which Postgres version is this, exactly? Was the WAL generated by that\n> same version, or did you update/upgrade recently?\n\n\nThe OP failed to mention this is Aurora from AWS based off of 12.4 (we\nchatted on Discord about this).\n\nDavid J.\n\nOn Friday, January 14, 2022, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:,\n\nOn 1/14/22 00:02, Petar Dambovaliev wrote:\n\nthe error code is `-1` and the error text is `invalid ordering of speculative insertion changes`\n\nWhich Postgres version is this, exactly? Was the WAL generated by that same version, or did you update/upgrade recently?The OP failed to mention this is Aurora from AWS based off of 12.4 (we chatted on Discord about this). David J.",
"msg_date": "Fri, 14 Jan 2022 08:33:08 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented error"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at sort_inner_and_outer() I was rather confused what is\nstored in all_pathkeys, because the code does this:\n\n List *all_pathkeys;\n\n ...\n\n all_pathkeys = select_outer_pathkeys_for_merge(root,\n extra->mergeclause_list,\n joinrel);\n\n foreach(l, all_pathkeys)\n {\n List\t *front_pathkey = (List *) lfirst(l);\n ...\n\n /* Make a pathkey list with this guy first */\n if (l != list_head(all_pathkeys))\n outerkeys = lcons(front_pathkey,\n ...);\n else\n ...\n\nwhich seems to suggest all_pathkeys is a list of lists, because why else\nwould front_pathkey be a (List *). But that doesn't seem to be the case,\nfront_pathkey is actually a PathKey, not a List, as demonstrated by gdb:\n\n(gdb) p *front_pathkey\n$2 = {type = T_PathKey, length = 0, ...}\n\nMaybe it's some clever list-fu that I can't comprehend, but I guess it's\na bug present since ~2004. It's benign because we only ever pass the\nfront_pathkey to lcons() which does not really care.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 14 Jan 2022 01:48:33 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "minor bug in sort_inner_and_outer()"
}
] |
[
{
"msg_contents": "Hi,\n\nThe function CreateCheckPoint is specified as CreateCheckpoint in some\nof the code comments whereas in other places it is correctly\nmentioned. Attaching a tiny patch to use CreateCheckPoint consistently\nacross code comments.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Fri, 14 Jan 2022 08:55:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Consistently use the function name CreateCheckPoint instead of\n CreateCheckpoint in code comments"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 10:25 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> The function CreateCheckPoint is specified as CreateCheckpoint in some\n> of the code comments whereas in other places it is correctly\n> mentioned. Attaching a tiny patch to use CreateCheckPoint consistently\n> across code comments.\n>\n\nHeh, that's interesting, as I would have said that CreateCheckpoint is\nthe right casing vs CreateCheckPoint, but it looks like it has always\nbeen the other way (according to\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f0e37a85319e6c113ecd3303cddeb6edd5a6ac44).\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Thu, 13 Jan 2022 23:27:18 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Consistently use the function name CreateCheckPoint instead of\n CreateCheckpoint in code comments"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 8:55 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> The function CreateCheckPoint is specified as CreateCheckpoint in some\n> of the code comments whereas in other places it is correctly\n> mentioned. Attaching a tiny patch to use CreateCheckPoint consistently\n> across code comments.\n>\n\nLGTM. I'll take care of this unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 14 Jan 2022 20:05:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consistently use the function name CreateCheckPoint instead of\n CreateCheckpoint in code comments"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 8:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 14, 2022 at 8:55 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > The function CreateCheckPoint is specified as CreateCheckpoint in some\n> > of the code comments whereas in other places it is correctly\n> > mentioned. Attaching a tiny patch to use CreateCheckPoint consistently\n> > across code comments.\n> >\n>\n> LGTM. I'll take care of this unless someone thinks otherwise.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Jan 2022 08:45:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Consistently use the function name CreateCheckPoint instead of\n CreateCheckpoint in code comments"
}
]
[
{
"msg_contents": "Hello,\n\nThere are some places in the pg_stat_statements regress test where the \nbool result of a comparison between the number of rows obtained and \nwal_records generated by a query should be displayed.\nNow counting the number of wal_records for some query in \npg_stat_statements is done by calculating the difference of the global \npgWalUsage.wal_records counter.\nDuring query execution extra wal_records may appear that are not \nrelated to the query.\nThere are two reasons why this might happen:\n1) Owing to pruning of some page in the optional pruning \nfunction (heap_page_prune_opt()).\n2) When a new page is required for a new xid in clog and \nWriteZeroPageXlogRec() was called.\nIn both cases an extra wal record with zero xl_xid is generated, so the \nwal_records counter gives an incremented value for this query and the \npg_stat_statements test will fail.\n\nThis patch introduces an additional counter of wal records not related \nto the query being executed.\nDue to this counter pg_stat_statements finds out the number of wal \nrecords that are not relevant to the query and does not include them in \nthe per-query statistics.\nThis removes the possibility of the error described above.\n\nThere is a way to reproduce this error when the patch is not applied:\n1) start the server with \"shared_preload_libraries = 'pg_stat_statements'\" \nin postgresql.conf;\n2) replace the makefile in contrib/pg_stat_statements with the attached one;\n3) replace the test file \ncontrib/pg_stat_statements/sql/pg_stat_statements.sql and expected \nresults contrib/pg_stat_statements/expected/pg_stat_statements.out\nwith the shorter versions from the attached files;\n4) copy test.sh to contrib/pg_stat_statements and make sure that PGHOME \npoints to your server;\n5) cd to contrib/pg_stat_statements and execute:\nexport ITER=1 && while ./start.sh || break; export ITER=$(($ITER+1)); do \n:; done\n\nUsually 100-200 iterations will be enough.\nTo catch the error faster one can add 
the wal_records column to the SELECT\nin line 26 of contrib/pg_stat_statements/sql/pg_stat_statements.sql as \nfollows:\nSELECT query, calls, rows, wal_records,\nand replace the contrib/pg_stat_statements/expected/pg_stat_statements.out\nwith the attached pg_stat_statements-fast.out\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 14 Jan 2022 11:11:07 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Possible fails in pg_stat_statements test"
},
{
"msg_contents": "Hello!\n\nHere is the second version of the patch, rebased onto the current master. \nNo logical changes.\nAll other attached files from the previous letter still apply.\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 20 Mar 2022 20:09:07 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-14 11:11:07 +0300, Anton A. Melnikov wrote:\n> This patch introduces an additional counter of wal records not related to\n> the query being executed.\n\nThey're not unrelated though.\n\n\n> Due to this counter pg_stat_statement finds out the number of wal records\n> that are not relevant to the query and does not include them in the per\n> query statistic.\n\n-many. For read-only queries the generated WAL due to on-access pruning can be\na significant factor in performance. Removing that information makes\npg_stat_statements *less* useful.\n\n\n> This removes the possibility of the error described above.\n> \n> There is a way to reproduce this error when patch is not applied:\n> 1) start server with \"shared_preload_libraries = 'pg_stat_statements'\"\n> string in the postgresql.conf;\n> 2) replace makefile in contrib/pg_stat_statements with attached one;\n> 3) replace test file contrib/pg_stat_statements/sql/pg_stat_statements.sql\n> and expected results\n> contrib/pg_stat_statements/expected/pg_stat_statements.out\n> with shorter versions from attached files;\n> 4) copy test.sh to contrib/pg_stat_statements and make sure that PGHOME\n> point to your server;\n> 5) cd to contrib/pg_stat_statements and execute:\n> export ITER=1 && while ./start.sh || break; export ITER=$(($ITER+1)); do :;\n> done\n> \n> Usually 100-200 iterations will be enough.\n> To catch the error more faster one can add wal_records column to SELECT\n> in line 26 of contrib/pg_stat_statements/sql/pg_stat_statements.sql as\n> followes:\n> SELECT query, calls, rows, wal_records,\n> and replace the contrib/pg_stat_statements/expected/pg_stat_statements.out\n> with attached pg_stat_statements-fast.out\n\nCan the test failures be encountered without such an elaborate setup? If not,\nthen I don't really see why we need to do anything here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Mar 2022 10:36:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
},
{
"msg_contents": "Hello,\n\nthank you very much for your attention and your thoughts.\n\nOn 20.03.2022 20:36, Andres Freund wrote:\n>> This patch introduces an additional counter of wal records not related to\n>> the query being executed.\n> \n> They're not unrelated though.\n\nYes, I misformulated that here.\nIndeed there is a relation, but it seems to be of some other kind.\nIt would be nice to clarify the terminology.\nMaybe divide WAL records into two kinds:\n1) WAL records, the number of which depends on the given query itself. \n(say, a strong relation)\n2) WAL records, the number of which depends on the given query and on \nthe previous query history. (say, a weak relation)\n\nSo the wal_records counter modified in the patch belongs to the first \nkind, while the number of wal records due to on-access pruning and new \nclog page generation belongs to the second.\n\n> -many. For read-only queries the generated WAL due to on-access pruning can be\n> a significant factor in performance. Removing that information makes\n> pg_stat_statments *less* useful.\n\nA separate counter for the second type of records, say, \nextra_wal_records, will not only remove this disadvantage, but on the \ncontrary will provide additional information.\n\nThe next version of the patch with the additional counter is attached.\n\nReally, now it is clearly seen that sometimes\n> WAL due to on-access pruning can be a significant factor!\nAfter pgbench -c10 -t300:\npostgres=# SELECT substring(query for 30), wal_records, \nextra_wal_records FROM pg_stat_statements WHERE extra_wal_records != 0;\n\n substring | wal_records | extra_wal_records\n--------------------------------+-------------+-------------------\n UPDATE pgbench_tellers SET tba | 4557 | 15\n create table pgbench_history(t | 48 | 1\n create table pgbench_branches( | 40 | 1\n UPDATE pgbench_accounts SET ab | 5868 | 1567\n drop table if exists pgbench_a | 94 | 1\n UPDATE pgbench_branches SET bb | 5993 | 14\n SELECT abalance FROM pgbench_a | 0 | 7\n(7 
rows)\n\n> Can the test failures be encountered without such an elaborate setup? If not,\n> then I don't really see why we need to do anything here?\n\nThere was a real bug report from our test department. They run long \nrepetitive tests and sometimes hit this failure.\nSo I suppose there is a non-zero probability that such an error can occur \nin a one-shot test as well.\nThe sequence given in the first letter helps to catch this failure quickly.\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 30 Mar 2022 09:20:02 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 2:20 AM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n> > Can the test failures be encountered without such an elaborate setup? If not,\n> > then I don't really see why we need to do anything here?\n>\n> There was a real bug report from our test department. They do long time\n> repetitive tests and sometimes met this failure.\n> So i suppose there is a non-zero probability that such error can occur\n> in the one-shot test as well.\n> The sequence given in the first letter helps to catch this failure quickly.\n\nI don't think that the idea of \"extra\" WAL records is very principled.\nIt's pretty vague what \"extra\" means, and your definition seems to be\nbasically \"whatever would be needed to make this test case pass.\" I\nthink the problem is basically with the test case's idea that # of\nWAL records and # of table rows ought to be equal. I think that's just\nfalse. In general, we'd also have to worry about index insertions,\nwhich would provoke variable numbers of WAL records depending on\nwhether they cause a page split. And we'd have to worry about TOAST\ntable insertions, which could produce different numbers of records\ndepending on the size of the data, the configured block size and TOAST\nthreshold, and whether the TOAST table index incurs a page split. So\neven if we added a mechanism like what you propose here, we would only\nbe fixing this particular test case, not creating infrastructure of\nany general utility.\n\nIf it's true that this test case sometimes randomly fails, then we\nought to fix that somehow, maybe by just removing this particular\ncheck from the test case, or changing it to >=, or something like\nthat. But I don't think adding a new counter is the right idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Mar 2022 15:36:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
},
{
"msg_contents": "Hello!\n\nOn 30.03.2022 22:36, Robert Haas wrote:\n> I don't think that the idea of \"extra\" WAL records is very principled.\n> It's pretty vague what \"extra\" means, and your definition seems to be\n> basically \"whatever would be needed to make this test case pass.\" I\n> think the problem is basically with the test cases's idea that # of\n> WAL records and # of table rows ought to be equal. I think that's just\n> false. In general, we'd also have to worry about index insertions,\n> which would provoke variable numbers of WAL records depending on\n> whether they cause a page split. And we'd have to worry about TOAST\n> table insertions, which could produce different numbers of records\n> depending on the size of the data, the configured block size and TOAST\n> threshold, and whether the TOAST table index incurs a page split. \n\nThank you very much for this information. I really didn't take it into \naccount.\n\n> If it's true that this test case sometimes randomly fails, then we\n> ought to fix that somehow, maybe by just removing this particular\n> check from the test case, or changing it to >=, or something like\n> that. But I don't think adding a new counter is the right idea.\n\nIndeed. Then there is a very simple solution for this particular case, as the \nwal_records counter may only sometimes become greater, never less.\nThe corresponding patch is attached.\n\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 31 Mar 2022 18:08:01 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 31, 2022 at 06:08:01PM +0300, Anton A. Melnikov wrote:\n> Hello!\n> \n> On 30.03.2022 22:36, Robert Haas wrote:\n> > I don't think that the idea of \"extra\" WAL records is very principled.\n> > It's pretty vague what \"extra\" means, and your definition seems to be\n> > basically \"whatever would be needed to make this test case pass.\"\n\nI agree, and even if there were a better definition there probably isn't much to\nlearn from it.\n\n> I\n> > think the problem is basically with the test cases's idea that # of\n> > WAL records and # of table rows ought to be equal. I think that's just\n> > false. In general, we'd also have to worry about index insertions,\n> > which would provoke variable numbers of WAL records depending on\n> > whether they cause a page split. And we'd have to worry about TOAST\n> > table insertions, which could produce different numbers of records\n> > depending on the size of the data, the configured block size and TOAST\n> > threshold, and whether the TOAST table index incurs a page split.\n\nIndeed, we added this test as it was hitting only a few queries with small\nrows, which we thought would be stable, but that's apparently not the case. I\nthink the reason we never had any problem is that the buildfarm currently\ndoesn't run the pg_stat_statements regression test, as it's marked as\nNO_INSTALLCHECK. Other CI systems, like the one at pgpro, evidently have a different\napproach.\n\n> > If it's true that this test case sometimes randomly fails, then we\n> > ought to fix that somehow, maybe by just removing this particular\n> > check from the test case, or changing it to >=, or something like\n> > that. But I don't think adding a new counter is the right idea.\n> \n> Indeed. Then there is a very simple solution for this particular case as\n> wal_records counter may only sometime becomes greater but never less.\n> The corresponding patch is attached.\n\n+1 for this approach, and the patch looks good to me.\n\n\n",
"msg_date": "Fri, 1 Apr 2022 00:00:36 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 12:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > Indeed. Then there is a very simple solution for this particular case as\n> > wal_records counter may only sometime becomes greater but never less.\n> > The corresponding patch is attached.\n>\n> +1 for this approach, and the patch looks good to me.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Jul 2022 13:11:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
},
{
"msg_contents": "\n\nOn 06.07.2022 20:11, Robert Haas wrote:\n> On Thu, Mar 31, 2022 at 12:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>> Indeed. Then there is a very simple solution for this particular case as\n>>> wal_records counter may only sometime becomes greater but never less.\n>>> The corresponding patch is attached.\n>>\n>> +1 for this approach, and the patch looks good to me.\n> \n> Committed.\n> \n\nThanks a lot!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 9 Jul 2022 12:32:27 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: Possible fails in pg_stat_statements test"
}
]
[
{
"msg_contents": "Hi Hackers,\r\nI've been working with commitTS code recently and during my testing I \r\nfound a bug when writing commit timestamps for subxids. Normally for \r\nsub-transaction commit timestamps in TransactionTreeSetCommitTsData(),\r\nwe iterate through the subxids until we find one that is on the next commits\r\npage. In the code [1] this is the jth subxid. In SetXidCommitTsInPage()\r\nwe set all the commit timestamps up to but not including the jth\r\ntimestamp. The jth timestamp then becomes the head timestamp for the next\r\ngroup of timestamps on the next page. However, if the jth timestamp is \r\nthe last subxid (put another way, if the LAST subxid is the FIRST\r\ntimestamp on a new page), then the code will break on line 188 [2] and\r\nthe timestamp for the last subxid will never be written.\r\n\r\nThis can be reproduced by enabling track_commit_timestamp and running \r\na simple loop that has a single sub-transaction like:\r\n\r\n psql -t -c 'create table t (id int);'\r\n\r\n for i in {1..500}\r\n do\r\n psql -t -c 'begin; insert into t select 1; savepoint a; insert into t select 2;commit'\r\n done\r\n\r\nThen querying for NULL commitTS in that table will return that there are \r\nunwritten timestamps:\r\n\r\n postgres=# select count(*) from t where pg_xact_commit_timestamp(t.xmin) is NULL;\r\n count\r\n\r\n 1\r\n (1 row)\r\n\r\nThe fix for this is very simple:\r\n\r\n\r\n /* if we wrote out all subxids, we're done. */\r\n - if (j + 1 >= nsubxids)\r\n + if (j >= nsubxids)\r\n break;\r\n \r\n[1] https://github.com/postgres/postgres/blame/master/src/backend/access/transam/commit_ts.c#L178\r\n[2] https://github.com/postgres/postgres/blame/master/src/backend/access/transam/commit_ts.c#L188",
"msg_date": "Fri, 14 Jan 2022 22:49:59 +0000",
"msg_from": "\"Kingsborough, Alex\" <kingsboa@amazon.com>",
"msg_from_op": true,
"msg_subject": "Null commitTS bug"
},
{
"msg_contents": "At Fri, 14 Jan 2022 22:49:59 +0000, \"Kingsborough, Alex\" <kingsboa@amazon.com> wrote in \n> The fix for this is very simple\n> \n> \n> /* if we wrote out all subxids, we're done. */\n> - if (j + 1 >= nsubxids)\n> + if (j >= nsubxids)\n> break;\n\nIt looks like a thinko and the fix is correct. (It's a matter of taste\nchoosing between it and \"j == nsubxids\").\n\nI found some confusing lines nearby, but perhaps they don't need a fix,\nconsidering back-patching conflicts?\n\n> for (i = 0, headxid = xid;;)\n..\n> i += j - i + 1;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 17 Jan 2022 11:17:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 11:17:24AM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 14 Jan 2022 22:49:59 +0000, \"Kingsborough, Alex\" <kingsboa@amazon.com> wrote in \n>> The fix for this is very simple\n>> \n>> \n>> /* if we wrote out all subxids, we're done. */\n>> - if (j + 1 >= nsubxids)\n>> + if (j >= nsubxids)\n>> break;\n> \n> It looks like a thinko and the fix is correct. (It's a matter of taste\n> choosing between it and \"j == nsubxids\").\n\nIt took me some time to understand the problem from the current code,\nbut I'd like to think that the suggested fix is less confusing.\n\n> I found some confusing lines around but they need not a fix\n> considering back-patching conflict?\n> \n>> for (i = 0, headxid = xid;;)\n> ..\n>> i += j - i + 1;\n\nI am not sure. Do you have anything specific in mind? Perhaps\nsomething that would help in making the code logic easier to follow?\n--\nMichael",
"msg_date": "Mon, 17 Jan 2022 12:45:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Jan 17, 2022 at 11:17:24AM +0900, Kyotaro Horiguchi wrote:\n>> I found some confusing lines around but they need not a fix\n>> considering back-patching conflict?\n>>> i += j - i + 1;\n\n> I am not sure. Do you have anything specific in mind? Perhaps\n> something that would help in making the code logic easier to follow?\n\nIsn't that a very bad way to write \"i = j + 1\"?\n\nI agree with Horiguchi-san that\n\n\tfor (i = 0, headxid = xid;;)\n\nis not great style either. A for-loop ought to be used to control the\nnumber of iterations, not as a confusing variable initialization.\nI think more idiomatic would be\n\n\theadxid = xid;\n\ti = 0;\n\tfor (;;)\n\nwhich makes it clear that this is not where the loop control is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jan 2022 23:01:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
},
{
"msg_contents": "On Sun, Jan 16, 2022 at 11:01:25PM -0500, Tom Lane wrote:\n> Isn't that a very bad way to write \"i = j + 1\"?\n> \n> I agree with Horiguchi-san that\n> \tfor (i = 0, headxid = xid;;)\n\nOkay. Horiguchi-san, would you like to write a patch?\n--\nMichael",
"msg_date": "Tue, 18 Jan 2022 10:43:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
},
{
"msg_contents": "At Tue, 18 Jan 2022 10:43:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sun, Jan 16, 2022 at 11:01:25PM -0500, Tom Lane wrote:\n> > Isn't that a very bad way to write \"i = j + 1\"?\n> > \n> > I agree with Horiguchi-san that\n> > \tfor (i = 0, headxid = xid;;)\n> \n> Okay. Horiguchi-san, would you like to write a patch?\n\nYes, I will.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:48:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
},
{
"msg_contents": "At Tue, 18 Jan 2022 13:48:11 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 18 Jan 2022 10:43:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Sun, Jan 16, 2022 at 11:01:25PM -0500, Tom Lane wrote:\n> > > Isn't that a very bad way to write \"i = j + 1\"?\n> > > \n> > > I agree with Horiguchi-san that\n> > > \tfor (i = 0, headxid = xid;;)\n> > \n> > Okay. Horiguchi-san, would you like to write a patch?\n> \n> Yes, I will.\n\nHere it is. I think this is a separate issue from the actual\nbug. This is applicable at least back to 9.6 and I think this should\nbe applied back to all supported versions to avoid future backpatch\nconflicts.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n From 9211fa61513dcdbca273656454395a3dcf3ee4e7 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 20 Jan 2022 10:16:48 +0900\nSubject: [PATCH] Improve confusing code in TransactionTreeSetCommitTsData\n\nTransactionTreeSetCommitTsData has a bit confusing use of the +=\noperator. Simplifying it makes the code easier to follow. In the\nsame function for-loop initializes only non-controlling variables,\nwhich is not great style. They ought to be initialized outside the\nloop.\n---\n src/backend/access/transam/commit_ts.c | 7 +++++--\n 1 file changed, 5 insertions(+), 2 deletions(-)\n\ndiff --git a/src/backend/access/transam/commit_ts.c b/src/backend/access/transam/commit_ts.c\nindex 659109f8d4..88eac10456 100644\n--- a/src/backend/access/transam/commit_ts.c\n+++ b/src/backend/access/transam/commit_ts.c\n@@ -168,7 +168,10 @@ TransactionTreeSetCommitTsData(TransactionId xid, int nsubxids,\n \t * subxid not on the previous page as head. 
This way, we only have to\n \t * lock/modify each SLRU page once.\n \t */\n-\tfor (i = 0, headxid = xid;;)\n+\theadxid = xid;\n+\ti = 0;\n+\n+\tfor (;;)\n \t{\n \t\tint\t\t\tpageno = TransactionIdToCTsPage(headxid);\n \t\tint\t\t\tj;\n@@ -192,7 +195,7 @@ TransactionTreeSetCommitTsData(TransactionId xid, int nsubxids,\n \t\t * just wrote.\n \t\t */\n \t\theadxid = subxids[j];\n-\t\ti += j - i + 1;\n+\t\ti = j + 1;\n \t}\n \n \t/* update the cached value in shared memory */\n-- \n2.27.0",
"msg_date": "Thu, 20 Jan 2022 12:00:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 12:00:56PM +0900, Kyotaro Horiguchi wrote:\n> This is that. I think this is a separate issue from the actual\n> bug. This is applicable at least back to 9.6 and I think this should\n> be applied back to all supported versions to avoid future backptach\n> conflicts.\n\nThanks. I'll check that tomorrow.\n--\nMichael",
"msg_date": "Thu, 20 Jan 2022 19:50:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 12:00:56PM +0900, Kyotaro Horiguchi wrote:\n> This is that. I think this is a separate issue from the actual\n> bug. This is applicable at least back to 9.6 and I think this should\n> be applied back to all supported versions to avoid future backptach\n> conflicts.\n\nLooks fine enough. I have grouped that with Alex's fix and\nbackpatched the whole down to v10. Thanks!\n--\nMichael",
"msg_date": "Fri, 21 Jan 2022 15:21:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Null commitTS bug"
}
]
[
{
"msg_contents": "I'd recently been thinking about monitoring how many bytes a logical\nslot was behind and realized that's not really possible to compute\ncurrently. That's easy enough with a physical slot because we can get\nthe current WAL LSN easily enough and the slot exposes the current LSN\npositions of the slot. However for logical slots that naive\ncomputation isn't quite right. The logical slot can't flush past the\nlast commit, so even if there's 100s of megabytes of unflushed WAL on\nthe slot there may be zero lag (in terms of what's possible to\nprocess).\n\nI've attached a simple patch (sans tests and documentation) to get\nfeedback early. After poking around this afternoon it seemed to me\nthat the simplest approach was to hook into the commit timestamps\ninfrastructure and store the commit's XLogRecPtr in the cache of the\nmost recent value (but of course don't write it out to disk). That has the\ndownside of making this feature dependent on \"track_commit_timestamps\n= on\", but that seems reasonable:\n\n1. Getting the xid of the last commit is similarly dependent on commit\ntimestamps infrastructure.\n2. It's a simple place to hook into and avoids new shared data and locking.\n\nThoughts?\n\nThanks,\nJames Coleman",
"msg_date": "Fri, 14 Jan 2022 19:42:27 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 7:42 PM James Coleman <jtc331@gmail.com> wrote:\n> I've attached a simple patch (sans tests and documentation) to get\n> feedback early. After poking around this afternoon it seemed to me\n> that the simplest approach was to hook into the commit timestamps\n> infrastructure and store the commit's XLogRecPtr in the cache of the\n> most recent value (but of course don't write it out to disk). That the\n> downside of making this feature dependent on \"track_commit_timestamps\n> = on\", but that seems reasonable:\n>\n> 1. Getting the xid of the last commit is similarly dependent on commit\n> timestamps infrastructure.\n> 2. It's a simple place to hook into and avoids new shared data and locking.\n>\n> Thoughts?\n\nIt doesn't seem great to me. It's making commit_ts do something other\nthan commit timestamps, which looks kind of ugly.\n\nIn general, I'm concerned about the cost of doing something like this.\nExtra shared memory updates as part of the process of committing a\ntransaction are not (and can't be made) free.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Jan 2022 16:20:11 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On 2022-Jan-14, James Coleman wrote:\n\n> The logical slot can't flush past the\n> last commit, so even if there's 100s of megabytes of unflushed WAL on\n> the slot there may be zero lag (in terms of what's possible to\n> process).\n>\n> I've attached a simple patch (sans tests and documentation) to get\n> feedback early. After poking around this afternoon it seemed to me\n> that the simplest approach was to hook into the commit timestamps\n> infrastructure and store the commit's XLogRecPtr in the cache of the\n> most recent value (but of course don't write it out to disk).\n\nMaybe it would work to have a single LSN in shared memory, as an atomic\nvariable, which uses monotonic advance[1] to be updated. Whether this is\nupdated or not would depend on a new GUC, maybe track_latest_commit_lsn.\nCausing performance pain during transaction commit is not great, but at\nleast this way it shouldn't be *too* large a hit.\n\n[1] part of a large patch at\nhttps://www.postgresql.org/message-id/202111222156.xmo2yji5ifi2%40alvherre.pgsql\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n",
"msg_date": "Mon, 17 Jan 2022 18:34:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 4:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jan-14, James Coleman wrote:\n> > The logical slot can't flush past the\n> > last commit, so even if there's 100s of megabytes of unflushed WAL on\n> > the slot there may be zero lag (in terms of what's possible to\n> > process).\n> >\n> > I've attached a simple patch (sans tests and documentation) to get\n> > feedback early. After poking around this afternoon it seemed to me\n> > that the simplest approach was to hook into the commit timestamps\n> > infrastructure and store the commit's XLogRecPtr in the cache of the\n> > most recent value (but of course don't write it out to disk).\n>\n> Maybe it would work to have a single LSN in shared memory, as an atomic\n> variable, which uses monotonic advance[1] to be updated. Whether this is\n> updated or not would depend on a new GUC, maybe track_latest_commit_lsn.\n> Causing performance pain during transaction commit is not great, but at\n> least this way it shouldn't be *too* a large hit.\n\nI don't know if it would or not, but it's such a hot path that I find\nthe idea a bit worrisome. Atomics aren't free - especially inside of a\nloop.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Jan 2022 16:55:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 4:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jan 14, 2022 at 7:42 PM James Coleman <jtc331@gmail.com> wrote:\n> > I've attached a simple patch (sans tests and documentation) to get\n> > feedback early. After poking around this afternoon it seemed to me\n> > that the simplest approach was to hook into the commit timestamps\n> > infrastructure and store the commit's XLogRecPtr in the cache of the\n> > most recent value (but of course don't write it out to disk). That the\n> > downside of making this feature dependent on \"track_commit_timestamps\n> > = on\", but that seems reasonable:\n> >\n> > 1. Getting the xid of the last commit is similarly dependent on commit\n> > timestamps infrastructure.\n> > 2. It's a simple place to hook into and avoids new shared data and locking.\n> >\n> > Thoughts?\n>\n> It doesn't seem great to me. It's making commit_ts do something other\n> than commit timestamps, which looks kind of ugly.\n\nI wondered about that, but commit_ts already does more than commit\ntimestamps by recording the xid of the last commit.\n\nFor that matter, keeping a cache of last commit metadata in shared\nmemory is arguably not obviously implied by \"track_commit_timestamps\",\nwhich leads to the below...\n\n> In general, I'm concerned about the cost of doing something like this.\n> Extra shared memory updates as part of the process of committing a\n> transaction are not (and can't be made) free.\n\nIt seems to me that to the degree there's a hot path concern here we\nought to separate out the last commit metadata caching from the\n\"track_commit_timestamps\" feature (at least in terms of how it's\ncontrolled by GUCs). If that were done we could also, in theory, allow\ncontrolling which items are tracked to reduce hot path cost if only a\nsubset is needed. 
For that matter it'd also allow turning on this\nmetadata caching without enabling the commit timestamp storage.\n\nI'm curious, though: I realize it's in the hot path, and I realize\nthat there's an accretive cost to even small features, but given we're\nalready paying the lock cost and updating memory in what is presumably\nthe same cache line, would you expect this cost to be clearly\nmeasurable?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Mon, 17 Jan 2022 20:39:05 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 4:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jan-14, James Coleman wrote:\n>\n> > The logical slot can't flush past the\n> > last commit, so even if there's 100s of megabytes of unflushed WAL on\n> > the slot there may be zero lag (in terms of what's possible to\n> > process).\n> >\n> > I've attached a simple patch (sans tests and documentation) to get\n> > feedback early. After poking around this afternoon it seemed to me\n> > that the simplest approach was to hook into the commit timestamps\n> > infrastructure and store the commit's XLogRecPtr in the cache of the\n> > most recent value (but of course don't write it out to disk).\n>\n> Maybe it would work to have a single LSN in shared memory, as an atomic\n> variable, which uses monotonic advance[1] to be updated. Whether this is\n> updated or not would depend on a new GUC, maybe track_latest_commit_lsn.\n> Causing performance pain during transaction commit is not great, but at\n> least this way it shouldn't be *too* a large hit.\n>\n> [1] part of a large patch at\n> https://www.postgresql.org/message-id/202111222156.xmo2yji5ifi2%40alvherre.pgsql\n\nI'd be happy to make it a separate GUC, though it seems adding an\nadditional atomic access is worse (assuming we can convince ourselves\nputting this into the commit timestamps infrastructure is acceptable)\ngiven here we're already under a lock.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Mon, 17 Jan 2022 20:41:10 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On 2022-Jan-17, Robert Haas wrote:\n\n> On Mon, Jan 17, 2022 at 4:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Maybe it would work to have a single LSN in shared memory, as an atomic\n> > variable, which uses monotonic advance[1] to be updated. Whether this is\n> > updated or not would depend on a new GUC, maybe track_latest_commit_lsn.\n> > Causing performance pain during transaction commit is not great, but at\n> > least this way it shouldn't be *too* a large hit.\n> \n> I don't know if it would or not, but it's such a hot path that I find\n> the idea a bit worrisome. Atomics aren't free - especially inside of a\n> loop.\n\nI think the aspect to worry about the most is what happens when the\nfeature is disabled. The cost for that should be just one comparison,\nwhich I think can be optimized by the compiler fairly well. That should\nbe cheap enough. People who enable it would have to pay the cost of the\natomics, which is of course much higher.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 18 Jan 2022 11:07:19 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 8:39 PM James Coleman <jtc331@gmail.com> wrote:\n> I wondered about that, but commit_ts already does more than commit\n> timestamps by recording the xid of the last commit.\n\nWell, if you're maintaining an SLRU, you do kind of need to know where\nthe leading and lagging ends are.\n\n> For that matter, keeping a cache of last commit metadata in shared\n> memory is arguably not obviously implied by \"track_commit_timestamps\",\n> which leads to the below...\n\nI suppose that's true in the strictest sense, but tracking information\ndoes seem to imply having a way to look it up.\n\n> I'm curious, though: I realize it's in the hot path, and I realize\n> that there's an accretive cost to even small features, but given we're\n> already paying the lock cost and updating memory in what is presumably\n> the same cache line, would you expect this cost to be clearly\n> measurable?\n\nIf you'd asked me ten years ago, I would have said \"no, can't matter,\"\nbut Andres has subsequently demonstrated that a lot of things that I\nthought were well-optimized were actually able to be optimized a lot\nbetter than I thought possible, and some of them were in this area.\nStill, I think it's unlikely that your patch would have a measurable\neffect for the reasons that you state. Wouldn't hurt to test, though.\nAs far as performance goes, I'm more concerned about Alvaro's patch.\nMy concern with this one is more around whether it's too much of a\nkludge.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Jan 2022 09:25:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 9:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 17, 2022 at 8:39 PM James Coleman <jtc331@gmail.com> wrote:\n> > I wondered about that, but commit_ts already does more than commit\n> > timestamps by recording the xid of the last commit.\n>\n> Well, if you're maintaining an SLRU, you do kind of need to know where\n> the leading and lagging ends are.\n\nAs far as I can tell the data in commitTsShared is used purely as an\noptimization for the path looking up the timestamp for an arbitrary\nxid when that xid happens to be the most recent one so that we don't\nhave to look up in the SLRU for that specific case. Maybe I'm missing\nsomething else you're seeing?\n\n> > For that matter, keeping a cache of last commit metadata in shared\n> > memory is arguably not obviously implied by \"track_commit_timestamps\",\n> > which leads to the below...\n>\n> I suppose that's true in the strictest sense, but tracking information\n> does seem to imply having a way to look it up.\n\nLooking up for an arbitrary commit, sure, (that's how I understand the\ncommit timestamps feature anyway) but it seems to me that the \"most\nrecent' is distinct. 
Reading the code it seems the only usage (besides\nthe boolean activation status also stored there) is in\nTransactionIdGetCommitTsData, and the only consumers of that in core\nappear to be the SQL callable functions to get the latest commit info.\nIt is in commit_ts.h though, so I'm guessing someone is using this\nexternally (and maybe that's why the feature has the shape it does).\n\n> > I'm curious, though: I realize it's in the hot path, and I realize\n> > that there's an accretive cost to even small features, but given we're\n> > already paying the lock cost and updating memory in what is presumably\n> > the same cache line, would you expect this cost to be clearly\n> > measurable?\n>\n> If you'd asked me ten years ago, I would have said \"no, can't matter,\"\n> but Andres has subsequently demonstrated that a lot of things that I\n> thought were well-optimized were actually able to be optimized a lot\n> better than I thought possible, and some of them were in this area.\n> Still, I think it's unlikely that your patch would have a measurable\n> effect for the reasons that you state. Wouldn't hurt to test, though.\n\nIf we get past your other main concern I'd be happy to spin something\nup to prove that out.\n\n> As far as performance goes, I'm more concerned about Alvaro's patch.\n> My concern with this one is more around whether it's too much of a\n> kludge.\n\nAs far as the kludginess factor: do you think additional GUCs would\nhelp clarify that? And/or are the earlier comments on the right path?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 18 Jan 2022 09:47:44 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 9:47 AM James Coleman <jtc331@gmail.com> wrote:\n> > Well, if you're maintaining an SLRU, you do kind of need to know where\n> > the leading and lagging ends are.\n>\n> As far as I can tell the data in commitTsShared is used purely as an\n> optimization for the path looking up the timestamp for an arbitrary\n> xid when that xid happens to be the most recent one so that we don't\n> have to look up in the SLRU for that specific case. Maybe I'm missing\n> something else you're seeing?\n\nI wasn't looking at the code, but that use also seems closer to the\npurpose of committs than your proposal.\n\n> > As far as performance goes, I'm more concerned about Alvaro's patch.\n> > My concern with this one is more around whether it's too much of a\n> > kludge.\n>\n> As far as the kludginess factor: do you think additional GUCs would\n> help clarify that? And/or are the earlier comments on the right path?\n\nTo be honest, I'm sort of keen to hear what other people think. I'm\nshooting from the hip a little bit here...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Jan 2022 09:58:21 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On 2022-Jan-17, James Coleman wrote:\n\n> I'd be happy to make it a separate GUC, though it seems adding an\n> additional atomic access is worse (assuming we can convince ourselves\n> putting this into the commit timestamps infrastructure is acceptable)\n> given here we're already under a lock.\n\nI was thinking it'd not be under any locks ... and I don't think it\nbelongs under commit timestamps either.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nThou shalt check the array bounds of all strings (indeed, all arrays), for\nsurely where thou typest \"foo\" someone someday shall type\n\"supercalifragilisticexpialidocious\" (5th Commandment for C programmers)\n\n\n",
"msg_date": "Tue, 18 Jan 2022 14:50:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 12:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jan-17, James Coleman wrote:\n>\n> > I'd be happy to make it a separate GUC, though it seems adding an\n> > additional atomic access is worse (assuming we can convince ourselves\n> > putting this into the commit timestamps infrastructure is acceptable)\n> > given here we're already under a lock.\n>\n> I was thinking it'd not be under any locks ... and I don't think it\n> belongs under commit timestamps either.\n\nI'm not sure if you saw the other side of this thread with Robert, but\nmy argument is basically that the commit_ts infrastructure already\ncurrently does more than just record commit timestamps for future use,\nit also includes what looks to me like a more general \"last commit\nmetadata\" facility (which is not actually at all necessary to the\nstoring of commit timestamps). It might make sense to refactor this\nsomewhat so that that's more obvious, but I'd like to know if it looks\nthat way to you as well, and, if so, does that make it make more sense\nto rely on the existing infrastructure rather than inventing a new\nfacility?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:41:51 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On 2022-Jan-18, James Coleman wrote:\n\n> Reading the code it seems the only usage (besides\n> the boolean activation status also stored there) is in\n> TransactionIdGetCommitTsData, and the only consumers of that in core\n> appear to be the SQL callable functions to get the latest commit info.\n> It is in commit_ts.h though, so I'm guessing someone is using this\n> externally (and maybe that's why the feature has the shape it does).\n\nLogical replication is the intended consumer of that info, for the\npurposes of conflict handling. I suppose pglogical uses it, but I don't\nknow that code myself.\n\n[ ... greps ... ]\n\nYeah, that function is called from pglogical.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 18 Jan 2022 15:52:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 1:52 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jan-18, James Coleman wrote:\n>\n> > Reading the code it seems the only usage (besides\n> > the boolean activation status also stored there) is in\n> > TransactionIdGetCommitTsData, and the only consumers of that in core\n> > appear to be the SQL callable functions to get the latest commit info.\n> > It is in commit_ts.h though, so I'm guessing someone is using this\n> > externally (and maybe that's why the feature has the shape it does).\n>\n> Logical replication is the intended consumer of that info, for the\n> purposes of conflict handling. I suppose pglogical uses it, but I don't\n> know that code myself.\n>\n> [ ... greps ... ]\n>\n> Yeah, that function is called from pglogical.\n\nThat's interesting, because my use case for the lsn is also logical\nreplication (monitoring).\n\nJames Coleman\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:55:56 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-17 18:34:16 -0300, Alvaro Herrera wrote:\n> Maybe it would work to have a single LSN in shared memory, as an atomic\n> variable, which uses monotonic advance[1] to be updated.\n\nThat could be a reasonable approach.\n\n\n> Whether this is updated or not would depend on a new GUC, maybe\n> track_latest_commit_lsn. Causing performance pain during transaction commit\n> is not great, but at least this way it shouldn't be *too* a large hit.\n\nWhat kind of consistency are we expecting from this new bit of information?\nDoes it have to be perfectly aligned with visibility? If so, it'd need to\nhappen in ProcArrayEndTransaction(), with ProcArrayLock held - which I'd\nconsider a complete no-go, that's way too contended.\n\nIf it's \"just\" another piece of work happening \"sometime around\" transaction\ncommit, it'd be a bit less concerning.\n\n\nI wonder if a very different approach could make sense here. Presumably this\nwouldn't need to be queried at a very high frequency, right? If so, what about\nstoring the latest commit LSN for each backend in PGPROC? That could be\nmaintained without a lock/atomics, and should be just about free.\npg_last_committed_xact() then would have to iterate over all PGPROCs to\ncomplete the LSN, but that's not too bad for an operation like that. We'd also\nneed to maintain a value for all disconnected backends, but that's also not a hot\npath.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:32:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-17 18:34:16 -0300, Alvaro Herrera wrote:\n> > Maybe it would work to have a single LSN in shared memory, as an atomic\n> > variable, which uses monotonic advance[1] to be updated.\n>\n> That could be a reasonable approach.\n>\n>\n> > Whether this is updated or not would depend on a new GUC, maybe\n> > track_latest_commit_lsn. Causing performance pain during transaction commit\n> > is not great, but at least this way it shouldn't be *too* a large hit.\n>\n> What kind of consistency are we expecting from this new bit of information?\n> Does it have to be perfectly aligned with visibility? If so, it'd need to\n> happen in ProcArrayEndTransaction(), with ProcArrayLock held - which I'd\n> consider a complete no-go, that's way too contended.\n\nMy use case wouldn't require perfect alignment with visibility (I'm\nnot sure about the use case Alvaro mentioned in pglogical).\n\n> If it's \"just\" another piece of work happening \"sometime around\" transaction\n> commit, it'd be a bit less concerning.\n\nThat raises the interesting question of where the existing commit_ts\ninfrastructure and last commit caching falls into that range.\n\n> I wonder if a very different approach could make sense here. Presumably this\n> wouldn't need to be queried at a very high frequency, right? If so, what about\n> storing the latest commit LSN for each backend in PGPROC? That could be\n> maintained without a lock/atomics, and should be just about free.\n> pg_last_committed_xact() then would have to iterate over all PGPROCs to\n> complete the LSN, but that's not too bad for an operation like that. 
We'd also\n> need to maintain a value for all disconnected backends, but that's also not a hot\n> path.\n\nI expect most monitoring setups default to around something like\nchecking anywhere from every single digit seconds to minutes.\n\nIf I read between the lines I imagine you'd see even e.g. every 2s as\nnot that big of a deal here, right?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 18 Jan 2022 16:40:25 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On 2022-01-18 16:40:25 -0500, James Coleman wrote:\n> If I read between the lines I imagine you'd see even e.g. every 2s as\n> not that big of a deal here, right?\n\nRight. Even every 0.2s wouldn't be a problem.\n\n\n",
"msg_date": "Tue, 18 Jan 2022 14:32:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if a very different approach could make sense here. Presumably this\n> wouldn't need to be queried at a very high frequency, right? If so, what about\n> storing the latest commit LSN for each backend in PGPROC? That could be\n> maintained without a lock/atomics, and should be just about free.\n> pg_last_committed_xact() then would have to iterate over all PGPROCs to\n> complete the LSN, but that's not too bad for an operation like that. We'd also\n> need to maintain a value for all disconnected backends, but that's also not a hot\n> path.\n\nOne other question on this: if we went with this would you expect a\nnew function to parallel pg_last_committed_xact()? Or allow the xid\nand lsn in the return of pg_last_committed_xact() potentially not to\nmatch (of course xid might also not be present if\ntrack_commit_timestamps isn't on)? Or would you expect the current xid\nand timestamp use the new infrastructure also?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 18 Jan 2022 18:31:42 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 18:31:42 -0500, James Coleman wrote:\n> One other question on this: if we went with this would you expect a\n> new function to parallel pg_last_committed_xact()?\n\nI don't think I have an opinion the user interface aspect.\n\n\n> Or allow the xid and lsn in the return of pg_last_committed_xact()\n> potentially not to match (of course xid might also not be present if\n> track_commit_timestamps isn't on)? Or would you expect the current xid and\n> timestamp use the new infrastructure also?\n\nWhen you say \"current xid\", what do you mean?\n\nI think it might make sense to use the new approach for all of these.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jan 2022 17:05:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 8:05 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-18 18:31:42 -0500, James Coleman wrote:\n> > One other question on this: if we went with this would you expect a\n> > new function to parallel pg_last_committed_xact()?\n>\n> I don't think I have an opinion the user interface aspect.\n>\n>\n> > Or allow the xid and lsn in the return of pg_last_committed_xact()\n> > potentially not to match (of course xid might also not be present if\n> > track_commit_timestamps isn't on)? Or would you expect the current xid and\n> > timestamp use the new infrastructure also?\n>\n> When you say \"current xid\", what do you mean?\n\nI mean the existing commitTsShared->xidLastCommit field which is\nreturned by pg_last_committed_xact().\n\n> I think it might make sense to use the new approach for all of these.\n\nI think that would mean we could potentially remove commitTsShared,\nbut before doing so I'd like to know if that'd break existing\nconsumers.\n\nAlvaro: You'd mentioned a use case in pglogical; if we moved the\nxidLastCommit (and possibly even the cached last timestamp) out of\ncommit_ts.c (meaning it'd also no longer be under the commit ts lock)\nwould that be a problem for the current use (whether in lock safety or\nin performance)?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 18 Jan 2022 20:32:40 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> I wonder if a very different approach could make sense here. Presumably this\n> wouldn't need to be queried at a very high frequency, right? If so, what about\n> storing the latest commit LSN for each backend in PGPROC? That could be\n> maintained without a lock/atomics, and should be just about free.\n> pg_last_committed_xact() then would have to iterate over all PGPROCs to\n> complete the LSN, but that's not too bad for an operation like that. We'd also\n> need to maintain a value for all disconnected backends, but that's also not a hot\n> path.\n\nIs something roughly like the attached what you'd envisioned? I\nwouldn't expect the final implementation to be in commit_ts.c, but I\nleft it there for expediency's sake in demonstrating the idea since\npg_last_committed_xact() currently finds its home there.\n\nI think we need a shared ProcArrayLock to read the array, correct? We\nalso need to do the global updating under lock, but given it's when a\nproc is removed, that shouldn't be a performance issue if I'm\nfollowing what you are saying.\n\nThanks,\nJames Coleman",
"msg_date": "Tue, 18 Jan 2022 20:58:01 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 20:58:01 -0500, James Coleman wrote:\n> Is something roughly like the attached what you'd envisioned?\n\nRoughly, yea.\n\n\n> I think we need a shared ProcArrayLock to read the array, correct?\n\nYou could perhaps get away without it, but it'd come at the price of needing\nto look at all procs, rather than the connected procs. And I don't think it's\nneeded.\n\n\n> We also need to do the global updating under lock, but given it's when a\n> proc is removed, that shouldn't be a performance issue if I'm following what\n> you are saying.\n\nYup.\n\n\n> +\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n> +\tlsn = ShmemVariableCache->finishedProcsLastCommitLSN;\n> +\tfor (index = 0; index < ProcGlobal->allProcCount; index++)\n> +\t{\n> +\t\tXLogRecPtr procLSN = ProcGlobal->allProcs[index].lastCommitLSN;\n> +\t\tif (procLSN > lsn)\n> +\t\t\tlsn = procLSN;\n> +\t}\n> +\tLWLockRelease(ProcArrayLock);\n\nI think it'd be better to go through the pgprocnos infrastructure, so that\nonly connected procs need to be checked.\n\n LWLockAcquire(ProcArrayLock, LW_SHARED);\n for (i = 0; i < arrayP->numProcs; i++)\n {\n int pgprocno = arrayP->pgprocnos[i];\n PGPROC *proc = &allProcs[pgprocno];\n\n if (proc->lastCommitLSN > lsn)\n lsn =proc->lastCommitLSN;\n }\n\n\n> diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\n> index a58888f9e9..2a026b0844 100644\n> --- a/src/include/storage/proc.h\n> +++ b/src/include/storage/proc.h\n> @@ -258,6 +258,11 @@ struct PGPROC\n> \tPGPROC\t *lockGroupLeader;\t/* lock group leader, if I'm a member */\n> \tdlist_head\tlockGroupMembers;\t/* list of members, if I'm a leader */\n> \tdlist_node\tlockGroupLink;\t/* my member link, if I'm a member */\n> +\n> +\t/*\n> +\t * Last transaction metadata.\n> +\t */\n> +\tXLogRecPtr\tlastCommitLSN;\t\t/* cache of last committed LSN */\n> };\n\nWe do not rely on 64bit integers to be read/written atomically, just 32bit\nones. 
To make this work for older platforms you'd have to use a\npg_atomic_uint64. On new-ish platforms pg_atomic_read_u64/pg_atomic_write_u64\nend up as plain read/writes, but on older ones they'd do the necessarily\nlocking to make that safe...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jan 2022 18:19:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 9:19 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > + LWLockAcquire(ProcArrayLock, LW_SHARED);\n> > + lsn = ShmemVariableCache->finishedProcsLastCommitLSN;\n> > + for (index = 0; index < ProcGlobal->allProcCount; index++)\n> > + {\n> > + XLogRecPtr procLSN = ProcGlobal->allProcs[index].lastCommitLSN;\n> > + if (procLSN > lsn)\n> > + lsn = procLSN;\n> > + }\n> > + LWLockRelease(ProcArrayLock);\n>\n> I think it'd be better to go through the pgprocnos infrastructure, so that\n> only connected procs need to be checked.\n>\n> LWLockAcquire(ProcArrayLock, LW_SHARED);\n> for (i = 0; i < arrayP->numProcs; i++)\n> {\n> int pgprocno = arrayP->pgprocnos[i];\n> PGPROC *proc = &allProcs[pgprocno];\n>\n> if (proc->lastCommitLSN > lsn)\n> lsn =proc->lastCommitLSN;\n> }\n>\n>\n> > diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\n> > index a58888f9e9..2a026b0844 100644\n> > --- a/src/include/storage/proc.h\n> > +++ b/src/include/storage/proc.h\n> > @@ -258,6 +258,11 @@ struct PGPROC\n> > PGPROC *lockGroupLeader; /* lock group leader, if I'm a member */\n> > dlist_head lockGroupMembers; /* list of members, if I'm a leader */\n> > dlist_node lockGroupLink; /* my member link, if I'm a member */\n> > +\n> > + /*\n> > + * Last transaction metadata.\n> > + */\n> > + XLogRecPtr lastCommitLSN; /* cache of last committed LSN */\n> > };\n>\n> We do not rely on 64bit integers to be read/written atomically, just 32bit\n> ones. To make this work for older platforms you'd have to use a\n> pg_atomic_uint64. On new-ish platforms pg_atomic_read_u64/pg_atomic_write_u64\n> end up as plain read/writes, but on older ones they'd do the necessarily\n> locking to make that safe...\n\nAll right, here's an updated patch.\n\nThe final interface (new function or refactor the existing not to rely\non commit_ts) is still TBD (and I'd appreciate input on that from\nAlvaro and others).\n\nThanks,\nJames Coleman",
"msg_date": "Wed, 19 Jan 2022 21:23:12 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-19 21:23:12 -0500, James Coleman wrote:\n> { oid => '3537', descr => 'get identification of SQL object',\n> diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\n> index a58888f9e9..2a026b0844 100644\n> --- a/src/include/storage/proc.h\n> +++ b/src/include/storage/proc.h\n> @@ -258,6 +258,11 @@ struct PGPROC\n> \tPGPROC\t *lockGroupLeader;\t/* lock group leader, if I'm a member */\n> \tdlist_head\tlockGroupMembers;\t/* list of members, if I'm a leader */\n> \tdlist_node\tlockGroupLink;\t/* my member link, if I'm a member */\n> +\n> +\t/*\n> +\t * Last transaction metadata.\n> +\t */\n> +\tXLogRecPtr\tlastCommitLSN;\t\t/* cache of last committed LSN */\n> };\n\nMight be worth forcing this to be on a separate cacheline than stuff more\nhotly accessed by other backends, like the lock group stuff.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Jan 2022 19:12:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 10:12 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-19 21:23:12 -0500, James Coleman wrote:\n> > { oid => '3537', descr => 'get identification of SQL object',\n> > diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\n> > index a58888f9e9..2a026b0844 100644\n> > --- a/src/include/storage/proc.h\n> > +++ b/src/include/storage/proc.h\n> > @@ -258,6 +258,11 @@ struct PGPROC\n> > PGPROC *lockGroupLeader; /* lock group leader, if I'm a member */\n> > dlist_head lockGroupMembers; /* list of members, if I'm a leader */\n> > dlist_node lockGroupLink; /* my member link, if I'm a member */\n> > +\n> > + /*\n> > + * Last transaction metadata.\n> > + */\n> > + XLogRecPtr lastCommitLSN; /* cache of last committed LSN */\n> > };\n>\n> Might be worth forcing this to be on a separate cacheline than stuff more\n> hotly accessed by other backends, like the lock group stuff.\n\nWhat's the best way to do that? I'm poking around and don't see any\nobvious cases of doing that in a struct definition. I could add a\nchar* of size PG_CACHE_LINE_SIZE, but that seems unnecessarily\nwasteful, and the other ALIGN macros seem mostly used in situations\nwhere we're allocating memory. Is it possible in C to get the size of\nthe struct so far to be able to subtract from PG_CACHE_LINE_SIZE?\nMaybe there's some other approach I'm missing...\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Thu, 20 Jan 2022 08:15:21 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 8:15 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 10:12 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-01-19 21:23:12 -0500, James Coleman wrote:\n> > > { oid => '3537', descr => 'get identification of SQL object',\n> > > diff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\n> > > index a58888f9e9..2a026b0844 100644\n> > > --- a/src/include/storage/proc.h\n> > > +++ b/src/include/storage/proc.h\n> > > @@ -258,6 +258,11 @@ struct PGPROC\n> > > PGPROC *lockGroupLeader; /* lock group leader, if I'm a member */\n> > > dlist_head lockGroupMembers; /* list of members, if I'm a leader */\n> > > dlist_node lockGroupLink; /* my member link, if I'm a member */\n> > > +\n> > > + /*\n> > > + * Last transaction metadata.\n> > > + */\n> > > + XLogRecPtr lastCommitLSN; /* cache of last committed LSN */\n> > > };\n> >\n> > Might be worth forcing this to be on a separate cacheline than stuff more\n> > hotly accessed by other backends, like the lock group stuff.\n>\n> What's the best way to do that? I'm poking around and don't see any\n> obvious cases of doing that in a struct definition. I could add a\n> char* of size PG_CACHE_LINE_SIZE, but that seems unnecessarily\n> wasteful, and the other ALIGN macros seem mostly used in situations\n> where we're allocating memory. Is it possible in C to get the size of\n> the struct so far to be able to subtract from PG_CACHE_LINE_SIZE?\n> Maybe there's some other approach I'm missing...\n\nLooking at this again it seems like there are two ways to do this I see so far:\n\nFirst would be to have a container struct and two structs inside --\nsomething like one struct for local process access and one for shared\nprocess access. 
But that seems like it'd likely end up pretty messy in\nterms of how much it'd affect other parts of the code, so I'm hesitant\nto go down that path.\n\nAlternatively I see pg_attribute_aligned, but that's not defined\n(AFAICT) on clang, for example, so I'm not sure that'd be acceptable?\n\nIt doesn't seem to me that there's anything like CACHELINEALIGN that\nwould work in this context (in a struct definition) since that appears\nto be designed to work with allocated memory.\n\nIs there an approach I'm missing? Or does one of these seem reasonable?\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Fri, 28 Jan 2022 18:43:57 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-28 18:43:57 -0500, James Coleman wrote:\n> Alternatively I see pg_attribute_aligned, but that's not defined\n> (AFAICT) on clang, for example, so I'm not sure that'd be acceptable?\n\nclang should have it (it defines __GNUC__). The problem would be msvc, I\nthink. Not sure if there's a way to get to a common way of defining it between\ngcc-like compilers and msvc (the rest is niche enough that we don't need to\ncare about the efficiency I think).\n\n\n> Is there an approach I'm missing? Or does one of these seem reasonable?\n\nI'd probably just slap a char *pad[PG_CACHELINE_SIZE] in there if the above\ncan't be made to work.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 28 Jan 2022 16:36:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On 2022-01-28 16:36:32 -0800, Andres Freund wrote:\n> On 2022-01-28 18:43:57 -0500, James Coleman wrote:\n> > Alternatively I see pg_attribute_aligned, but that's not defined\n> > (AFAICT) on clang, for example, so I'm not sure that'd be acceptable?\n> \n> clang should have it (it defines __GNUC__). The problem would be msvc, I\n> think. Not sure if there's a way to get to a common way of defining it between\n> gcc-like compilers and msvc (the rest is niche enough that we don't need to\n> care about the efficiency I think).\n\nSeems like it's doable:\n\nhttps://godbolt.org/z/3c5573bTW\n\n\n",
"msg_date": "Fri, 28 Jan 2022 16:47:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Fri, Jan 28, 2022 at 7:47 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-01-28 16:36:32 -0800, Andres Freund wrote:\n> > On 2022-01-28 18:43:57 -0500, James Coleman wrote:\n> > > Alternatively I see pg_attribute_aligned, but that's not defined\n> > > (AFAICT) on clang, for example, so I'm not sure that'd be acceptable?\n> >\n> > clang should have it (it defines __GNUC__). The problem would be msvc, I\n> > think. Not sure if there's a way to get to a common way of defining it between\n> > gcc-like compilers and msvc (the rest is niche enough that we don't need to\n> > care about the efficiency I think).\n>\n> Seems like it's doable:\n>\n> https://godbolt.org/z/3c5573bTW\n\nOh, thanks. I'd seen some discussion previously on the list about\nclang not supporting it, but that seems to have been incorrect. Also I\ndidn't know about that compiler site -- that's really neat.\n\nHere's an updated patch series using that approach; the first patch\ncan (and probably should be) committed separately/regardless to update\nthe pg_attribute_aligned to be used in MSVC.\n\nThanks,\nJames Coleman",
"msg_date": "Sat, 29 Jan 2022 14:51:32 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "On Sat, Jan 29, 2022 at 02:51:32PM -0500, James Coleman wrote:\n> Oh, thanks. I'd seen some discussion previously on the list about\n> clang not supporting it, but that seems to have been incorrect. Also I\n> didn't know about that compiler site -- that's really neat.\n> \n> Here's an updated patch series using that approach; the first patch\n> can (and probably should be) committed separately/regardless to update\n> the pg_attribute_aligned to be used in MSVC.\n\nI don't have much of an opinion on 0002, except that I am worried about\nmore data added to PGPROC that makes it larger. Now, 0001 looks like a\nhidden gem.\n\nBased on the upstream docs, it looks like using __declspec(align(a))\nis right:\nhttps://docs.microsoft.com/en-us/cpp/cpp/align-cpp?view=msvc-170\n\nIs __declspec available in Visual Studio 2013? I can see it in the\nupstream docs for 2015, but I am not sure about 2013.\n\n> /* This must match the corresponding code in c.h: */\n> #if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__)\n> #define pg_attribute_aligned(a) __attribute__((aligned(a)))\n> +#elif defined(_MSC_VER)\n> +#define pg_attribute_aligned(a) __declspec(align(a))\n> #endif\n> typedef __int128 int128a\n\nThis change in ./configure looks incorrect to me. Shouldn't the\nchange happen in c-compiler.m4 instead?\n\n> +#if defined(_MSC_VER)\n> +#define pg_attribute_aligned(a) __declspec(align(a))\n> +#endif\n\nThis way of doing things is inconsistent with the surroundings. I\nthink that you should have an #elif for _MSC_VER to keep all the\ndefinitions of pg_attribute_aligned() & friends in the same block.\n\nThis makes me wonder whether we should introduce noreturn, as\nof:\nhttps://docs.microsoft.com/en-us/cpp/c-language/noreturn?view=msvc-140\n--\nMichael",
"msg_date": "Thu, 7 Apr 2022 16:36:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
},
{
"msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3515/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Aug 2022 11:50:54 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Add last commit LSN to pg_last_committed_xact()"
}
] |
[
{
"msg_contents": "Hi Hacker\n\nSolaris and FreeBSD support large/super pages, which can be used \nautomatically by applications.\n\nIt seems Postgres can't use large/super pages on Solaris and FreeBSD \n(and I think it also can't use them on HPUX and AIX); could anyone \ntake a look?\n\nThe following is my testing:\n\n\n1. Check the OS-supported large page sizes\n\n-bash-4.3$ pagesize -a\n4096\n2097152\n1073741824\n\n\n2. The OS version is 5.11\n\n-bash-4.3$ uname -a\nSunOS 08a6a65f-b5a0-c159-f184-e81c379d1f5d 5.11 \nhunghu-20220114T101258Z:a3282be5a8 i86pc i386 i86pc\n-bash-4.3$\n\n\n3. PostgreSQL shared_buffers is 11GB\n\n-bash-4.3$ grep -i shared_buffer postgresql.conf\nshared_buffers = 11GB # min 128kB\n\n\n4. Checked on Solaris: all of the memory for PostgreSQL uses 4K pages; \nit did not use the 2M or 1G page sizes\n\n-bash-4.3$ cat postmaster.pid |head -n 1\n31637\n-bash-4.3$ pmap -sxa 31637\n31637: /opt/local/bin/postgres\n Address Kbytes RSS Anon Locked Pgsz \nMode Mapped File\n0000000000400000 4 4 - - 4K r-x-- \npostgres\n0000000000401000 872 28 - - - r-x-- \npostgres\n00000000004DB000 84 84 - - 4K r-x-- \npostgres\n00000000004F0000 184 24 - - - r-x-- \npostgres\n000000000051E000 248 248 - - 4K r-x-- \npostgres\n000000000055C000 8 8 - - - r-x-- \npostgres\n000000000055E000 8 8 - - 4K r-x-- \npostgres\n0000000000560000 16 12 - - - r-x-- \npostgres\n0000000000564000 4 4 - - 4K r-x-- \npostgres\n0000000000565000 20 20 - - - r-x-- \npostgres\n000000000056A000 4 4 - - 4K r-x-- \npostgres\n000000000056B000 24 24 - - - r-x-- \npostgres\n0000000000571000 8 8 - - 4K r-x-- \npostgres\n0000000000573000 4 4 - - - r-x-- \npostgres\n0000000000574000 16 16 - - 4K r-x-- \npostgres\n0000000000578000 24 4 - - - r-x-- \npostgres\n000000000057E000 4 4 - - 4K r-x-- \npostgres\n000000000057F000 8 8 - - - r-x-- \npostgres\n0000000000581000 8 8 - - 4K r-x-- \npostgres\n0000000000583000 4 4 - - - r-x-- \npostgres\n0000000000584000 188 188 - - 4K r-x-- \npostgres\n00000000005B3000 84 
28 - - - r-x-- \npostgres\n00000000005C8000 24 24 - - 4K r-x-- \npostgres\n00000000005CE000 76 40 - - - r-x-- \npostgres\n00000000005E1000 4 4 - - 4K r-x-- \npostgres\n00000000005E2000 368 280 - - - r-x-- \npostgres\n000000000063E000 4 4 - - 4K r-x-- \npostgres\n000000000063F000 80 36 - - - r-x-- \npostgres\n0000000000653000 12 12 - - 4K r-x-- \npostgres\n0000000000656000 8 8 - - - r-x-- \npostgres\n0000000000658000 4 4 - - 4K r-x-- \npostgres\n0000000000659000 12 12 - - - r-x-- \npostgres\n000000000065C000 8 8 - - 4K r-x-- \npostgres\n000000000065E000 4 4 - - - r-x-- \npostgres\n000000000065F000 4 4 - - 4K r-x-- \npostgres\n0000000000660000 12 12 - - - r-x-- \npostgres\n0000000000663000 8 8 - - 4K r-x-- \npostgres\n0000000000665000 12 12 - - - r-x-- \npostgres\n0000000000668000 4 4 - - 4K r-x-- \npostgres\n0000000000669000 4 4 - - - r-x-- \npostgres\n000000000066A000 8 8 - - 4K r-x-- \npostgres\n000000000066C000 32 32 - - - r-x-- \npostgres\n0000000000674000 4 4 - - 4K r-x-- \npostgres\n0000000000675000 4 4 - - - r-x-- \npostgres\n0000000000676000 4 4 - - 4K r-x-- \npostgres\n0000000000677000 156 156 - - - r-x-- \npostgres\n000000000069E000 4 4 - - 4K r-x-- \npostgres\n000000000069F000 416 396 - - - r-x-- \npostgres\n0000000000707000 4 4 - - 4K r-x-- \npostgres\n0000000000708000 32 32 - - - r-x-- \npostgres\n0000000000710000 4 4 - - 4K r-x-- \npostgres\n0000000000711000 396 280 - - - r-x-- \npostgres\n0000000000774000 4 4 - - 4K r-x-- \npostgres\n0000000000775000 96 68 - - - r-x-- \npostgres\n000000000078D000 12 12 - - 4K r-x-- \npostgres\n0000000000790000 364 352 - - - r-x-- \npostgres\n00000000007EB000 16 16 - - 4K r-x-- \npostgres\n00000000007EF000 4 4 - - - r-x-- \npostgres\n00000000007F0000 16 16 - - 4K r-x-- \npostgres\n00000000007F4000 4 4 - - - r-x-- \npostgres\n00000000007F5000 4 4 - - 4K r-x-- \npostgres\n00000000007F6000 8 8 - - - r-x-- \npostgres\n00000000007F8000 4 4 - - 4K r-x-- \npostgres\n00000000007F9000 76 64 - - - r-x-- 
\npostgres\n000000000080C000 4 4 - - 4K r-x-- \npostgres\n000000000080D000 4 4 - - - r-x-- \npostgres\n000000000080E000 4 4 - - 4K r-x-- \npostgres\n000000000080F000 504 436 - - - r-x-- \npostgres\n000000000088D000 8 8 - - 4K r-x-- \npostgres\n000000000088F000 8 8 - - - r-x-- \npostgres\n0000000000891000 20 20 - - 4K r-x-- \npostgres\n0000000000896000 12 12 - - - r-x-- \npostgres\n0000000000899000 4 4 - - 4K r-x-- \npostgres\n000000000089A000 8 8 - - - r-x-- \npostgres\n000000000089C000 28 28 - - 4K r-x-- \npostgres\n00000000008A3000 4 4 - - - r-x-- \npostgres\n00000000008A4000 4 4 - - 4K r-x-- \npostgres\n00000000008A5000 68 68 - - - r-x-- \npostgres\n00000000008B6000 4 4 - - 4K r-x-- \npostgres\n00000000008B7000 16 16 - - - r-x-- \npostgres\n00000000008BB000 4 4 - - 4K r-x-- \npostgres\n00000000008BC000 92 68 - - - r-x-- \npostgres\n00000000008D3000 8 8 - - 4K r-x-- \npostgres\n00000000008D5000 12 12 - - - r-x-- \npostgres\n00000000008D8000 4 4 - - 4K r-x-- \npostgres\n00000000008D9000 12 12 - - - r-x-- \npostgres\n00000000008DC000 4 4 - - 4K r-x-- \npostgres\n00000000008DD000 8 8 - - - r-x-- \npostgres\n00000000008DF000 4 4 - - 4K r-x-- \npostgres\n00000000008E0000 80 76 - - - r-x-- \npostgres\n00000000008F4000 4 4 - - 4K r-x-- \npostgres\n00000000008F5000 12 12 - - - r-x-- \npostgres\n00000000008F8000 4 4 - - 4K r-x-- \npostgres\n00000000008F9000 4 4 - - - r-x-- \npostgres\n00000000008FA000 4 4 - - 4K r-x-- \npostgres\n00000000008FB000 8 8 - - - r-x-- \npostgres\n00000000008FD000 16 16 - - 4K r-x-- \npostgres\n0000000000901000 8 8 - - - r-x-- \npostgres\n0000000000903000 24 24 - - 4K r-x-- \npostgres\n0000000000909000 12 12 - - - r-x-- \npostgres\n000000000090C000 4 4 - - 4K r-x-- \npostgres\n000000000090D000 4 4 - - - r-x-- \npostgres\n000000000090E000 8 8 - - 4K r-x-- \npostgres\n0000000000910000 12 12 - - - r-x-- \npostgres\n0000000000913000 4 4 - - 4K r-x-- \npostgres\n0000000000914000 8 8 - - - r-x-- \npostgres\n0000000000916000 8 8 - - 4K r-x-- 
\npostgres\n0000000000918000 4 4 - - - r-x-- \npostgres\n0000000000919000 16 16 - - 4K r-x-- \npostgres\n000000000091D000 4 4 - - - r-x-- \npostgres\n000000000091E000 8 8 - - 4K r-x-- \npostgres\n0000000000920000 4 4 - - - r-x-- \npostgres\n0000000000921000 8 8 - - 4K r-x-- \npostgres\n0000000000923000 4 4 - - - r-x-- \npostgres\n0000000000924000 4 4 - - 4K r-x-- \npostgres\n0000000000925000 24 24 - - - r-x-- \npostgres\n000000000092B000 4 4 - - 4K r-x-- \npostgres\n000000000092C000 96 64 - - - r-x-- \npostgres\n0000000000944000 4 4 - - 4K r-x-- \npostgres\n0000000000945000 104 96 - - - r-x-- \npostgres\n000000000095F000 4 4 - - 4K r-x-- \npostgres\n0000000000960000 28 28 - - - r-x-- \npostgres\n0000000000967000 4 4 - - 4K r-x-- \npostgres\n0000000000968000 24 24 - - - r-x-- \npostgres\n000000000096E000 4 4 - - 4K r-x-- \npostgres\n000000000096F000 480 260 - - - r-x-- \npostgres\n00000000009E7000 4 4 - - 4K r-x-- \npostgres\n00000000009E8000 232 148 - - - r-x-- \npostgres\n0000000000A22000 4 4 - - 4K r-x-- \npostgres\n0000000000A23000 140 60 - - - r-x-- \npostgres\n0000000000A46000 8 8 - - 4K r-x-- \npostgres\n0000000000A48000 140 112 - - - r-x-- \npostgres\n0000000000A6B000 4 4 - - 4K r-x-- \npostgres\n0000000000A6C000 12 12 - - - r-x-- \npostgres\n0000000000A6F000 24 24 - - 4K r-x-- \npostgres\n0000000000A75000 32 32 - - - r-x-- \npostgres\n0000000000A7D000 20 20 - - 4K r-x-- \npostgres\n0000000000A82000 4 4 - - - r-x-- \npostgres\n0000000000A83000 40 40 - - 4K r-x-- \npostgres\n0000000000A8D000 8 8 - - - r-x-- \npostgres\n0000000000A8F000 16 16 - - 4K r-x-- \npostgres\n0000000000A93000 4 4 - - - r-x-- \npostgres\n0000000000A94000 4 4 - - 4K r-x-- \npostgres\n0000000000A95000 4 4 - - - r-x-- \npostgres\n0000000000A96000 12 12 - - 4K r-x-- \npostgres\n0000000000A99000 12 12 - - - r-x-- \npostgres\n0000000000A9C000 8 8 - - 4K r-x-- \npostgres\n0000000000A9E000 64 64 - - - r-x-- \npostgres\n0000000000AAE000 4 4 - - 4K r-x-- \npostgres\n0000000000AAF000 8 8 - - - 
r-x-- \npostgres\n0000000000AB1000 20 20 - - 4K r-x-- \npostgres\n0000000000AB6000 4 4 - - - r-x-- \npostgres\n0000000000AB7000 12 12 - - 4K r-x-- \npostgres\n0000000000ABA000 4 4 - - - r-x-- \npostgres\n0000000000ABB000 12 12 - - 4K r-x-- \npostgres\n0000000000ABE000 12 12 - - - r-x-- \npostgres\n0000000000AC1000 24 24 - - 4K r-x-- \npostgres\n0000000000AC7000 76 72 - - - r-x-- \npostgres\n0000000000ADA000 4 4 - - 4K r-x-- \npostgres\n0000000000ADB000 824 752 - - - r-x-- \npostgres\n0000000000BA9000 4 4 - - 4K r-x-- \npostgres\n0000000000BAA000 60 60 - - - r-x-- \npostgres\n0000000000BB9000 8 8 - - 4K r-x-- \npostgres\n0000000000BBB000 4 4 - - - r-x-- \npostgres\n0000000000BBC000 4 4 - - 4K r-x-- \npostgres\n0000000000BBD000 52 20 - - - r-x-- \npostgres\n0000000000BCA000 4 4 - - 4K r-x-- \npostgres\n0000000000BCB000 4 4 - - - r-x-- \npostgres\n0000000000BCC000 8 8 - - 4K r-x-- \npostgres\n0000000000BCE000 36 36 - - - r-x-- \npostgres\n0000000000BD7000 4 4 - - 4K r-x-- \npostgres\n0000000000BD8000 8 8 - - - r-x-- \npostgres\n0000000000BDA000 4 4 - - 4K r-x-- \npostgres\n0000000000BDB000 4 4 - - - r-x-- \npostgres\n0000000000BDC000 4 4 - - 4K r-x-- \npostgres\n0000000000BDD000 88 36 - - - r-x-- \npostgres\n0000000000BF3000 4 4 - - 4K r-x-- \npostgres\n0000000000BF4000 72 68 - - - r-x-- \npostgres\n0000000000C06000 4 4 - - 4K r-x-- \npostgres\n0000000000C07000 12 12 - - - r-x-- \npostgres\n0000000000C0A000 12 12 - - 4K r-x-- \npostgres\n0000000000C0D000 4 4 - - - r-x-- \npostgres\n0000000000C0E000 20 20 - - 4K r-x-- \npostgres\n0000000000C13000 4 4 - - - r-x-- \npostgres\n0000000000C14000 8 8 - - 4K r-x-- \npostgres\n0000000000C16000 96 96 - - - r-x-- \npostgres\n0000000000C2E000 4 4 - - 4K r-x-- \npostgres\n0000000000C2F000 8 8 - - - r-x-- \npostgres\n0000000000C31000 8 8 - - 4K r-x-- \npostgres\n0000000000C33000 164 24 - - - r-x-- \npostgres\n0000000000C5C000 4 4 - - 4K r-x-- \npostgres\n0000000000C5D000 4 4 - - - r-x-- \npostgres\n0000000000C5E000 4 4 - - 4K r-x-- 
\npostgres\n0000000000C5F000 4 4 - - - r-x-- \npostgres\n0000000000C60000 44 44 - - 4K r-x-- \npostgres\n0000000000C6B000 16 16 - - - r-x-- \npostgres\n0000000000C6F000 20 20 - - 4K r-x-- \npostgres\n0000000000C74000 4 4 - - - r-x-- \npostgres\n0000000000C75000 12 12 - - 4K r-x-- \npostgres\n0000000000C78000 4 4 - - - r-x-- \npostgres\n0000000000C79000 8 8 - - 4K r-x-- \npostgres\n0000000000C7B000 8 8 - - - r-x-- \npostgres\n0000000000C7D000 4 4 - - 4K r-x-- \npostgres\n0000000000C7E000 20 4 - - - r-x-- \npostgres\n0000000000C83000 20 20 - - 4K r-x-- \npostgres\n0000000000C97000 72 72 4 - 4K rw--- \npostgres\n0000000000CA9000 4 4 - - 4K rw--- \npostgres\n0000000000CAA000 4 - - - - rw--- \npostgres\n0000000000CAB000 24 24 4 - 4K rw--- \npostgres\n0000000000CB1000 4 - - - - rw--- \npostgres\n0000000000CB2000 8 8 - - 4K rw--- \npostgres\n0000000000CB4000 140 - - - - rw--- \npostgres\n0000000000CD7000 856 856 8 - 4K \nrw--- [ heap ]\n0000000000DAD000 4 - - - - \nrw--- [ heap ]\n0000000000DAE000 8 8 - - 4K \nrw--- [ heap ]\n0000000000DB0000 4 - - - - \nrw--- [ heap ]\n0000000000DB1000 48 48 4 - 4K \nrw--- [ heap ]\n0000000000DBD000 16 - - - - \nrw--- [ heap ]\n0000000000DC1000 4 4 - - 4K \nrw--- [ heap ]\n0000000000DC2000 20 - - - - \nrw--- [ heap ]\n0000000000DC7000 60 60 - - 4K \nrw--- [ heap ]\n0000000000DD6000 4 - - - - \nrw--- [ heap ]\n0000000000DD7000 4 4 - - 4K \nrw--- [ heap ]\n0000000000DD8000 4 - - - - \nrw--- [ heap ]\n0000000000DD9000 4 4 - - 4K \nrw--- [ heap ]\n0000000000DDA000 4 - - - - \nrw--- [ heap ]\n0000000000DDB000 24 24 - - 4K \nrw--- [ heap ]\n0000000000DE1000 12 - - - - \nrw--- [ heap ]\n0000000000DE4000 52 52 - - 4K \nrw--- [ heap ]\n0000000000DF1000 48 - - - - \nrw--- [ heap ]\n0000000000DFD000 4 4 - - 4K \nrw--- [ heap ]\n0000000000DFE000 12 - - - - \nrw--- [ heap ]\n0000000000E01000 20 20 - - 4K \nrw--- [ heap ]\n0000000000E06000 4 - - - - \nrw--- [ heap ]\n0000000000E07000 12 12 - - 4K \nrw--- [ heap ]\n0000000000E0A000 24 - - - - \nrw--- [ 
heap ]\n0000000000E10000 20 20 - - 4K \nrw--- [ heap ]\n0000000000E15000 4 - - - - \nrw--- [ heap ]\n0000000000E16000 24 24 - - 4K \nrw--- [ heap ]\n0000000000E1C000 48 - - - - \nrw--- [ heap ]\nFFFFFAFCC0000000 32920 32920 32920 - 4K \nrw-s- [ anon ]\nFFFFFAFCC2026000 4 - - - - \nrw-s- [ anon ]\nFFFFFAFCC2027000 20 20 20 - 4K \nrw-s- [ anon ]\nFFFFFAFCC202C000 2048 24 - - - \nrw-s- [ anon ]\nFFFFFAFCC222C000 4 4 4 - 4K \nrw-s- [ anon ]\nFFFFFAFCC222D000 124 - - - - \nrw-s- [ anon ]\nFFFFFAFCC224C000 8 8 8 - 4K \nrw-s- [ anon ]\nFFFFFAFCC224E000 252 8 - - - \nrw-s- [ anon ]\nFFFFFAFCC228D000 8 8 8 - 4K \nrw-s- [ anon ]\nFFFFFAFCC228F000 60 8 - - - \nrw-s- [ anon ]\nFFFFFAFCC229E000 4 4 4 - 4K \nrw-s- [ anon ]\nFFFFFAFCC229F000 124 - - - - \nrw-s- [ anon ]\nFFFFFAFCC22BE000 90196 90196 90196 - 4K \nrw-s- [ anon ]\nFFFFFAFCC7AD3000 11534332 752 - - - \nrw-s- [ anon ]\nFFFFFAFF87AD2000 22532 22532 22532 - 4K \nrw-s- [ anon ]\nFFFFFAFF890D3000 28156 - - - - \nrw-s- [ anon ]\nFFFFFAFF8AC52000 72796 72796 72796 - 4K \nrw-s- [ anon ]\nFFFFFAFF8F369000 12 - - - - \nrw-s- [ anon ]\nFFFFFAFF8F36C000 56752 56752 56752 - 4K \nrw-s- [ anon ]\nFFFFFAFF92AD8000 28 - - - - \nrw-s- [ anon ]\nFFFFFAFF92ADF000 247348 247348 247348 - 4K \nrw-s- [ anon ]\nFFFFFAFFA1C6C000 124 - - - - \nrw-s- [ anon ]\nFFFFFAFFA1C8B000 8688 8688 8688 - 4K \nrw-s- [ anon ]\nFFFFFAFFA2507000 3216 - - - - \nrw-s- [ anon ]\nFFFFFAFFA282B000 20600 20600 20600 - 4K \nrw-s- [ anon ]\nFFFFFAFFA3C49000 60 - - - - \nrw-s- [ anon ]\nFFFFFAFFA3C58000 34984 34984 34984 - 4K \nrw-s- [ anon ]\nFFFFFAFFA5E82000 112 - - - - \nrw-s- [ anon ]\nFFFFFAFFA5E9E000 204 204 204 - 4K \nrw-s- [ anon ]\nFFFFFAFFA5ED1000 152676 - - - - \nrw-s- [ anon ]\nFFFFFAFFE9470000 4 4 - - 4K r-x-- \nlibsasl2.so.3.0.0\nFFFFFAFFE9471000 8 8 - - - r-x-- \nlibsasl2.so.3.0.0\nFFFFFAFFE9473000 28 28 - - 4K r-x-- \nlibsasl2.so.3.0.0\nFFFFFAFFE947A000 68 24 - - - r-x-- \nlibsasl2.so.3.0.0\nFFFFFAFFE948B000 4 4 - - 4K r-x-- 
\nlibsasl2.so.3.0.0\nFFFFFAFFE948C000 8 8 - - - r-x-- \nlibsasl2.so.3.0.0\nFFFFFAFFE949D000 8 8 - - 4K rw--- \nlibsasl2.so.3.0.0\nFFFFFAFFE94A0000 4 4 - - 4K r-x-- \nliblber.so.2.0.200\nFFFFFAFFE94A1000 4 4 - - - r-x-- \nliblber.so.2.0.200\nFFFFFAFFE94A2000 20 20 - - 4K r-x-- \nliblber.so.2.0.200\nFFFFFAFFE94A7000 28 16 - - - r-x-- \nliblber.so.2.0.200\nFFFFFAFFE94AE000 4 4 - - 4K r-x-- \nliblber.so.2.0.200\nFFFFFAFFE94AF000 4 4 - - - r-x-- \nliblber.so.2.0.200\nFFFFFAFFE94BF000 4 4 - - 4K rw--- \nliblber.so.2.0.200\nFFFFFAFFE94C0000 4 4 - - 4K r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE94C1000 36 28 - - - r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE94CA000 88 88 - - 4K r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE94E0000 4 4 - - - r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE94E1000 4 4 - - 4K r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE94E2000 236 92 - - - r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE951D000 4 4 - - 4K r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE951E000 28 28 - - - r-x-- \nlibldap.so.2.0.200\nFFFFFAFFE9534000 12 12 - - 4K rw--- \nlibldap.so.2.0.200\nFFFFFAFFE9537000 12 - - - - rw--- \nlibldap.so.2.0.200\nFFFFFAFFE95C0000 4 4 - - 4K r-x-- \nlibkrb5support.so.0.0.1\nFFFFFAFFE95C1000 4 4 - - - r-x-- \nlibkrb5support.so.0.0.1\nFFFFFAFFE95C2000 20 20 - - 4K r-x-- \nlibkrb5support.so.0.0.1\nFFFFFAFFE95C7000 20 4 - - - r-x-- \nlibkrb5support.so.0.0.1\nFFFFFAFFE95CC000 4 4 - - 4K r-x-- \nlibkrb5support.so.0.0.1\nFFFFFAFFE95DC000 4 4 - - 4K rw--- \nlibkrb5support.so.0.0.1\nFFFFFAFFE95E0000 12 12 - - 4K r-x-- \nlibcom_err.so.3.0.0\nFFFFFAFFE95F2000 4 4 - - 4K rw--- \nlibcom_err.so.3.0.0\nFFFFFAFFE9600000 4 4 - - 4K r-x-- \nlibk5crypto.so.3.0.1\nFFFFFAFFE9601000 12 12 - - - r-x-- \nlibk5crypto.so.3.0.1\nFFFFFAFFE9604000 36 36 - - 4K r-x-- \nlibk5crypto.so.3.0.1\nFFFFFAFFE960D000 104 16 - - - r-x-- \nlibk5crypto.so.3.0.1\nFFFFFAFFE9627000 8 8 - - 4K r-x-- \nlibk5crypto.so.3.0.1\nFFFFFAFFE9629000 32 32 - - - r-x-- \nlibk5crypto.so.3.0.1\nFFFFFAFFE9640000 8 8 - - 4K rw--- 
\nlibk5crypto.so.3.0.1\nFFFFFAFFE9642000 4 - - - - rw--- \nlibk5crypto.so.3.0.1\nFFFFFAFFE9650000 4 4 - - 4K r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE9651000 84 28 - - - r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE9666000 28 28 - - 4K r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE966D000 16 12 - - - r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE9671000 56 56 - - 4K r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE967F000 4 4 - - - r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE9680000 56 56 - - 4K r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE968E000 12 12 - - - r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE9691000 100 100 - - 4K r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE96AA000 8 8 - - - r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE96AC000 4 4 - - 4K r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE96AD000 372 - - - - r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE970A000 4 4 - - 4K r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE970B000 204 32 - - - r-x-- \nlibkrb5.so.3.0.3\nFFFFFAFFE974D000 68 68 - - 4K rw--- \nlibkrb5.so.3.0.3\nFFFFFAFFE9760000 4 4 - - 4K r-x-- \nlibgcc_s.so.1\nFFFFFAFFE9761000 8 8 - - - r-x-- \nlibgcc_s.so.1\nFFFFFAFFE9763000 24 24 - - 4K r-x-- \nlibgcc_s.so.1\nFFFFFAFFE9769000 12 12 - - - r-x-- \nlibgcc_s.so.1\nFFFFFAFFE976C000 4 4 - - 4K r-x-- \nlibgcc_s.so.1\nFFFFFAFFE976D000 24 12 - - - r-x-- \nlibgcc_s.so.1\nFFFFFAFFE9773000 4 4 - - 4K r-x-- \nlibgcc_s.so.1\nFFFFFAFFE9774000 20 20 - - - r-x-- \nlibgcc_s.so.1\nFFFFFAFFE9779000 4 4 - - 4K r-x-- \nlibgcc_s.so.1\nFFFFFAFFE977A000 4 4 - - - r-x-- \nlibgcc_s.so.1\nFFFFFAFFE978A000 4 4 - - 4K rw--- \nlibgcc_s.so.1\nFFFFFAFFE9790000 4 4 - - 4K r-x-- \nliblzma.so.5.2.5\nFFFFFAFFE9791000 16 16 - - - r-x-- \nliblzma.so.5.2.5\nFFFFFAFFE9795000 32 32 - - 4K r-x-- \nliblzma.so.5.2.5\nFFFFFAFFE979D000 92 20 - - - r-x-- \nliblzma.so.5.2.5\nFFFFFAFFE97B4000 4 4 - - 4K r-x-- \nliblzma.so.5.2.5\nFFFFFAFFE97B5000 28 28 - - - r-x-- \nliblzma.so.5.2.5\nFFFFFAFFE97CB000 4 4 - - 4K rw--- \nliblzma.so.5.2.5\nFFFFFAFFE97D0000 12 12 - - 4K r-x-- \nlibssp.so.0.0.0\nFFFFFAFFE97E2000 4 4 - - 4K rw--- \nlibssp.so.0.0.0\nFFFFFAFFE97F0000 4 4 - - 4K r-x-- 
\nlibiconv.so.2.5.1\nFFFFFAFFE97F1000 16 16 - - - r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE97F5000 4 4 - - 4K r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE97F6000 4 4 - - - r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE97F7000 4 4 - - 4K r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE97F8000 4 - - - - r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE97F9000 12 12 - - 4K r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE97FC000 80 20 - - - r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE9810000 4 4 - - 4K r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE9811000 804 32 - - - r-x-- \nlibiconv.so.2.5.1\nFFFFFAFFE98E9000 8 8 - - 4K rw--- \nlibiconv.so.2.5.1\nFFFFFAFFE98F0000 24 24 - - 4K r-x-- \nlibintl.so.8.2.0\nFFFFFAFFE98F6000 4 4 - - - r-x-- \nlibintl.so.8.2.0\nFFFFFAFFE98F7000 20 20 - - 4K r-x-- \nlibintl.so.8.2.0\nFFFFFAFFE990B000 8 8 8 - 4K rw--- \nlibintl.so.8.2.0\nFFFFFAFFE9980000 4 4 - - 4K r-x-- \nlibz.so.1.0.2\nFFFFFAFFE9981000 4 4 - - - r-x-- \nlibz.so.1.0.2\nFFFFFAFFE9982000 16 16 - - 4K r-x-- \nlibz.so.1.0.2\nFFFFFAFFE9986000 52 8 - - - r-x-- \nlibz.so.1.0.2\nFFFFFAFFE9993000 4 4 - - 4K r-x-- \nlibz.so.1.0.2\nFFFFFAFFE9994000 16 16 - - - r-x-- \nlibz.so.1.0.2\nFFFFFAFFE99A7000 4 4 - - 4K rw--- \nlibz.so.1.0.2\nFFFFFAFFE99B0000 4 4 - - 4K r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE99B1000 32 28 - - - r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE99B9000 88 88 - - 4K r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE99CF000 4 4 - - - r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE99D0000 4 4 - - 4K r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE99D1000 208 16 - - - r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE9A05000 4 4 - - 4K r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE9A06000 16 16 - - - r-x-- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE9A19000 16 16 - - 4K rw--- \nlibgssapi_krb5.so.2.0.2\nFFFFFAFFE9A20000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9A21000 352 28 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9A79000 60 60 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9A88000 4 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9A89000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9A8A000 
52 16 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9A97000 120 120 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9AB5000 12 12 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9AB8000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9ABA000 4 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9ABB000 96 96 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9AD3000 4 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9AD4000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9AD5000 12 12 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9AD8000 256 256 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B18000 4 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B19000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B1A000 32 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B22000 12 12 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B25000 4 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B26000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B28000 372 72 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B85000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B87000 76 24 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B9A000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9B9C000 320 28 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BEC000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BED000 4 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BEE000 16 16 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BF2000 16 16 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BF6000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BF8000 8 - - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BFA000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9BFB000 48 28 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C07000 12 12 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C0A000 40 36 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C14000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C16000 12 12 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C19000 12 12 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C1C000 4 4 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C1D000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C1F000 20 12 - - - r-x-- 
\nlibcrypto.so.1.1\nFFFFFAFFE9C24000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C25000 112 28 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C41000 20 20 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C46000 236 24 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C81000 8 8 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C83000 12 12 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C86000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9C87000 204 28 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CBA000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CBB000 20 20 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CC0000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CC1000 160 28 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CE9000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CEA000 12 12 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CED000 4 4 - - 4K r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9CEE000 80 32 - - - r-x-- \nlibcrypto.so.1.1\nFFFFFAFFE9D11000 188 188 - - 4K rw--- \nlibcrypto.so.1.1\nFFFFFAFFE9D40000 4 - - - - rw--- \nlibcrypto.so.1.1\nFFFFFAFFE9D41000 8 8 - - 4K rw--- \nlibcrypto.so.1.1\nFFFFFAFFE9D50000 4 4 - - 4K r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D51000 68 28 - - - r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D62000 16 16 - - 4K r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D66000 12 12 - - - r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D69000 40 40 - - 4K r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D73000 4 4 - - - r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D74000 92 92 - - 4K r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D8B000 4 4 - - - r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D8C000 4 4 - - 4K r-x-- \nlibssl.so.1.1\nFFFFFAFFE9D8D000 336 12 - - - r-x-- \nlibssl.so.1.1\nFFFFFAFFE9DE1000 4 4 - - 4K r-x-- \nlibssl.so.1.1\nFFFFFAFFE9DE2000 44 32 - - - r-x-- \nlibssl.so.1.1\nFFFFFAFFE9DFC000 52 52 - - 4K rw--- \nlibssl.so.1.1\nFFFFFAFFE9E09000 4 - - - - rw--- \nlibssl.so.1.1\nFFFFFAFFE9E10000 4 4 - - 4K r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E11000 152 28 - - - r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E37000 28 28 - - 4K r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E3E000 16 12 - - - r-x-- 
\nlibxml2.so.2.9.12\nFFFFFAFFE9E42000 48 48 - - 4K r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E4E000 16 16 - - - r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E52000 40 40 - - 4K r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E5C000 8 8 - - - r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E5E000 116 116 - - 4K r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E7B000 12 12 - - - r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E7E000 4 4 - - 4K r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9E7F000 952 64 - - - r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9F6D000 4 4 - - 4K r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9F6E000 132 32 - - - r-x-- \nlibxml2.so.2.9.12\nFFFFFAFFE9F9E000 48 48 - - 4K rw--- \nlibxml2.so.2.9.12\nFFFFFAFFE9FAA000 4 - - - - rw--- \nlibxml2.so.2.9.12\nFFFFFAFFED1D0000 4 4 - - 4K r-x-- \nlibresolv.so.2\nFFFFFAFFED1D1000 44 44 - - - r-x-- \nlibresolv.so.2\nFFFFFAFFED1DC000 8 8 - - 4K r-x-- \nlibresolv.so.2\nFFFFFAFFED1DE000 16 16 - - - r-x-- \nlibresolv.so.2\nFFFFFAFFED1E2000 48 48 - - 4K r-x-- \nlibresolv.so.2\nFFFFFAFFED1EE000 232 12 - - - r-x-- \nlibresolv.so.2\nFFFFFAFFED228000 8 8 - - 4K r-x-- \nlibresolv.so.2\nFFFFFAFFED22A000 12 - - - - r-x-- \nlibresolv.so.2\nFFFFFAFFED23D000 8 8 - - 4K rw--- \nlibresolv.so.2\nFFFFFAFFED23F000 4 - - - - rw--- \nlibresolv.so.2\nFFFFFAFFED53D000 4 4 - - 4K r-x-- \nlibdl.so.1\nFFFFFAFFED53E000 8 8 - - 4K r-x-- \nlibrt.so.1\nFFFFFAFFEDB00000 16 16 - - 4K r-x-- \nlibpam.so.1\nFFFFFAFFEDB04000 12 12 - - - r-x-- \nlibpam.so.1\nFFFFFAFFEDB07000 4 4 - - 4K r-x-- \nlibpam.so.1\nFFFFFAFFEDB08000 8 8 - - - r-x-- \nlibpam.so.1\nFFFFFAFFEDB1A000 4 4 - - 4K rw--- \nlibpam.so.1\nFFFFFAFFEE18F000 4 4 - - 4K r-x-- \nlibdoor.so.1\nFFFFFAFFEE500000 12 12 - - 4K r-x-- \nlibmp.so.2\nFFFFFAFFEE503000 4 4 - - - r-x-- \nlibmp.so.2\nFFFFFAFFEE504000 4 4 - - 4K r-x-- \nlibmp.so.2\nFFFFFAFFEE515000 4 4 - - 4K rw--- \nlibmp.so.2\nFFFFFAFFEE520000 16 16 - - 4K r-x-- \nlibmd.so.1\nFFFFFAFFEE524000 44 44 - - - r-x-- \nlibmd.so.1\nFFFFFAFFEE52F000 8 8 - - 4K r-x-- \nlibmd.so.1\nFFFFFAFFEE541000 4 4 - - 4K rw--- 
\nlibmd.so.1\nFFFFFAFFEE750000 16 16 - - 4K r-x-- \nlibgen.so.1\nFFFFFAFFEE754000 12 12 - - - r-x-- \nlibgen.so.1\nFFFFFAFFEE757000 4 4 - - 4K r-x-- \nlibgen.so.1\nFFFFFAFFEE768000 4 4 - - 4K rw--- \nlibgen.so.1\nFFFFFAFFEE8D5000 4 4 - - 4K rw-s- \n.SHMDPostgreSQL.2755784308\nFFFFFAFFEE8D6000 1956 - - - - rw-s- \n.SHMDPostgreSQL.2755784308\nFFFFFAFFEEAC0000 4 4 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEAC1000 72 72 - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEAD3000 12 12 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEAD6000 16 16 - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEADA000 88 88 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEAF0000 32 - - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEAF8000 4 4 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEAF9000 4 - - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEAFA000 32 32 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB02000 8 4 - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEB04000 8 8 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB06000 48 - - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEB12000 4 4 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB13000 40 4 - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEB1D000 4 4 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB1E000 4 - - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEB1F000 4 4 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB20000 132 20 - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEB41000 4 4 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB42000 12 - - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEB45000 4 4 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB46000 8 - - - - r-x-- \nlibnsl.so.1\nFFFFFAFFEEB48000 12 12 - - 4K r-x-- \nlibnsl.so.1\nFFFFFAFFEEB5B000 12 12 - - 4K rw--- \nlibnsl.so.1\nFFFFFAFFEEB5E000 4 - - - - rw--- \nlibnsl.so.1\nFFFFFAFFEEB5F000 20 20 - - 4K rw--- \nlibnsl.so.1\nFFFFFAFFEEB64000 4 - - - - rw--- \nlibnsl.so.1\nFFFFFAFFEEB65000 4 4 - - 4K rw--- \nlibnsl.so.1\nFFFFFAFFEEC4D000 12 12 - - 4K r-x-- \nlibpthread.so.1\nFFFFFAFFEED30000 4 4 - - 4K r-x-- \nlibsocket.so.1\nFFFFFAFFEED31000 4 4 - - - r-x-- \nlibsocket.so.1\nFFFFFAFFEED32000 36 36 - - 4K r-x-- \nlibsocket.so.1\nFFFFFAFFEED3B000 20 20 - - - r-x-- 
\nlibsocket.so.1\nFFFFFAFFEED40000 8 8 - - 4K r-x-- \nlibsocket.so.1\nFFFFFAFFEED52000 4 4 - - 4K rw--- \nlibsocket.so.1\nFFFFFAFFEED70000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED71000 28 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED78000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED79000 28 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED90000 12 12 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED93000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED94000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED95000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED96000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED97000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED98000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED99000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED9A000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED9B000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED9C000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED9D000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEED9E000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEED9F000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEEDB0000 64 64 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEDD0000 4 4 - - 4K r-x-- \nlibm.so.2\nFFFFFAFFEEDD1000 24 24 - - - r-x-- \nlibm.so.2\nFFFFFAFFEEDD7000 12 12 - - 4K r-x-- \nlibm.so.2\nFFFFFAFFEEDDA000 12 12 - - - r-x-- \nlibm.so.2\nFFFFFAFFEEDDD000 44 44 - - 4K r-x-- \nlibm.so.2\nFFFFFAFFEEDE8000 168 16 - - - r-x-- \nlibm.so.2\nFFFFFAFFEEE12000 8 8 - - 4K r-x-- \nlibm.so.2\nFFFFFAFFEEE14000 104 16 - - - r-x-- \nlibm.so.2\nFFFFFAFFEEE3E000 4 4 - - 4K rw--- \nlibm.so.2\nFFFFFAFFEEE3F000 12 12 - - - rw--- \nlibm.so.2\nFFFFFAFFEEE42000 4 4 - - 4K rw--- \nlibm.so.2\nFFFFFAFFEEE60000 4 4 4 4 4K \nrwxsR [ ism shmid=0x8 ]\nFFFFFAFFEEE70000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEE80000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEE90000 64 64 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEEB0000 4 4 4 - 4K \nrwxs- [ anon ]\nFFFFFAFFEEEC0000 12 12 8 - 4K \nrwx-- [ anon ]\nFFFFFAFFEEEC3000 4 - - - - \nrwx-- [ anon ]\nFFFFFAFFEEEC4000 8 8 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEED0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEEE0000 4 4 - - 4K \nrw--- [ anon 
]\nFFFFFAFFEEEF0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEF00000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEF10000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEEF20000 4 4 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEF21000 168 168 - - - r-x-- \nlibc.so.1\nFFFFFAFFEEF4B000 32 32 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEF53000 112 112 - - - r-x-- \nlibc.so.1\nFFFFFAFFEEF6F000 120 120 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEF8D000 12 12 - - - r-x-- \nlibc.so.1\nFFFFFAFFEEF90000 40 40 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEF9A000 4 - - - - r-x-- \nlibc.so.1\nFFFFFAFFEEF9B000 76 76 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEFAE000 20 - - - - r-x-- \nlibc.so.1\nFFFFFAFFEEFB3000 36 36 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEFBC000 4 - - - - r-x-- \nlibc.so.1\nFFFFFAFFEEFBD000 40 40 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEFC7000 4 4 - - - r-x-- \nlibc.so.1\nFFFFFAFFEEFC8000 24 24 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEFCE000 12 8 - - - r-x-- \nlibc.so.1\nFFFFFAFFEEFD1000 32 32 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEFD9000 8 8 - - - r-x-- \nlibc.so.1\nFFFFFAFFEEFDB000 28 28 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEFE2000 28 16 - - - r-x-- \nlibc.so.1\nFFFFFAFFEEFE9000 80 80 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEEFFD000 72 56 - - - r-x-- \nlibc.so.1\nFFFFFAFFEF00F000 36 36 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEF018000 4 - - - - r-x-- \nlibc.so.1\nFFFFFAFFEF019000 20 20 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEF01E000 12 4 - - - r-x-- \nlibc.so.1\nFFFFFAFFEF021000 4 4 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEF022000 4 - - - - r-x-- \nlibc.so.1\nFFFFFAFFEF023000 120 120 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEF041000 20 4 - - - r-x-- \nlibc.so.1\nFFFFFAFFEF046000 16 16 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEF04A000 28 12 - - - r-x-- \nlibc.so.1\nFFFFFAFFEF051000 20 20 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEF056000 292 - - - - r-x-- \nlibc.so.1\nFFFFFAFFEF09F000 16 16 - - 4K r-x-- \nlibc.so.1\nFFFFFAFFEF0A3000 8 - - - - r-x-- \nlibc.so.1\nFFFFFAFFEF0B5000 48 48 8 - 4K rw--- \nlibc.so.1\nFFFFFAFFEF0C1000 4 4 - - 4K rw--- \nlibc.so.1\nFFFFFAFFEF0C2000 4 - - - - 
rw--- \nlibc.so.1\nFFFFFAFFEF0C3000 8 8 - - 4K rw--- \nlibc.so.1\nFFFFFAFFEF0D0000 484 484 - - 4K r-x-- \nlibumem.so.1\nFFFFFAFFEF159000 136 136 - - 4K rw--- \nlibumem.so.1\nFFFFFAFFEF17B000 4 - - - - rw--- \nlibumem.so.1\nFFFFFAFFEF17C000 48 48 - - 4K rw--- \nlibumem.so.1\nFFFFFAFFEF1A0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF1B0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF1C0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF1D0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF1E0000 4 4 - - 4K \nrw--- [ anon ]\nFFFFFAFFEF1F0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF200000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF210000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF220000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF230000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF240000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF250000 4 4 4 - 4K \nrwx-- [ anon ]\nFFFFFAFFEF260000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF270000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF280000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF290000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF2A0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF2B0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF2C0000 4 4 4 - 4K \nrwx-- [ anon ]\nFFFFFAFFEF2D0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF2E0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF2F0000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF300000 4 4 4 - 4K \nrwx-- [ anon ]\nFFFFFAFFEF310000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF320000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF330000 4 4 - - 4K r--s- \nld.config\nFFFFFAFFEF340000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF350000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF360000 4 4 - - 4K \nrw--- [ anon ]\nFFFFFAFFEF370000 4 4 - - 4K \nrw--- [ anon ]\nFFFFFAFFEF380000 4 4 - - 4K \nrwx-- [ anon ]\nFFFFFAFFEF390000 4 4 - - 4K r--s-\nFFFFFAFFEF396000 4 4 - - 4K r-x-- \nld.so.1\nFFFFFAFFEF397000 28 28 - - - r-x-- \nld.so.1\nFFFFFAFFEF39E000 4 4 - - 4K r-x-- \nld.so.1\nFFFFFAFFEF39F000 28 28 - - - r-x-- \nld.so.1\nFFFFFAFFEF3A6000 8 8 - - 4K r-x-- \nld.so.1\nFFFFFAFFEF3A8000 4 4 - - - r-x-- 
\nld.so.1\nFFFFFAFFEF3A9000 160 160 - - 4K r-x-- \nld.so.1\nFFFFFAFFEF3D1000 16 4 - - - r-x-- \nld.so.1\nFFFFFAFFEF3D5000 4 4 - - 4K r-x-- \nld.so.1\nFFFFFAFFEF3D6000 4 - - - - r-x-- \nld.so.1\nFFFFFAFFEF3D7000 56 56 - - 4K r-x-- \nld.so.1\nFFFFFAFFEF3E5000 8 - - - - r-x-- \nld.so.1\nFFFFFAFFEF3E7000 8 8 - - 4K r-x-- \nld.so.1\nFFFFFAFFEF3E9000 4 - - - - r-x-- \nld.so.1\nFFFFFAFFEF3FA000 12 12 8 - 4K rwx-- \nld.so.1\nFFFFFAFFEF3FD000 8 8 4 - 4K rwx-- \nld.so.1\nFFFFFAFFFFDF6000 40 40 24 - 4K \nrw--- [ stack ]\n---------------- ---------- ---------- ---------- ----------\n total Kb 12334612 602860 587164 4\n-bash-4.3$\n\n\n\n\n\n",
"msg_date": "Sun, 16 Jan 2022 13:02:32 +0800",
"msg_from": "DEVOPS_WwIT <devops@ww-it.cn>",
"msg_from_op": true,
"msg_subject": "Large Pages and Super Pages for PostgreSQL"
},
{
"msg_contents": "On Sun, Jan 16, 2022 at 6:03 PM DEVOPS_WwIT <devops@ww-it.cn> wrote:\n> Solaris and FreeBSD supports large/super pages, and can be used\n> automatically by applications.\n>\n> Seems Postgres can't use the large/super pages on Solaris and FreeBSD\n> os(I think can't use the large/super page HPUX and AIX), is there anyone\n> could take a look?\n\nHello,\n\nI can provide some clues and partial answers about page size on three\nof the OSes you mentioned:\n\n1. Solaris: I haven't used that OS for a long time, but I thought it\nwas supposed to promote memory to larger pages sizes transparently\nwith some heuristics. To control page size explicitly, it *looks*\nlike memcntl(2) with command MHA_MAPSIZE_VA could be used; that's what\nthe man page says, anyway. If someone is interested in writing a\npatch to do that, I'd be happy to review it and test it on illumos...\n\n2. AIX: We *nearly* made this work recently[1]. The summary is that\nAIX doesn't have a way to control the page size of anonymous shared\nmmap memory (our usual source of shared memory), so you have to use\nSystemV shared memory if you want non-default page size for shared\nmemory. We got as far as adding the option shared_memory_type=sysv,\nand the next step is pretty easy: just pass in some magic flags. This\njust needs someone with access and motivation to pick up that work...\n\n3. FreeBSD: FreeBSD does transparently migrate PostgreSQL memory to\n\"super\" pages quite well in my experience, but there is also a new\nfacility in FreeBSD 13 to ask for specific page sizes explicitly. I\nwrote a quick and dirty patch to enable PostgreSQL's huge_pages and\nhuge_page_size settings to work with that interface, but I haven't yet\ngot as far as testing it very hard or proposing it... but here it is,\nif you like experimental code[2].\n\nI don't know about HP-UX. 
I think it might be dead, Jim.\n\n[1] https://www.postgresql.org/message-id/flat/HE1PR0202MB28126DB4E0B6621CC6A1A91286D90%40HE1PR0202MB2812.eurprd02.prod.outlook.com\n[2] https://github.com/macdice/postgres/commit/a71aafe5582c2e61005af0d16ca82eed89445a67\n\n\n",
"msg_date": "Sun, 16 Jan 2022 20:32:17 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large Pages and Super Pages for PostgreSQL"
},
{
"msg_contents": "On Sun, Jan 16, 2022 at 8:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jan 16, 2022 at 6:03 PM DEVOPS_WwIT <devops@ww-it.cn> wrote:\n> > Solaris and FreeBSD supports large/super pages, and can be used\n> > automatically by applications.\n> >\n> > Seems Postgres can't use the large/super pages on Solaris and FreeBSD\n> > os(I think can't use the large/super page HPUX and AIX), is there anyone\n> > could take a look?\n>\n> 3. FreeBSD: FreeBSD does transparently migrate PostgreSQL memory to\n> \"super\" pages quite well in my experience, but there is also a new\n> facility in FreeBSD 13 to ask for specific page sizes explicitly. I\n> wrote a quick and dirty patch to enable PostgreSQL's huge_pages and\n> huge_page_size settings to work with that interface, but I haven't yet\n> got as far as testing it very hard or proposing it... but here it is,\n> if you like experimental code[2].\n\nI was reminded to rebase that and tidy it up a bit, by recent\ndiscussion of page table magic in other threads. Documentation of\nthese interfaces is sparse to put it mildly (I may try to improve that\nmyself) but basically the terminology is \"super\" for pages subject to\npromotion/demotion, and \"large\" when explicitly managed. Not\nproposing for commit right now as I need to learn more about all this\nand there are some policy decisions lurking in here (eg synchronous\ndefrag vs nowait depending on flags), but the patch may be useful for\nexperimentation. For example, it allows huge_page_size=1GB if your\nsystem can handle that.",
"msg_date": "Tue, 8 Nov 2022 11:59:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large Pages and Super Pages for PostgreSQL"
},
{
"msg_contents": "Hi Thomas\r\n\r\nThank you very much for the work.\r\n\r\nI just got latest FreeBSD 13.1 environment, and I'm going to test and \r\nverify it.\r\n\r\nso would you please rebase latest patch?\r\n\r\nbest wishes\r\n\r\nTony\r\n\r\nOn 2022/11/8 06:59, Thomas Munro wrote:\r\n> On Sun, Jan 16, 2022 at 8:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\r\n>> On Sun, Jan 16, 2022 at 6:03 PM DEVOPS_WwIT <devops@ww-it.cn> wrote:\r\n>>> Solaris and FreeBSD supports large/super pages, and can be used\r\n>>> automatically by applications.\r\n>>>\r\n>>> Seems Postgres can't use the large/super pages on Solaris and FreeBSD\r\n>>> os(I think can't use the large/super page HPUX and AIX), is there anyone\r\n>>> could take a look?\r\n>> 3. FreeBSD: FreeBSD does transparently migrate PostgreSQL memory to\r\n>> \"super\" pages quite well in my experience, but there is also a new\r\n>> facility in FreeBSD 13 to ask for specific page sizes explicitly. I\r\n>> wrote a quick and dirty patch to enable PostgreSQL's huge_pages and\r\n>> huge_page_size settings to work with that interface, but I haven't yet\r\n>> got as far as testing it very hard or proposing it... but here it is,\r\n>> if you like experimental code[2].\r\n> I was reminded to rebase that and tidy it up a bit, by recent\r\n> discussion of page table magic in other threads. Documentation of\r\n> these interfaces is sparse to put it mildly (I may try to improve that\r\n> myself) but basically the terminology is \"super\" for pages subject to\r\n> promotion/demotion, and \"large\" when explicitly managed. Not\r\n> proposing for commit right now as I need to learn more about all this\r\n> and there are some policy decisions lurking in here (eg synchronous\r\n> defrag vs nowait depending on flags), but the patch may be useful for\r\n> experimentation. For example, it allows huge_page_size=1GB if your\r\n> system can handle that.",
"msg_date": "Wed, 30 Nov 2022 15:29:14 +0800",
"msg_from": "ZHU XIAN WEN <tony.zhu@ww-it.cn>",
"msg_from_op": false,
"msg_subject": "Re: Large Pages and Super Pages for PostgreSQL"
}
]
]
[
{
"msg_contents": "Hi,\n\nI was wondering in [1] what we could do about the slowest tests on\nwindows.\n\nOn 2021-12-31 11:25:28 -0800, Andres Freund wrote:\n> Picking a random successful cfbot run [1] I see the following tap tests taking\n> more than 20 seconds:\n>\n> 67188 ms pg_basebackup t/010_pg_basebackup.pl\n> 59710 ms recovery t/001_stream_rep.pl\n\nComparing these times to measurements taken on my normal linux workstation,\nsomething seemed just *very* off, even with a slow CI instance and windows in\nthe mix.\n\nA bunch of printf debugging later, I realized the problem is that several of\nthe pg_basebackups in tests take a *long* time. E.g. for t/001_stream_rep.pl\nthe backups from the standby each take just over 10s. That's awfully\nspecific...\n\n# Taking pg_basebackup my_backup from node \"standby_1\"\n# Running: pg_basebackup -D C:/dev/postgres/./tmp_check/t_001_stream_rep_standby_1_data/backup/my_backup -h C:/Users/myadmin/AppData/Local/Temp/yba26PBYX1 -p 59181 --checkpoint fast --no-sync --label my_backup -v\n# ran in 10.145s\n# Backup finished\n\nThis reproduceably happens and it's *not* related to the socket shutdown()\nchanges we've been debugging lately - even after a revert the problem\npersists.\n\nBecause our logging for basebackups is quite weak, both for server and client\nside, I needed to add a fair bit more debugging to figure it out:\n\npg_basebackup: wait to finish at 0.492\npg_basebackup: waiting for background process to finish streaming ...\npg_basebackup: stream poll timeout 10.112\n\nThe problem is that there's just no implemented way to timely shutdown the WAL\nstreaming thread in pg_basebackup. 
The code in pg_basebackup.c says:\n\n if (verbose)\n pg_log_info(\"waiting for background process to finish streaming ...\");\n ...\n /*\n * On Windows, since we are in the same process, we can just store the\n * value directly in the variable, and then set the flag that says\n * it's there.\n */\n...\n\t\txlogendptr = ((uint64) hi) << 32 | lo;\n\t\tInterlockedIncrement(&has_xlogendptr);\n\nBut just setting a variable doesn't do much if the thread is in\nHandleCopyStream()->CopyStreamPoll()->select()\n\nThe only reason we ever succeed shutting down, without more WAL coming in, is\nthat pg_basebackup defaults to sending a status message every 10 seconds. At\nwhich point the thread sees has_xlogendptr = true, and shuts down.\n\n\nA test specific workaround would be to just add --status-interval=1 to\nCluster.pm::backup(). But that seems very unsatisfying.\n\nI don't immediately see a solution for this, other than to add\nStreamCtl->stop_event (mirroring ->stop_socket) and then convert\nCopyStreamPoll() to use WaitForMultipleObjects(). Microsoft's select()\ndoesn't support pipes and there's no socketpair().\n\nAny more straightforward ideas?\n\n\n From a cursory look at history, it used to be that pg_basebackup had this\nbehaviour on all platforms, but it got fixed for other platforms in\n7834d20b57a by Tom (assuming the problem wasn't present there).\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20211231192528.wirwj4qaaw3ted5g%40alap3.anarazel.de\n\n\n",
"msg_date": "Sun, 16 Jan 2022 01:22:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pg_basebackup WAL streamer shutdown is bogus - leading to slow tests"
},
{
"msg_contents": "On Sun, Jan 16, 2022 at 10:22 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I was wondering in [1] what we could do about the slowest tests on\n> windows.\n>\n> On 2021-12-31 11:25:28 -0800, Andres Freund wrote:\n> > Picking a random successful cfbot run [1] I see the following tap tests taking\n> > more than 20 seconds:\n> >\n> > 67188 ms pg_basebackup t/010_pg_basebackup.pl\n> > 59710 ms recovery t/001_stream_rep.pl\n>\n> Comparing these times to measurements taken on my normal linux workstation,\n> something seemed just *very* off, even with a slow CI instance and windows in\n> the mix.\n>\n> A bunch of printf debugging later, I realized the problem is that several of\n> the pg_basebackups in tests take a *long* time. E.g. for t/001_stream_rep.pl\n> the backups from the standby each take just over 10s. That's awfully\n> specific...\n>\n> # Taking pg_basebackup my_backup from node \"standby_1\"\n> # Running: pg_basebackup -D C:/dev/postgres/./tmp_check/t_001_stream_rep_standby_1_data/backup/my_backup -h C:/Users/myadmin/AppData/Local/Temp/yba26PBYX1 -p 59181 --checkpoint fast --no-sync --label my_backup -v\n> # ran in 10.145s\n> # Backup finished\n>\n> This reproduceably happens and it's *not* related to the socket shutdown()\n> changes we've been debugging lately - even after a revert the problem\n> persists.\n>\n> Because our logging for basebackups is quite weak, both for server and client\n> side, I needed to add a fair bit more debugging to figure it out:\n>\n> pg_basebackup: wait to finish at 0.492\n> pg_basebackup: waiting for background process to finish streaming ...\n> pg_basebackup: stream poll timeout 10.112\n>\n> The problem is that there's just no implemented way to timely shutdown the WAL\n> streaming thread in pg_basebackup. 
The code in pg_basebackup.c says:\n>\n> if (verbose)\n> pg_log_info(\"waiting for background process to finish streaming ...\");\n> ...\n> /*\n> * On Windows, since we are in the same process, we can just store the\n> * value directly in the variable, and then set the flag that says\n> * it's there.\n> */\n> ...\n> xlogendptr = ((uint64) hi) << 32 | lo;\n> InterlockedIncrement(&has_xlogendptr);\n>\n> But just setting a variable doesn't do much if the thread is in\n> HandleCopyStream()->CopyStreamPoll()->select()\n>\n> The only reason we ever succeed shutting down, without more WAL coming in, is\n> that pg_basebackup defaults to sending a status message every 10 seconds. At\n> which point the thread sees has_xlogendptr = true, and shuts down.\n>\n>\n> A test specific workaround would be to just add --status-interval=1 to\n> Cluster.pm::backup(). But that seems very unsatisfying.\n>\n> I don't immediately see a solution for this, other than to add\n> StreamCtl->stop_event (mirroring ->stop_socket) and then convert\n> CopyStreamPoll() to use WaitForMultipleObjects(). Microsoft's select()\n> doesn't support pipes and there's no socketpair().\n>\n> Any more straightforward ideas?\n>\n>\n> From a cursory look at history, it used to be that pg_basebackup had this\n> behaviour on all platforms, but it got fixed for other platforms in\n> 7834d20b57a by Tom (assuming the problem wasn't present there).\n\nUgh, yeah that sounds like a correct analysis to me, and ugh, yeah\nthat's not very nice.\n\nAnd yes, I think we have to create an event, and then use\nWSAEventSelect() + WaitForSingleObjectEx(). Should be enough to just\nuse one event I think, and then the timeout -- but it might be more\nreadable to have a separate event for the socket and the stop? 
But we\ncan have just one event that's both used to stop and then use\nWSAEventSelect() to associate it with the socket as well as needed.\n\n(And yes, I agree that it's a lot better to fix it properly than to\njust reduce the timeout)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 16 Jan 2022 12:35:56 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't immediately see a solution for this, other than to add\n> StreamCtl->stop_event (mirroring ->stop_socket) and then convert\n> CopyStreamPoll() to use WaitForMultipleObjects(). Microsoft's select()\n> doesn't support pipes and there's no socketpair().\n> Any more straightforward ideas?\n> From a cursory look at history, it used to be that pg_basebackup had this\n> behaviour on all platforms, but it got fixed for other platforms in\n> 7834d20b57a by Tom (assuming the problem wasn't present there).\n\nHmm --- I see that I thought Windows was unaffected, but I didn't\nconsider this angle.\n\nCan we send the child process a signal to kick it off its wait?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jan 2022 11:34:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "On Sun, Jan 16, 2022 at 5:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > I don't immediately see a solution for this, other than to add\n> > StreamCtl->stop_event (mirroring ->stop_socket) and then convert\n> > CopyStreamPoll() to use WaitForMultipleObjects(). Microsoft's select()\n> > doesn't support pipes and there's no socketpair().\n> > Any more straightforward ideas?\n> > From a cursory look at history, it used to be that pg_basebackup had this\n> > behaviour on all platforms, but it got fixed for other platforms in\n> > 7834d20b57a by Tom (assuming the problem wasn't present there).\n>\n> Hmm --- I see that I thought Windows was unaffected, but I didn't\n> consider this angle.\n>\n> Can we send the child process a signal to kick it off its wait?\n\nNo. (1) on Windows it's not a child process, it's a thread. And (2)\nWindows doesn't have signals. We emulate those *in the backend* for\nwin32, but this problem is in the frontend where that emulation layer\ndoesn't exist.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 16 Jan 2022 17:36:13 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "On Sun, Jan 16, 2022 at 5:36 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sun, Jan 16, 2022 at 5:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I don't immediately see a solution for this, other than to add\n> > > StreamCtl->stop_event (mirroring ->stop_socket) and then convert\n> > > CopyStreamPoll() to use WaitForMultipleObjects(). Microsoft's select()\n> > > doesn't support pipes and there's no socketpair().\n> > > Any more straightforward ideas?\n> > > From a cursory look at history, it used to be that pg_basebackup had this\n> > > behaviour on all platforms, but it got fixed for other platforms in\n> > > 7834d20b57a by Tom (assuming the problem wasn't present there).\n> >\n> > Hmm --- I see that I thought Windows was unaffected, but I didn't\n> > consider this angle.\n> >\n> > Can we send the child process a signal to kick it off its wait?\n>\n> No. (1) on Windows it's not a child process, it's a thread. And (2)\n> Windows doesn't have signals. We emulate those *in the backend* for\n> win32, but this problem is in the frontend where that emulation layer\n> doesn't exist.\n\nActually, just after sending that...\n\nWhat we could do is do a WSACancelBlockingCall() which will cancel the\nselect() thereby making us do the check. However, per the docs\n(https://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-wsacancelblockingcall)\nthis function is no longer exported in Winsock 2, so this does not\nseem to be the right way forward. There is no replacement function for\nit -- the suggestion is basically \"don't do that, use multithreading\ninstaed\" which I think brings us back to the original suggestion of\nWSAEventSelect().\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 16 Jan 2022 17:39:11 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-16 17:39:11 +0100, Magnus Hagander wrote:\n> On Sun, Jan 16, 2022 at 5:36 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Sun, Jan 16, 2022 at 5:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > I don't immediately see a solution for this, other than to add\n> > > > StreamCtl->stop_event (mirroring ->stop_socket) and then convert\n> > > > CopyStreamPoll() to use WaitForMultipleObjects(). Microsoft's select()\n> > > > doesn't support pipes and there's no socketpair().\n> > > > Any more straightforward ideas?\n> > > > From a cursory look at history, it used to be that pg_basebackup had this\n> > > > behaviour on all platforms, but it got fixed for other platforms in\n> > > > 7834d20b57a by Tom (assuming the problem wasn't present there).\n> > >\n> > > Hmm --- I see that I thought Windows was unaffected, but I didn't\n> > > consider this angle.\n> > >\n> > > Can we send the child process a signal to kick it off its wait?\n> >\n> > No. (1) on Windows it's not a child process, it's a thread. And (2)\n> > Windows doesn't have signals. We emulate those *in the backend* for\n> > win32, but this problem is in the frontend where that emulation layer\n> > doesn't exist.\n>\n> [...] which I think brings us back to the original suggestion of\n> WSAEventSelect().\n\nI hacked that up last night. And a fix or two later, it seems to be\nworking. What I'd missed at first is that the event needs to be reset in\nreached_end_position(), otherwise we'll busy loop.\n\nI wonder if using a short-lived event handle would have dangers of missing\nFD_CLOSE here as well? It'd probably be worth avoiding the risk by creating\nthe event just once.\n\nI just wasn't immediately sure where to stash it. Probably just by adding a\nfield in StreamCtl, that ReceiveXlogStream() then sets? 
So far it's constant\nonce passed to ReceiveXlogStream(), but I don't really see a reason why it'd\nneed to stay that way?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 16 Jan 2022 15:28:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-16 15:28:00 -0800, Andres Freund wrote:\n> I hacked that up last night. And a fix or two later, it seems to be\n> working. What I'd missed at first is that the event needs to be reset in\n> reached_end_position(), otherwise we'll busy loop.\n> \n> I wonder if using a short-lived event handle would have dangers of missing\n> FD_CLOSE here as well? It'd probably be worth avoiding the risk by creating\n> the event just once.\n> \n> I just wasn't immediately sure where to stash it. Probably just by adding a\n> field in StreamCtl, that ReceiveXlogStream() then sets? So far it's constant\n> once passed to ReceiveXlogStream(), but I don't really see a reason why it'd\n> need to stay that way?\n\nOops, attached the patch this time.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 16 Jan 2022 15:31:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 12:31 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-16 15:28:00 -0800, Andres Freund wrote:\n> > I hacked that up last night. And a fix or two later, it seems to be\n> > working. What I'd missed at first is that the event needs to be reset in\n> > reached_end_position(), otherwise we'll busy loop.\n\nYou can create the event with bManualReset set to False to avoid that,\nno? With this usecase, I don't really see a reason not to do that\ninstead?\n\n\n> > I wonder if using a short-lived event handle would have dangers of missing\n> > FD_CLOSE here as well? It'd probably be worth avoiding the risk by creating\n> > the event just once.\n> >\n> > I just wasn't immediately sure where to stash it. Probably just by adding a\n> > field in StreamCtl, that ReceiveXlogStream() then sets? So far it's constant\n> > once passed to ReceiveXlogStream(), but I don't really see a reason why it'd\n> > need to stay that way?\n>\n> Oops, attached the patch this time.\n\nDo we really want to create a new event every time? Those are kernel\nobjects, so they're not entirely free, but that part maybe doesn't\nmatter. Wouldn't it be cleaner to do it like we do in\npgwin32_waitforsinglesocket() which is create it once and store it in\na static variable? Or is that what you're suggesting above in the \"I\nwonder if\" part?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 17 Jan 2022 14:50:27 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-17 14:50:27 +0100, Magnus Hagander wrote:\n> On Mon, Jan 17, 2022 at 12:31 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-01-16 15:28:00 -0800, Andres Freund wrote:\n> > > I hacked that up last night. And a fix or two later, it seems to be\n> > > working. What I'd missed at first is that the event needs to be reset in\n> > > reached_end_position(), otherwise we'll busy loop.\n> \n> You can create the event with bManualReset set to False to avoid that,\n> no? With this usecase, I don't really see a reason not to do that\n> instead?\n\nThe problem I'm referring to is that some types of events are edge\ntriggered. Which we've been painfully reminded of recently:\nhttps://www.postgresql.org/message-id/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n\nIt appears there's no guarantee that you'll see e.g. FD_CLOSE if you use\nshort-lived events (the FD_CLOSE is recorded internally but not signalled\nimmediately if there's still readable data, and the internal record is reset\nby WSAEventSelect()).\n\n\n> > > I wonder if using a short-lived event handle would have dangers of missing\n> > > FD_CLOSE here as well? It'd probably be worth avoiding the risk by creating\n> > > the event just once.\n> > >\n> > > I just wasn't immediately sure where to stash it. Probably just by adding a\n> > > field in StreamCtl, that ReceiveXlogStream() then sets? So far it's constant\n> > > once passed to ReceiveXlogStream(), but I don't really see a reason why it'd\n> > > need to stay that way?\n> >\n> > Oops, attached the patch this time.\n> \n> Do we really want to create a new event every time? Those are kernel\n> objects, so they're not entirely free, but that part maybe doesn't\n> matter. Wouldn't it be cleaner to do it like we do in\n> pgwin32_waitforsinglesocket() which is create it once and store it in\n> a static variable? 
Or is that what you're suggesting above in the \"I\n> wonder if\" part?\n\nYes, that's what I was suggesting. I wasn't thinking of using a static var,\nbut putting it in StreamCtl. Note that what pgwin32_waitforsinglesocket()\nis doing doesn't protect against the problem referenced above, because it\nstill is reset by WSAEventSelect.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jan 2022 10:06:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "On 2022-01-17 10:06:56 -0800, Andres Freund wrote:\n> Yes, that's what I was suggesting. I wasn't thinking of using a static var,\n> but putting it in StreamCtl. Note that what pgwin32_waitforsinglesocket()\n> is doing doesn't protect against the problem referenced above, because it\n> still is reset by WSAEventSelect.\n\nDo we care about breaking StreamCtl ABI? I don't think so?\n\n\n",
"msg_date": "Sat, 29 Jan 2022 12:44:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-29 12:44:22 -0800, Andres Freund wrote:\n> On 2022-01-17 10:06:56 -0800, Andres Freund wrote:\n> > Yes, that's what I was suggesting. I wasn't thinking of using a static var,\n> > but putting it in StreamCtl. Note that what pgwin32_waitforsinglesocket()\n> > is doing doesn't protect against the problem referenced above, because it\n> > still is reset by WSAEventSelect.\n> \n> Do we are about breaking StreamCtl ABI? I don't think so?\n\nHere's a version of the patch only creating the event once. Needs a small bit\nof comment polishing, but otherwise I think it's sane?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 29 Jan 2022 13:47:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "On Sat, Jan 29, 2022 at 9:44 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-01-17 10:06:56 -0800, Andres Freund wrote:\n> > Yes, that's what I was suggesting. I wasn't thinking of using a static var,\n> > but putting it in StreamCtl. Note that what pgwin32_waitforsinglesocket()\n> > is doing doesn't protect against the problem referenced above, because it\n> > still is reset by WSAEventSelect.\n>\n> Do we are about breaking StreamCtl ABI? I don't think so?\n\nI would say no. It's an internal API and it's not like pg_basebackup\ncan load plugins.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 30 Jan 2022 16:45:45 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "On Sat, Jan 29, 2022 at 10:47 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-29 12:44:22 -0800, Andres Freund wrote:\n> > On 2022-01-17 10:06:56 -0800, Andres Freund wrote:\n> > > Yes, that's what I was suggesting. I wasn't thinking of using a static var,\n> > > but putting it in StreamCtl. Note that what pgwin32_waitforsinglesocket()\n> > > is doing doesn't protect against the problem referenced above, because it\n> > > still is reset by WSAEventSelect.\n> >\n> > Do we are about breaking StreamCtl ABI? I don't think so?\n>\n> Here's a version of the patch only creating the event once. Needs a small bit\n> of comment polishing, but otherwise I think it's sane?\n\nLGTM in general, yes.\n\nI'm wondering about the part that does:\n+ events[0] = stream->net_event;\n+ nevents++;\n+\n+ if (stream->stop_event != NULL)\n+ {\n+ events[1] = stream->stop_event;\n+ nevents++;\n+ }\n+\n\nUsing a combination of nevents but hardcoded indexes does work -- but\nonly as long as there is only one optional entry. Should they perhaps\nbe written\n+ events[nevents++] = stream->net_event;\n\ninstead, for future proofing? But then you'd also have to change the\nif() statement on the return side I guess.\n\nCan of course also be changed at such a point where a third event\nmight be added. Not important, but it poked me in the eye when I was\nreading it.\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 30 Jan 2022 16:51:12 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-30 16:51:12 +0100, Magnus Hagander wrote:\n> On Sat, Jan 29, 2022 at 10:47 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-01-29 12:44:22 -0800, Andres Freund wrote:\n> > > On 2022-01-17 10:06:56 -0800, Andres Freund wrote:\n> > > > Yes, that's what I was suggesting. I wasn't thinking of using a static var,\n> > > > but putting it in StreamCtl. Note that what pgwin32_waitforsinglesocket()\n> > > > is doing doesn't protect against the problem referenced above, because it\n> > > > still is reset by WSAEventSelect.\n> > >\n> > > Do we are about breaking StreamCtl ABI? I don't think so?\n> >\n> > Here's a version of the patch only creating the event once. Needs a small bit\n> > of comment polishing, but otherwise I think it's sane?\n> \n> LGTM in general, yes.\n\nThanks for checking.\n\n\n> I'm wondering about the part that does:\n> + events[0] = stream->net_event;\n> + nevents++;\n> +\n> + if (stream->stop_event != NULL)\n> + {\n> + events[1] = stream->stop_event;\n> + nevents++;\n> + }\n> +\n> \n> Using a combination of nevents but hardcoded indexes does work -- but\n> only as long as there is only one optional entry. Should they perhaps\n> be written\n> + events[nevents++] = stream->net_event;\n> \n> instead, for future proofing? But then you'd also have to change the\n> if() statement on the return side I guess.\n\nI did wonder about it, but the index checks get sufficiently more complicated\nthat it didn't quite seem worth it. It didn't seem that likely these would get\na third event to check...\n\nI think we're going to have to generalize something like our wait events to be\nfrontend usable at some point. The proportion and complexity of frontend code\nis increasing...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 30 Jan 2022 14:44:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-29 13:47:13 -0800, Andres Freund wrote:\n> Here's a version of the patch only creating the event once. Needs a small bit\n> of comment polishing, but otherwise I think it's sane?\n\nAh, it needs a bit more. I was not cleaning up the event at the exit of\nReceiveXlogStream(). For pg_basebackup that perhaps wouldn't matter, but\npg_receivewal loops...\n\nWe don't have a good spot for cleaning up right now. ReceiveXlogStream() has\nplenty returns. The attached changes those to a goto done; but pretty it is\nnot. But probably still the best way for the backbranches?\n\nI think the receivelog.c interface probably could do with a bit of\ncleanup... The control flow is quite complicated, with repeated checks all\nover etc :(. And the whole thing with giving the appearance of being\ninstantiatable multiple times, but then using global variables for state, is\n...\n\nAttached a revised version.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 30 Jan 2022 16:41:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup WAL streamer shutdown is bogus - leading to slow\n tests"
}
] |
[
{
"msg_contents": "The current hash_mem_multiplier default is 1.0, which is a fairly\nconservative default: it preserves the historic behavior, which is\nthat hash-based executor nodes receive the same work_mem budget as\nsort-based nodes. I propose that the default be increased to 2.0 for\nPostgres 15.\n\nArguments in favor of artificially favoring hash-based nodes like this\nwere made when hash_mem_multiplier went in. The short version goes like\nthis:\n\nThe relationship between memory availability and overall\nperformance/throughput has very significant differences when we\ncompare sort-based nodes with hash-based nodes. It's hard to make\nreliable generalizations about how the performance/throughput of\nhash-based nodes will be affected as memory is subtracted, even if we\noptimistically assume that requirements are fairly fixed. Data\ncardinality tends to make the picture complicated, just for starters.\nBut overall, as a general rule, more memory tends to make everything\ngo faster.\n\nOn the other hand, sort-based nodes (e.g., GroupAggregate) have very\npredictable performance characteristics, and the possible upside of\nallowing a sort node to use more memory is quite bounded. There is a\nrelatively large drop-off when we go from not being able to fit\neverything in memory to needing to do an external sort. But even that\ndrop-off isn't very big -- not in absolute terms. More importantly,\nthere is hardly any impact as we continue to subtract memory (or add\nmore data). We'll still be able to do a single pass external sort with\nonly a small fraction of the memory needed to sort everything in\nmemory, which (perhaps surprisingly) is mostly all that matters.\n\nThe choice of 2.0 is still pretty conservative. I'm not concerned\nabout making hash nodes go faster (or used more frequently) -- at\nleast not primarily. I'm more worried about avoiding occasional OOMs\nfrom sort nodes that use much more memory than could ever really make\nsense. 
It's easy to demonstrate that making more memory available to\nan external sort makes just about no difference, until you give it all\nthe memory it can make use of. This effect is reliable (data\ncardinality won't matter, for example). And so the improvement that is\npossible from giving a sort more memory is far smaller than (say) the\nimprovement in performance we typically see when the optimizer\nswitches from a hash aggregate to a group aggregate.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 16 Jan 2022 16:28:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Time to increase hash_mem_multiplier default?"
},
{
"msg_contents": "On Sun, Jan 16, 2022 at 7:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> The current hash_mem_multiplier default is 1.0, which is a fairly\n> conservative default: it preserves the historic behavior, which is\n> that hash-based executor nodes receive the same work_mem budget as\n> sort-based nodes. I propose that the default be increased to 2.0 for\n> Postgres 15.\n\nI don't have anything really profound to say here, but in the last\nyear I did on a couple occasions recommend clients to raise\nhash_mem_multiplier to 2.0 to fix performance problems.\n\nDuring this cycle, we also got a small speedup in the external sorting\ncode. Also, if the \"generation context\" idea gets traction, that might\nbe another reason to consider differentiating the mem settings.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Jan 2022 14:31:51 -0500",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Time to increase hash_mem_multiplier default?"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 11:32 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I don't have anything really profound to say here, but in the last\n> year I did on a couple occasions recommend clients to raise\n> hash_mem_multiplier to 2.0 to fix performance problems.\n\nI would like to push ahead with an increase in the default for\nPostgres 15, to 2.0.\n\nAny objections to that plan?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Feb 2022 22:32:43 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Time to increase hash_mem_multiplier default?"
},
{
"msg_contents": "On Mon, Feb 14, 2022 at 10:32:43PM -0800, Peter Geoghegan wrote:\n> On Wed, Jan 19, 2022 at 11:32 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > I don't have anything really profound to say here, but in the last\n> > year I did on a couple occasions recommend clients to raise\n> > hash_mem_multiplier to 2.0 to fix performance problems.\n> \n> I would like to push ahead with an increase in the default for\n> Postgres 15, to 2.0.\n> \n> Any objections to that plan?\n\nThe only reason not to is that a single-node hash-aggregate plan will now use\n2x work_mem. Which won't make sense to someone who doesn't deal with\ncomplicated plans (and who doesn't know that work_mem is per-node and can be\nused multiplicatively). I don't see how one could address that other than to\nchange hash_mem_multiplier to nonhash_mem_divider.\n\nIt'll be in the release notes, so should be fine.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 15 Feb 2022 10:17:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Time to increase hash_mem_multiplier default?"
},
{
"msg_contents": "On Tue, Feb 15, 2022 at 8:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> The only reason not to is that a single-node hash-aggregate plan will now use\n> 2x work_mem. Which won't make sense to someone who doesn't deal with\n> complicated plans (and who doesn't know that work_mem is per-node and can be\n> used multiplicitively).\n\nHearing no objections, I pushed a commit to increase the default to 2.0.\n\nThanks\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 16 Feb 2022 18:42:45 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Time to increase hash_mem_multiplier default?"
}
] |
[
{
"msg_contents": "Hello,\n\nI encountered a minor road bump when checking out the pg source today. The Makefile's all target includes the following help message if GNUmakefile isn't available: \n\n echo \"You need to run the 'configure' program first. See the file\"; \\\n echo \"'INSTALL' for installation instructions.\" ; \\\n\nAfter consulting README.git, it looks as though INSTALL isn't created unless the source is bundled into a release or snapshot tarball. I'm happy to submit a patch to update the wording, but wanted to check on the preferred approach.\n\nPerhaps this would be sufficient?\n\n echo \"You need to run the 'configure' program first. See the file\"; \\\n echo \"'INSTALL' for installation instructions, or visit\" ; \\\n echo \"<https://www.postgresql.org/docs/devel/installation.html>\" ; \\\n\n-Tim\n\n\n",
"msg_date": "Mon, 17 Jan 2022 14:11:59 +1300",
"msg_from": "\"Tim McNamara\" <tim@mcnamara.nz>",
"msg_from_op": true,
"msg_subject": "New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "> On 17 Jan 2022, at 02:11, Tim McNamara <tim@mcnamara.nz> wrote:\n> \n> Hello,\n> \n> I encountered a minor road bump when checking out the pg source today. The Makefile's all target includes the following help message if GNUmakefile isn't available: \n> \n> echo \"You need to run the 'configure' program first. See the file\"; \\\n> echo \"'INSTALL' for installation instructions.\" ; \\\n> \n> After consulting README.git, it looks as though INSTALL isn't created unless the source is bundled into a release or snapshot tarball. I'm happy to submit a patch to update the wording, but wanted to check on the preferred approach.\n> \n> Perhaps this would be sufficient?\n> \n> echo \"You need to run the 'configure' program first. See the file\"; \\\n> echo \"'INSTALL' for installation instructions, or visit\" ; \\\n> echo \"<https://www.postgresql.org/docs/devel/installation.html>\" ; \\\n\nThat's a good point, and one few developers are likely to spot so thanks for\nraising the issue. To avoid replicating the wording we can do something like\nthe attached as well.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Mon, 17 Jan 2022 11:17:05 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 11:17 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 17 Jan 2022, at 02:11, Tim McNamara <tim@mcnamara.nz> wrote:\n> >\n> > Hello,\n> >\n> > I encountered a minor road bump when checking out the pg source today. The Makefile's all target includes the following help message if GNUmakefile isn't available:\n> >\n> > echo \"You need to run the 'configure' program first. See the file\"; \\\n> > echo \"'INSTALL' for installation instructions.\" ; \\\n> >\n> > After consulting README.git, it looks as though INSTALL isn't created unless the source is bundled into a release or snapshot tarball. I'm happy to submit a patch to update the wording, but wanted to check on the preferred approach.\n> >\n> > Perhaps this would be sufficient?\n> >\n> > echo \"You need to run the 'configure' program first. See the file\"; \\\n> > echo \"'INSTALL' for installation instructions, or visit\" ; \\\n> > echo \"<https://www.postgresql.org/docs/devel/installation.html>\" ; \\\n>\n> That's a good point, and one few developers are likely to spot so thanks for\n> raising the issue. To avoid replicating the wording we can do something like\n> the attached as well.\n\nNitpick: It reads very strange to do the existence check negative.\nSince you're filling a value in both the positive and negative branch\nit seems more logical with an \"if -f\" and then reversed order of the\nbranches.\n\nThat said, I'm not sure we're actually gaining anything by *not*\nreferring to the website as well. TBH, I bet the majority of users\nwill actually prefer to read them there. So I'd suggest always\nincluding the reference to the website as well, per the suggestion\nfrom Tim.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 17 Jan 2022 11:25:06 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "> On 17 Jan 2022, at 11:25, Magnus Hagander <magnus@hagander.net> wrote:\n\n> That said, I'm not sure we're actually gaining anything by *not*\n> referring to the website as well. TBH, I bet the majority of users\n> will actually prefer to read them there. So I'd suggest always\n> including the reference to the website as well, per the suggestion\n> from Tim.\n\nFair point, I'll go ahead and do that in a bit unless anyone objects.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 17 Jan 2022 13:26:33 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "> On 17 Jan 2022, at 13:26, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 17 Jan 2022, at 11:25, Magnus Hagander <magnus@hagander.net> wrote:\n> \n>> That said, I'm not sure we're actually gaining anything by *not*\n>> referring to the website as well. TBH, I bet the majority of users\n>> will actually prefer to read them there. So I'd suggest always\n>> including the reference to the website as well, per the suggestion\n>> from Tim.\n> \n> Fair point, I'll go ahead and do that in a bit unless anyone objects.\n\nI plan on applying the attached which address the feedback given.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 18 Jan 2022 16:51:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "\nOn 18.01.22 16:51, Daniel Gustafsson wrote:\n>> On 17 Jan 2022, at 13:26, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>>> On 17 Jan 2022, at 11:25, Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>>> That said, I'm not sure we're actually gaining anything by *not*\n>>> referring to the website as well. TBH, I bet the majority of users\n>>> will actually prefer to read them there. So I'd suggest always\n>>> including the reference to the website as well, per the suggestion\n>>> from Tim.\n>>\n>> Fair point, I'll go ahead and do that in a bit unless anyone objects.\n> \n> I plan on applying the attached which address the feedback given.\n\nThe indentation of the two INSTRUCTIONS= lines uses a different mix of \ntabs and spaces, so it looks a bit weird depending on how you view it.\n\nIt's also a bit strange that the single quotes are part of the value of \n$INSTRUCTIONS rather than part of the fixed text.\n\nThe URL links to the \"devel\" version of the installation instructions, \nwhich will not remain appropriate after release. I don't know how to \nfix that without creating an additional maintenance point. Since \nREADME.git already contains that link, I would leave off the web site \nbusiness and just make the change of the dynamically chosen file name.\n\n\n",
"msg_date": "Tue, 18 Jan 2022 17:05:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I plan on applying the attached which address the feedback given.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 11:06:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "> On 18 Jan 2022, at 17:05, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> On 18.01.22 16:51, Daniel Gustafsson wrote:\n\n>> I plan on applying the attached which address the feedback given.\n> \n> The indentation of the two INSTRUCTIONS= lines uses a different mix of tabs and spaces, so it looks a bit weird depending on how you view it.\n> \n> It's also a bit strange that the single quotes are part of the value of $INSTRUCTIONS rather than part of the fixed text.\n\nFixed both of these, thanks!\n\n> The URL links to the \"devel\" version of the installation instructions, which will not remain appropriate after release. I don't know how to fix that without creating an additional maintenance point. Since README.git already contains that link, I would leave off the web site business and just make the change of the dynamically chosen file name.\n\nI ended up pushing this with the URL in place as there IMO was consensus in the\nthread for including it. We could if we want to update it to point to v15 docs\nonce we branch off, but anything more than that is probably in the diminishing\nreturns territory in terms of effort involved.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 19 Jan 2022 14:54:14 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I ended up pushing this with the URL in place as there IMO was consensus in the\n> thread for including it. We could if we want to update it to point to v15 docs\n> once we branch off, but anything more than that is probably in the diminishing\n> returns territory in terms of effort involved.\n\nI think pointing at devel is fine, since this text is mostly aimed\nat developers or would-be developers anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jan 2022 09:57:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 4:58 PM Tim McNamara <tim@mcnamara.nz> wrote:\n\n> Hello,\n>\n> I encountered a minor road bump when checking out the pg source today. The\n> Makefile's all target includes the following help message if GNUmakefile\n> isn't available:\n>\n> echo \"You need to run the 'configure' program first. See the file\"; \\\n> echo \"'INSTALL' for installation instructions.\" ; \\\n>\n> After consulting README.git, it looks as though INSTALL isn't created\n> unless the source is bundled into a release or snapshot tarball. I'm happy\n> to submit a patch to update the wording, but wanted to check on the\n> preferred approach.\n>\n> Perhaps this would be sufficient?\n>\n> echo \"You need to run the 'configure' program first. See the file\"; \\\n> echo \"'INSTALL' for installation instructions, or visit\" ; \\\n> echo \"<https://www.postgresql.org/docs/devel/installation.html>\" ; \\\n>\n> -Tim\n>\n\nI noticed a similar thing in the README of the github repository. It asks\nto see the INSTALL file for build and installation instructions but I\ncouldn't find that file and that confused me. This might confuse other new\ndevelopers as well. So, maybe we should update the text in the README too?\n\nRegards,\nSamay",
"msg_date": "Thu, 20 Jan 2022 16:29:34 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 1:29 AM samay sharma <smilingsamay@gmail.com> wrote:\n>\n>\n>\n> On Wed, Jan 19, 2022 at 4:58 PM Tim McNamara <tim@mcnamara.nz> wrote:\n>>\n>> Hello,\n>>\n>> I encountered a minor road bump when checking out the pg source today. The Makefile's all target includes the following help message if GNUmakefile isn't available:\n>>\n>> echo \"You need to run the 'configure' program first. See the file\"; \\\n>> echo \"'INSTALL' for installation instructions.\" ; \\\n>>\n>> After consulting README.git, it looks as though INSTALL isn't created unless the source is bundled into a release or snapshot tarball. I'm happy to submit a patch to update the wording, but wanted to check on the preferred approach.\n>>\n>> Perhaps this would be sufficient?\n>>\n>> echo \"You need to run the 'configure' program first. See the file\"; \\\n>> echo \"'INSTALL' for installation instructions, or visit\" ; \\\n>> echo \"<https://www.postgresql.org/docs/devel/installation.html>\" ; \\\n>>\n>> -Tim\n>\n>\n> I noticed a similar thing in the README of the github repository. It asks to see the INSTALL file for build and installation instructions but I couldn't find that file and that confused me. This might confuse other new developers as well. So, maybe we should update the text in the README too?\n\nThere is README.git explaining this. README itself is meant to be used\nfor distributed source code. You can generate INSTALL locally for\nexample by running make dist (INSTALL will be present in\npostgresql-15devel directory).\n\nAnyway I do agree this is confusing. Maybe we can actually rename\nREADME.git to README and current README to README.dist or similar.\nREADME.dist can be copied to distribution package as README during\nMakefile magic.\n\nI can try to provide a patch if welcomed.\n\n> Regards,\n> Samay\n\n\n",
"msg_date": "Fri, 21 Jan 2022 09:09:17 +0100",
"msg_from": "Josef Šimánek <josef.simanek@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Josef Šimánek <josef.simanek@gmail.com> writes:\n> There is README.git explaining this. README itself is meant to be used\n> for distributed source code. You can generate INSTALL locally for\n> example by running make dist (INSTALL will be present in\n> postgresql-15devel directory).\n\n> Anyway I do agree this is confusing. Maybe we can actually rename\n> README.git to README and current README to README.dist or similar.\n> README.dist can be copied to distribution package as README during\n> Makefile magic.\n\nIIRC, we discussed that when README.git was invented, and concluded\nthat it would just create different sorts of confusion. I might\nbe biased, as the person who is generally checking created tarballs\nfor sanity, but I really do not want any situation where a file\nappearing in the tarball is different from the same-named file in\nthe git tree.\n\nPerhaps it could be sane to not have *any* file named README in\nthe git tree, only README.git and README.dist, with the tarball\npreparation process copying README.dist to README. However,\nif I'm understanding what github does, that would leave us with\nno automatically-displayed documentation for the github repo.\nSo I'm not sure that helps with samay's concern.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 10:31:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 4:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Josef Šimánek <josef.simanek@gmail.com> writes:\n> > There is README.git explaining this. README itself is meant to be used\n> > for distributed source code. You can generate INSTALL locally for\n> > example by running make dist (INSTALL will be present in\n> > postgresql-15devel directory).\n>\n> > Anyway I do agree this is confusing. Maybe we can actually rename\n> > README.git to README and current README to README.dist or similar.\n> > README.dist can be copied to distribution package as README during\n> > Makefile magic.\n>\n> IIRC, we discussed that when README.git was invented, and concluded\n> that it would just create different sorts of confusion. I might\n> be biased, as the person who is generally checking created tarballs\n> for sanity, but I really do not want any situation where a file\n> appearing in the tarball is different from the same-named file in\n> the git tree.\n>\n> Perhaps it could be sane to not have *any* file named README in\n> the git tree, only README.git and README.dist, with the tarball\n> preparation process copying README.dist to README. However,\n> if I'm understanding what github does, that would leave us with\n> no automatically-displayed documentation for the github repo.\n> So I'm not sure that helps with samay's concern.\n\nEspecially for GitHub use-case it is possible to add separate readme\ninto .github directory. But the problem with local clone will not be\nsolved anyway.\n\nFrom GitHub docs:\n\n\"If you put your README file in your repository's root, docs, or\nhidden .github directory, GitHub will recognize and automatically\nsurface your README to repository visitors.\"\n\nAnother solution would be to merge both README files together and make\nseparate section for development/git based codebase.\n\n> regards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 16:38:09 +0100",
"msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> Another solution would be to merge both README files together and make\n> separate section for development/git based codebase.\n\nThere's a lot to be said for that approach: make it simpler, not\nmore complicated.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 11:39:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 11:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> > Another solution would be to merge both README files together and make\n> > separate section for development/git based codebase.\n>\n> There's a lot to be said for that approach: make it simpler, not\n> more complicated.\n\nYeah. And what about just getting rid of the INSTALL file altogether?\nI think that, in 2022, a lot of people are likely to use git to obtain\nthe source code rather than obtain a tarball. And regardless of what\nmethod they use to get the source code, they don't really need there\nto be a text file in the directory with installation instructions; a\nURL is just fine. There was a time when you couldn't count on people\nto have a web browser conveniently available, either because that\nwhole world wide web thing hadn't really caught on yet, or because\nthey didn't even have an always-on Internet connection. In that world,\nan INSTALL file in the tarball makes a lot of sense. But these days,\nthe number of people who are still obtaining PostgreSQL via\nUUCP-over-modem-relay has got to be ... relatively limited.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Jan 2022 11:49:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jan 21, 2022 at 11:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n>>> Another solution would be to merge both README files together and make\n>>> separate section for development/git based codebase.\n\n>> There's a lot to be said for that approach: make it simpler, not\n>> more complicated.\n\n> Yeah.\n\nJosef, you want to draft a patch?\n\n> And what about just getting rid of the INSTALL file altogether?\n> I think that, in 2022, a lot of people are likely to use git to obtain\n> the source code rather than obtain a tarball. And regardless of what\n> method they use to get the source code, they don't really need there\n> to be a text file in the directory with installation instructions; a\n> URL is just fine. There was a time when you couldn't count on people\n> to have a web browser conveniently available, either because that\n> whole world wide web thing hadn't really caught on yet, or because\n> they didn't even have an always-on Internet connection. In that world,\n> an INSTALL file in the tarball makes a lot of sense. But these delays,\n> the number of people who are still obtaining PostgreSQL via\n> UUCP-over-modem-relay has got to be ... relatively limited.\n\nI'm not convinced by this argument. In the first place, the INSTALL\nfile isn't doing any harm. I don't know that I'd bother to build the\ninfrastructure for it today, but we already have that infrastructure\nand it's not causing us any particular maintenance burden. In the\nsecond place, I think your argument is a bit backwards. Sure, people\nwho are relying on a git pull can be expected to have easy access to\non-line docs; that's exactly why we aren't troubled by not providing\nready-to-go INSTALL docs in that case. But that doesn't follow for\npeople who are using a tarball. 
In particular, it might not be that\neasy to find on-line docs matching the specific tarball version they\nare working with. (With the planned meson conversion, that's about to\nbecome a bigger deal than it's been in the recent past.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 12:19:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not convinced by this argument. In the first place, the INSTALL\n> file isn't doing any harm. I don't know that I'd bother to build the\n> infrastructure for it today, but we already have that infrastructure\n> and it's not causing us any particular maintenance burden.\n\nI think it *is* doing harm. It confuses people. We get semi-regular\nthreads on the list like this one where people are confused by the\nfile not being there, but I can't remember ever seeing a thread where\nsomeone said that it was great, or said that they thought it needed\nimprovement, or said that they used it and then something interesting\nhappened afterward, or anything like that. AFAICR, the only threads on\nthe mailing list that mention the file at all are started by people\nwho were told to look there and couldn't find the file. Now we can\nspeculate that there is a far larger number of people who find the\nfile, love it, and have no problems with it or suggestions for\nimprovement or need to comment upon it in any way, and that's hard to\ndisprove. But I doubt it.\n\n> In the\n> second place, I think your argument is a bit backwards. Sure, people\n> who are relying on a git pull can be expected to have easy access to\n> on-line docs; that's exactly why we aren't troubled by not providing\n> ready-to-go INSTALL docs in that case. But that doesn't follow for\n> people who are using a tarball. In particular, it might not be that\n> easy to find on-line docs matching the specific tarball version they\n> are working with. (With the planned meson conversion, that's about to\n> become a bigger deal than it's been in the recent past.)\n\nI would guess that these days if you're brave enough to compile from\nsource, you are very, very likely to get that source from git rather\nthan a tarball. These days if you Google \"[name of any piece of\nsoftware] source code\" the first hit is the git repository. 
I grant\nthat the second hit, in the case of PostgreSQL, is a link to download\npage for tarballs, but when I try plugging other things in there\ninstead of \"postgresql\" the git repository is always the first hit,\nand sometimes there's a download page after that. Again, this doesn't\nprove anything, but I do think it's suggestive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Jan 2022 12:52:25 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jan 21, 2022 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm not convinced by this argument. In the first place, the INSTALL\n>> file isn't doing any harm. I don't know that I'd bother to build the\n>> infrastructure for it today, but we already have that infrastructure\n>> and it's not causing us any particular maintenance burden.\n\n> I think it *is* doing harm. It confuses people. We get semi-regular\n> threads on the list like this one where people are confused by the\n> file not being there,\n\nThat's not the fault of INSTALL, that's the fault of the README files,\nwhich I think we're agreed we can fix. (Or, if you suppose that they\ncame to the code with some previous expectation that there'd be an\nINSTALL file, taking it away is certainly not going to improve matters.)\n\n> but I can't remember ever seeing a thread where\n> someone said that it was great, or said that they thought it needed\n> improvement, or said that they used it and then something interesting\n> happened afterward, or anything like that.\n\nIt's just another copy of the same documentation, so I can't really\nimagine a situation where someone would feel a need to mention it\nspecifically.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 13:34:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-21 11:49:12 -0500, Robert Haas wrote:\n> On Fri, Jan 21, 2022 at 11:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> > > Another solution would be to merge both README files together and make\n> > > separate section for development/git based codebase.\n> >\n> > There's a lot to be said for that approach: make it simpler, not\n> > more complicated.\n\nI agree, that's the right direction.\n\n\n> Yeah. And what about just getting rid of the INSTALL file altogether?\n\nYea, I think that might be worth doing too, at least in some form. It's\ncertainly not helpful to have it in the tarball but not the git tree.\n\nI tried to find the discussion around removing INSTALL from the source tree,\nbut it seems to actually have centered much more around HISTORY\nhttps://www.postgresql.org/message-id/200403091751.i29HpiV24304%40candle.pha.pa.us\n\nIt seems quite workable to continue for INSTALL to be generated, but have the\nresult checked in. The rate of changes to {installation,install-windows}.sgml\nisn't that high, and when things change, it's actually useful to be able to\nsee the current instructions from a console.\n\nMight even be good to be forced to see the text version of INSTALL when\nchanging the sgml docs...\n\n\n> I think that, in 2022, a lot of people are likely to use git to obtain\n> the source code rather than obtain a tarball.\n\nIndeed.\n\n\n> And regardless of what method they use to get the source code, they don't\n> really need there to be a text file in the directory with installation\n> instructions; a URL is just fine.\n\nEven working with git trees, I do quite prefer having the instructions\navailable in a terminal compatible way, TBH. The building happens in a\nterminal, after all. 
In our case it's made worse by the browser version being\nsplit across ~10 pages and multiple chapters.\n\n\n> There was a time when you couldn't count on people to have a web browser\n> conveniently available, either because that whole world wide web thing\n> hadn't really caught on yet, or because they didn't even have an always-on\n> Internet connection. In that world, an INSTALL file in the tarball makes a\n> lot of sense. But these days, the number of people who are still obtaining\n> PostgreSQL via UUCP-over-modem-relay has got to be ... relatively limited.\n\nThere's still people having to build postgres on systems without internet\naccess - but typically they'll have access to the instructions when developing\nthe scripts for that...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 Jan 2022 14:11:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It seems quite workable to continue for INSTALL to be generated, but have the\n> result checked in. The rate of changes to {installation,install-windows}.sgml\n> isn't that high, and when things change, it's actually useful to be able to\n> see the current instructions from a console.\n> Might even be good to be forced to see the text version of INSTALL when\n> changing the sgml docs...\n\nNot sure about that, because\n\n(1) if done wrong, it'd make it impossible to commit into the\ndocs unless you have a working docs toolchain on your machine,\nwhether you wanted to touch installation.sgml or not;\n\n(2) we'd inevitably get a lot of diff noise because of different\ncommitters having different versions of the docs toolchain.\n(Unlike configure, trying to require uniformity of those tools\nseems quite impractical.)\n\nPerhaps this could be finessed by making updating of INSTALL\nthe responsibility of some post-commit hook on the git server.\nNot sure that we want to go there, though. In any case, that\napproach would negate your point about seeing the results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:25:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-21 17:25:08 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It seems quite workable to continue for INSTALL to be generated, but have the\n> > result checked in. The rate of changes to {installation,install-windows}.sgml\n> > isn't that high, and when things change, it's actually useful to be able to\n> > see the current instructions from a console.\n> > Might even be good to be forced to see the text version of INSTALL when\n> > changing the sgml docs...\n>\n> Not sure about that, because\n>\n> (1) if done wrong, it'd make it impossible to commit into the\n> docs unless you have a working docs toolchain on your machine,\n> whether you wanted to touch installation.sgml or not;\n\nHm, do we really want committers to do that at any frequency? I think that\nconcern makes sense for contributors, but I think it's reasonable to expect\nthe docs to be built before committing changes.\n\nIt might be relevant that the dependencies for INSTALL generation are\nconsiderably smaller than for a full docs build. It needs xsltproc, xmllint\nand pandoc. Not tiny, but still a lot less than the full docbook toolchain.\n\nOn a debian container with just enough stuff installed to get through\n./configure --without-readline --without-zlib (to minimize things installed\nfrom another source):\n\napt-get install -y xsltproc libxml2-utils pandoc\n...\nThe following NEW packages will be installed:\n libcmark-gfm-extensions0 libcmark-gfm0 libicu67 libxml2 libxml2-utils libxslt1.1 pandoc pandoc-data xsltproc\n0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.\nNeed to get 28.8 MB of archives.\nAfter this operation, 193 MB of additional disk space will be used.\n\n\n\n\nRe committers, the biggest issue would presumably be working on windows :(. 
I\ndon't think the docs build is integrated into the msvc tooling right now.\n\n\n\n> (2) we'd inevitably get a lot of diff noise because of different\n> committers having different versions of the docs toolchain.\n> (Unlike configure, trying to require uniformity of those tools\n> seems quite impractical.)\n\nFair point, no idea how big that aspect is. I'd expect xsltproc to be halfway\nOK in that regard, and it's certainly not changing much anymore. Pandoc, I\nhave no idea.\n\n\n> Perhaps this could be finessed by making updating of INSTALL\n> the responsibility of some post-commit hook on the git server.\n> Not sure that we want to go there, though. In any case, that\n> approach would negate your point about seeing the results.\n\nIt would. I guess it'd still be better than the situation today, but...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 Jan 2022 14:53:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 11:53 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-21 17:25:08 -0500, Tom Lane wrote:\n> > Perhaps this could be finessed by making updating of INSTALL\n> > the responsibility of some post-commit hook on the git server.\n> > Not sure that we want to go there, though. In any case, that\n> > approach would negate your point about seeing the results.\n>\n> It would. I guess it'd still be better than the situation today, but...\n\npost-commit hooks don't run on the git server, they run locally on\nyour machine. There is a \"post receive\" hook that runs on the git\nserver, but we definitely don't want that one to fabricate new commits\nI think.\n\nAnd it certainly cannot *modify* the commit that came in in flight, as\nthat would change the hash, and basically break the whole integrity of\nthe commit chain.\n\nWe could certainly have a cronjob somewhere that ran to check that\nthey were in sync and would auto-generate a patch if they weren't, for\na committer to review, but I'm not sure how much that would help?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 9 Feb 2022 22:32:59 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-09 22:32:59 +0100, Magnus Hagander wrote:\n> On Fri, Jan 21, 2022 at 11:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-01-21 17:25:08 -0500, Tom Lane wrote:\n> > > Perhaps this could be finessed by making updating of INSTALL\n> > > the responsibility of some post-commit hook on the git server.\n> > > Not sure that we want to go there, though. In any case, that\n> > > approach would negate your point about seeing the results.\n> >\n> > It would. I guess it'd still be better than the situation today, but...\n> \n> post-commit hooks don't run on the git server, they run locally on\n> your machine. There is a \"post receive\" hook that runs on the git\n> server, but we definitely don't want that one to fabricate new commits\n> I think.\n\nWhy not? We probably wouldn't want to do it synchronously as part of the receive\nhook, but if we have a policy that INSTALL is not to be updated by humans, but\nupdated automatically whenever its sources are modified, I'd be OK with\nauto-committing that.\n\n\nBut before we go there, it might be worth checking if the generated INSTALL\nactually changes meaningfully across \"doc toolchain\" versions. If not, a\nsimpler receive hook verifying that INSTALL was updated when the relevant sgml\nfiles changed probably would be sufficient.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Feb 2022 15:37:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-09 22:32:59 +0100, Magnus Hagander wrote:\n>> post-commit hooks don't run on the git server, they run locally on\n>> your machine. There is a \"post receive\" hook that runs on the git\n>> server, but we definitely don't want that one to fabricate new commits\n>> I think.\n\n> Why not? We probably wouldn't want to do synchronously as part of the receive\n> hook, but if we have a policy that INSTALL is not to be updated by humans, but\n> updated automatically whenever its sources are modified, I'd be OK with\n> auto-committing that.\n\nWhat happens when the INSTALL build fails (which is quite possible,\nI believe, even if a plain html build works)?\n\nI don't really want any post-commit or post-receive hooks doing\nanything interesting to the tree. I think the odds for trouble\nare significantly greater than any value we'd get out of it.\n\nI'm in favor of unifying README and README.git along the lines\nwe discussed above. I think that going further than that\nwill be a lot of trouble for very little gain; in fact no gain,\nbecause I do not buy any of the arguments that have been\nmade about why changing the INSTALL setup would be beneficial.\nIf we adjust the README contents to be less confusing about that,\nwe're done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Feb 2022 18:52:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Thu, Feb 10, 2022 at 12:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n(bringing this one back from the dead)\n\n\n\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-02-09 22:32:59 +0100, Magnus Hagander wrote:\n> >> post-commit hooks don't run on the git server, they run locally on\n> >> your machine. There is a \"post receive\" hook that runs on the git\n> >> server, but we definitely don't want that one to fabricate new commits\n> >> I think.\n>\n> > Why not? We probably wouldn't want to do synchronously as part of the receive\n> > hook, but if we have a policy that INSTALL is not to be updated by humans, but\n> > updated automatically whenever its sources are modified, I'd be OK with\n> > auto-committing that.\n\nAuto committing in git becomes very.. Special. In that case you would\npush but after that you would still not be in sync and have to pull\nback down to compare.\n\nIt would also require the running of a comparatively very complex\nbuild script inside the very restricted environment that is the git\nmaster server. I do not want to have to deal with that level of\ncomplexity there, and whatever security implications can come from it.\n\nBut if you want to accomplish something similar you could have a batch\njob that runs on maybe a daily basis and builds the INSTALL file and\ncommits it to the repo. I still don't really like the idea of\ncommitting automatically (in the github/gitlab world, having such a\ntool generate a MR/PR would be perfectly fine, but pushing directly to\nthe repo I really prefer being something that has human eyes on the\nprocess). 
Or post a patch with it for someone to look at.\n\n\n> What happens when the INSTALL build fails (which is quite possible,\n> I believe, even if a plain html build works)?\n\nYes, it breaks every time somebody accidentally puts a link to\nsomewhere else in the documentation, doesn't it?\n\n\n> I don't really want any post-commit or post-receive hooks doing\n> anything interesting to the tree. I think the odds for trouble\n> are significantly greater than any value we'd get out of it.\n\nVery much agreed.\n\n\n> I'm in favor of unifying README and README.git along the lines\n> we discussed above. I think that going further than that\n> will be a lot of trouble for very little gain; in fact no gain,\n> because I do not buy any of the arguments that have been\n> made about why changing the INSTALL setup would be beneficial.\n> If we adjust the README contents to be less confusing about that,\n> we're done.\n\n+1.\n\nIf anything, I'm more behind the idea of just getting rid of the\nINSTALL file. A reference to the install instructions in the README\nshould be enough today. The likelihood of somebody getting a postgres\nsource tarball and trying to build it for the first time while not\nhaving internet access is extremely low I'd say.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 7 Mar 2022 16:12:43 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> If anything, I'm more behind the idea of just getting rid of the\n> INSTALL file. A reference to the install instructions in the README\n> should be enough today. The likelihood of somebody getting a postgres\n> source tarball and trying to build it for the first time while not\n> having internet access is extremely low I'd say.\n\nI agree that there's no longer a lot of reason to insist that the\ninstallation instructions need to be present in a flat text file\nas opposed to some other way.\n\nHowever, just putting a URL into README seems problematic, because how\nwill we ensure that it's the correct version-specific URL? (And it does\nneed to be version-specific; the set of configure options changes over\ntime, and that's not even mentioning whatever user-visible effects\nchanging to meson will have.) You could imagine generating the URL\nduring tarball build, but that does nothing for the people who pull\ndirectly from git.\n\nI thought briefly about directing people to read\ndoc/src/sgml/html/installation.html, but that has the same problem\nthat it won't be present in a fresh git pull.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Mar 2022 10:57:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Mon, Mar 7, 2022 at 4:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > If anything, I'm more behind the idea of just getting rid of the\n> > INSTALL file. A reference to the install instructions in the README\n> > should be enough today. The likelihood of somebody getting a postgres\n> > source tarball and trying to build it for the first time while not\n> > having internet access is extremely low I'd say.\n>\n> I agree that there's no longer a lot of reason to insist that the\n> installation instructions need to be present in a flat text file\n> as opposed to some other way.\n>\n> However, just putting a URL into README seems problematic, because how\n> will we ensure that it's the correct version-specific URL? (And it does\n> need to be version-specific; the set of configure options changes over\n> time, and that's not even mentioning whatever user-visible effects\n> changing to meson will have.) You could imagine generating the URL\n> during tarball build, but that does nothing for the people who pull\n> directly from git.\n>\n> I thought briefly about directing people to read\n> doc/src/sgml/html/installation.html, but that has the same problem\n> that it won't be present in a fresh git pull.\n\nYeah, if we just care about tarballs that works, but then it also\nworks to inject the version number in the README file.\n\nBut taking a step back, who is the actual audience for this? Do we\n*need* a link pointing directly there, or is it enough to just point\nto \"use the docs on the web\"? We can't link to the incorrect version,\nbut can we just link to /docs/ and leave it at that?\n\nIf not, could we make the change of URL a part of the branching step?\nBranch to a stable release would then include modifying README, and be\nmade a step of version_stamp.pl?\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 7 Mar 2022 22:05:33 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> But taking a step back, who is the actual audience for this? Do we\n> *need* a link pointing directly there, or is it enough to just point\n> to \"use the docs on the web\"? We can't link to the incorrect version,\n> but can we just link to /docs/ and leave it at that?\n\nWell, it's people compiling from source, so I guess we can assume some\namount of cluefulness? I think perhaps it'd be okay to say \"go here\nand then navigate to the proper sub-page for your version\".\n\n> If not, could we make the change of URL a part of the branching step?\n> Branch to a stable release would then include modifying README, and be\n> made a step of version_stamp.pl?\n\nDoesn't really help people working from git, I think, because the\nmaster branch is always going to claim to be \"devel\" even when you\nrewind it to some old state. Maybe we can assume people doing\nsuch a thing have even more clue ... but on the whole I'd rather\nnot add the additional complication.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Mar 2022 17:51:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Mon, Mar 7, 2022 at 5:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > But taking a step back, who is the actual audience for this? Do we\n> > *need* a link pointing directly there, or is it enough to just point\n> > to \"use the docs on the web\"? We can't link to the incorrect version,\n> > but can we just link to /docs/ and leave it at that?\n>\n> Well, it's people compiling from source, so I guess we can assume some\n> amount of cluefulness? I think perhaps it'd be okay to say \"go here\n> and then navigate to the proper sub-page for your version\".\n\nIt's kind of hard for me to imagine that not being enough for somebody.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Mar 2022 12:19:34 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "On Mon, Mar 7, 2022 at 11:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > But taking a step back, who is the actual audience for this? Do we\n> > *need* a link pointing directly there, or is it enough to just point\n> > to \"use the docs on the web\"? We can't link to the incorrect version,\n> > but can we just link to /docs/ and leave it at that?\n>\n> Well, it's people compiling from source, so I guess we can assume some\n> amount of cluefulness? I think perhaps it'd be okay to say \"go here\n> and then navigate to the proper sub-page for your version\".\n>\n> > If not, could we make the change of URL a part of the branching step?\n> > Branch to a stable release would then include modifying README, and be\n> > made a step of version_stamp.pl?\n>\n> Doesn't really help people working from git, I think, because the\n> master branch is always going to claim to be \"devel\" even when you\n> rewind it to some old state. Maybe we can assume people doing\n> such a thing have even more clue ... but on the whole I'd rather\n> not add the additional complication.\n\nWell it could be per major version, couldn't it? When we start working on\nv16, we stamp master as that, and we could use that for the links. It\nwill work \"for the past\", but it will of course not be able to track\nhow the docs change between the individual commits -- since our\nwebsite only has the latest release for each one. If we need that it\nneeds to be in the source tree -- but is that actually a requirement?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 9 Mar 2022 17:51:40 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Mon, Mar 7, 2022 at 11:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Doesn't really help people working from git, I think, because the\n>> master branch is always going to claim to be \"devel\" even when you\n>> rewind it to some old state. Maybe we can assume people doing\n>> such a thing have even more clue ... but on the whole I'd rather\n>> not add the additional complication.\n\n> Well it could be per major version, couldn't it? When we start working on\n> v16, we stamp master as that, and we could use that for the links. It\n> will work \"for the past\", but it will of course not be able to track\n> how the docs change between the individual commits -- since our\n> website only has the latest release for each one. If we need that it\n> needs to be in the source tree -- but is that actually a requirement?\n\nI think that adds more complication than usefulness. ISTM having the\nmaster branch not identify itself more specifically than \"devel\" is\nactually a good thing in this context, for precisely the reason that\nthe corresponding docs are likely to be in flux. Seeing \"v16\" seems\nlikely to lull people into a false sense of certainty that whatever\nthey find on the web matches the code they actually have.\n\nSo I'm coming to the position that the README file ought not contain\nany link more specific than https://www.postgresql.org/docs/\nand that it should then tell you to look at the installation chapters\nin the appropriate version's docs. (Considering we have multiple\ninstallation chapters nowadays, we couldn't provide an exact URL\nanyway.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Mar 2022 12:06:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New developer papercut - Makefile references INSTALL"
}
] |
[
{
"msg_contents": "Hi,\n\nShould archive_command being blank when archiving is enabled result in\na fatal error? This doesn't even produce a warning when restarting,\njust an entry in the log when it goes to archive a WAL segment, and\nfinds the archive_command is empty.\n\nIs there a valid scenario where someone would have archiving enabled\nbut no archive command? Naturally it will build up WAL until it is\ncorrected, which will result in a less desirable error, and likely at\na less convenient time, and to avoid it, someone either has to have\nchecked the logs and noticed this error, or got curious as to why\ntheir WAL collection is nearly running out of shelf space.\n\nThom\n\n\n",
"msg_date": "Mon, 17 Jan 2022 14:43:30 +0000",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Blank archive_command"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 8:14 PM Thom Brown <thom@linux.com> wrote:\n>\n> Hi,\n>\n> Should archive_command being blank when archiving is enabled result in\n> a fatal error? This doesn't even produce a warning when restarting,\n> just an entry in the log when it goes to archive a WAL segment, and\n> finds the archive_command is empty.\n>\n> Is there a valid scenario where someone would have archiving enabled\n> but no archive command? Naturally it will build up WAL until it is\n> corrected, which will result in a less desirable error, and likely at\n> a less convenient time, and to avoid it, someone either has to have\n> checked the logs and noticed this error, or got curious as to why\n> their WAL collection is nearly running out of shelf space.\n\nYes, the .ready files under archive_status and wal files under pg_wal\ndirectory grow up with archive_mode on but no archive_command. The\narchiver keeps emitting \"archive_mode enabled, yet archive_command is\nnot set\" warnings into server logs, maybe this is something that needs\nto be monitored for. The expectation is to have a good archiving\nconfiguration setup in place which updates both archive_command and\narchive_mode to appropriate values.\n\nThe server keeps the WAL files from the point when archive_mode is\nenabled, but not from the point when the archive_command is set. The\narchive_mode needs postmaster restart whereas archive_command doesn't,\nif the archive_command too needed a postmaster restart, then we would\nhave failed FATALly if archive_command was empty. But making the\narchive_command a needs-postmaster-restart class of parameter is not\nthe path we go IMO because avoiding pomaster restarts in production\nenvironments is to be avoided whenever possible.\n\nAn extreme scenario I can think of is if the archive_command is set to\nempty by a service layer code. Of course, this is something postgres\ndoesn't need to care about. 
However, a reasonable thing to do is to\nemit a WARNING or ERROR-out when archive_command is set to null in\nits check_archive_command when archive_mode is on?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 17 Jan 2022 20:55:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Blank archive_command"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> However, a reasonable thing to do is to\n> emit a WARNING or ERROR-out when archive_command is set to null in\n> it's check_archive_command when archive_mode is on?\n\nWe have been burned badly in the past by attempts to do that sort of\nthing (ie, make behavior that's conditional on combinations of GUC\nsettings). There tends to be collateral damage along the lines of\n\"certain orders of operations stop working\". I'm not really eager\nto open that can of worms here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jan 2022 10:32:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Blank archive_command"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 9:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > However, a reasonable thing to do is to\n> > emit a WARNING or ERROR-out when archive_command is set to null in\n> > it's check_archive_command when archive_mode is on?\n>\n> We have been burned badly in the past by attempts to do that sort of\n> thing (ie, make behavior that's conditional on combinations of GUC\n> settings). There tends to be collateral damage along the lines of\n> \"certain orders of operations stop working\". I'm not really eager\n> to open that can of worms here.\n\n+1 to create any GUC setting dependencies. Let's leave the\nresponsibility of setting appropriate archive_command to the archiving\nhandlers outside postgres. FWIW, having a note in the archive_command\nGUC definition in the docs might help to some extent.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 17 Jan 2022 21:05:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Blank archive_command"
},
{
"msg_contents": "On Mon, 17 Jan 2022 at 15:25, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jan 17, 2022 at 8:14 PM Thom Brown <thom@linux.com> wrote:\n> >\n> > Hi,\n> >\n> > Should archive_command being blank when archiving is enabled result in\n> > a fatal error? This doesn't even produce a warning when restarting,\n> > just an entry in the log when it goes to archive a WAL segment, and\n> > finds the archive_command is empty.\n> >\n> > Is there a valid scenario where someone would have archiving enabled\n> > but no archive command? Naturally it will build up WAL until it is\n> > corrected, which will result in a less desirable error, and likely at\n> > a less convenient time, and to avoid it, someone either has to have\n> > checked the logs and noticed this error, or got curious as to why\n> > their WAL collection is nearly running out of shelf space.\n>\n> Yes, the .ready files under archive_status and wal files under pg_wal\n> directory grow up with archive_mode on but no archive_command. The\n> archiver keeps emitting \"archive_mode enabled, yet archive_command is\n> not set\" warnings into server logs, maybe this is something that needs\n> to be monitored for. The expectation is to have a good archiving\n> configuration setup in place which updates both archive_command and\n> archive_mode to appropriate values.\n>\n> The server keeps the WAL files from the point when archive_mode is\n> enabled, but not from the point when the archive_command is set. The\n> archive_mode needs postmaster restart whereas archive_command doesn't,\n> if the archive_command too needed a postmaster restart, then we would\n> have failed FATALly if archive_command was empty. But making the\n> archive_command a needs-postmaster-restart class of parameter is not\n> the path we go IMO because avoiding pomaster restarts in production\n> environments is to be avoided whenever possible.\n\nOkay, that makes sense. Thanks. 
I guess people have to be careful\nwith their settings. I was hoping there was one less footgun that\ncould be disarmed.\n>\n> An extreme scenario I can think of is if the archive_command is set to\n> empty by a service layer code. Of course, this is something postgres\n> doesn't need to care about. However, a reasonable thing to do is to\n> emit a WARNING or ERROR-out when archive_command is set to null in\n> its check_archive_command when archive_mode is on?\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n\n-- \nThom\n\n\n",
"msg_date": "Mon, 17 Jan 2022 15:45:08 +0000",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Re: Blank archive_command"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 9:05 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> +1 to not create any GUC setting dependencies. Let's leave the\n> responsibility of setting appropriate archive_command to the archiving\n> handlers outside postgres. FWIW, having a note in the archive_command\n> GUC definition in the docs might help to some extent.\n\nOn further search, the documentation says the following which is enough IMO:\n\nThis parameter can only be set in the postgresql.conf file or on the\nserver command line. It is ignored unless archive_mode was enabled at\nserver start. If archive_command is an empty string (the default)\nwhile archive_mode is enabled, WAL archiving is temporarily disabled,\nbut the server continues to accumulate WAL segment files in the\nexpectation that a command will soon be provided. Setting\narchive_command to a command that does nothing but return true, e.g.,\n/bin/true (REM on Windows), effectively disables archiving, but also\nbreaks the chain of WAL files needed for archive recovery, so it\nshould only be used in unusual circumstances.\n\nhttps://www.postgresql.org/docs/devel/runtime-config-wal.html#GUC-ARCHIVE-COMMAND\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 17 Jan 2022 21:22:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Blank archive_command"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 10:53 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> This parameter can only be set in the postgresql.conf file or on the\n> server command line. It is ignored unless archive_mode was enabled at\n> server start. If archive_command is an empty string (the default)\n> while archive_mode is enabled, WAL archiving is temporarily disabled,\n> but the server continues to accumulate WAL segment files in the\n> expectation that a command will soon be provided. Setting\n> archive_command to a command that does nothing but return true, e.g.,\n> /bin/true (REM on Windows), effectively disables archiving, but also\n> breaks the chain of WAL files needed for archive recovery, so it\n> should only be used in unusual circumstances.\n\nYeah, the fact that this has been documented behavior for a long time\nis a good reason not to get too excited about the possibility of\nchanging it. People are likely using it intentionally.\n\nIt might be nice to do something about the fact that you can't change\narchive_mode without a server restart, though. I suspect we had a good\nreason for that limitation from an engineering perspective, but from a\nuser perspective, it sucks pretty hard.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Jan 2022 12:52:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Blank archive_command"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It might be nice to do something about the fact that you can't change\n> archive_mode without a server restart, though. I suspect we had a good\n> reason for that limitation from an engineering perspective, but from a\n> user perspective, it sucks pretty hard.\n\nAgreed. I don't recall what the motivation for that was, but\nmaybe it could be fixed with some more elbow grease.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jan 2022 14:54:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Blank archive_command"
}
] |
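The failure mode discussed in the thread above — archive_mode enabled but archive_command left empty, so .ready files and WAL segments silently pile up until someone notices — can be sketched as a toy model. This is illustrative Python only (the real archiver is C code inside the server); the function and return strings are hypothetical:

```python
def archiver_step(archive_mode, archive_command, ready_segments, archived):
    """Toy model of the documented behavior: with archive_mode on but an
    empty archive_command, WAL archiving is temporarily disabled and the
    .ready segments simply accumulate."""
    if not archive_mode:
        ready_segments.clear()  # stand-in: WAL is recycled, not retained
        return "archiving disabled"
    if archive_command == "":
        # matches the log warning quoted in the thread: "archive_mode
        # enabled, yet archive_command is not set" -- nothing is archived
        return "archive_command not set; segments accumulating"
    while ready_segments:
        # stand-in for invoking archive_command on each pending segment
        archived.append(ready_segments.pop(0))
    return "archived"
```

With an empty command the backlog only grows; once a real command is configured, the backlog drains — which is exactly why the documentation says WAL is kept "in the expectation that a command will soon be provided."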
[
{
"msg_contents": "pg_upgrade: Preserve relfilenodes and tablespace OIDs.\n\nCurrently, database OIDs, relfilenodes, and tablespace OIDs can all\nchange when a cluster is upgraded using pg_upgrade. It seems better\nto preserve them, because (1) it makes troubleshooting pg_upgrade\neasier, since you don't have to do a lot of work to match up files\nin the old and new clusters, (2) it allows 'rsync' to save bandwidth\nwhen used to re-sync a cluster after an upgrade, and (3) if we ever\nencrypt or sign blocks, we would likely want to use a nonce that\ndepends on these values.\n\nThis patch only arranges to preserve relfilenodes and tablespace\nOIDs. The task of preserving database OIDs is left for another patch,\nsince it involves some complexities that don't exist in these cases.\n\nDatabase OIDs have a similar issue, but there are some tricky points\nin that case that do not apply to these cases, so that problem is left\nfor another patch.\n\nShruthi KC, based on an earlier patch from Antonin Houska, reviewed\nand with some adjustments by me.\n\nDiscussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/9a974cbcba005256a19991203583a94b4f9a21a9\n\nModified Files\n--------------\nsrc/backend/bootstrap/bootparse.y | 3 +-\nsrc/backend/catalog/heap.c | 63 ++++++++++---\nsrc/backend/catalog/index.c | 23 ++++-\nsrc/backend/commands/tablespace.c | 17 +++-\nsrc/backend/utils/adt/pg_upgrade_support.c | 44 +++++++++\nsrc/bin/pg_dump/pg_dump.c | 104 ++++++++++++++-------\nsrc/bin/pg_dump/pg_dumpall.c | 3 +\nsrc/bin/pg_upgrade/info.c | 31 +-----\nsrc/bin/pg_upgrade/pg_upgrade.c | 13 +--\nsrc/bin/pg_upgrade/pg_upgrade.h | 10 +-\nsrc/bin/pg_upgrade/relfilenode.c | 6 +-\nsrc/include/catalog/binary_upgrade.h | 5 +\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/heap.h | 3 +-\nsrc/include/catalog/pg_proc.dat | 16 
++++\n.../spgist_name_ops/expected/spgist_name_ops.out | 12 ++-\n16 files changed, 247 insertions(+), 108 deletions(-)",
"msg_date": "Mon, 17 Jan 2022 19:05:34 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: pg_upgrade: Preserve relfilenodes and tablespace OIDs."
},
{
"msg_contents": "Re: Robert Haas\n> pg_upgrade: Preserve relfilenodes and tablespace OIDs.\n\n> src/bin/pg_dump/pg_dumpall.c | 3 +\n\n--- a/src/bin/pg_dump/pg_dumpall.c\n+++ b/src/bin/pg_dump/pg_dumpall.c\n@@ -1066,6 +1066,9 @@ dumpTablespaces(PGconn *conn)\n /* needed for buildACLCommands() */\n fspcname = pg_strdup(fmtId(spcname));\n\n+ appendPQExpBufferStr(buf, \"\\n-- For binary upgrade, must preserve pg_table\n+ appendPQExpBuffer(buf, \"SELECT pg_catalog.binary_upgrade_set_next_pg_table\n\nThis needs to be guarded with \"if (binary_upgrade)\".\n\nError message during a Debian pg_upgradecluster (-m dump) from 14 to 15:\n\n2022-02-13 12:44:01.272 CET [168032] postgres@template1 LOG: statement: SELECT pg_catalog.binary_upgrade_set_next_pg_tablespace_oid('16408'::pg_catalog.oid);\n2022-02-13 12:44:01.272 CET [168032] postgres@template1 ERROR: function can only be called when server is in binary upgrade mode\n2022-02-13 12:44:01.272 CET [168032] postgres@template1 STATEMENT: SELECT pg_catalog.binary_upgrade_set_next_pg_tablespace_oid('16408'::pg_catalog.oid);\n\nChristoph\n\n\n",
"msg_date": "Sun, 13 Feb 2022 12:51:10 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: pg_upgrade: Preserve relfilenodes and tablespace OIDs."
},
{
"msg_contents": "On Sun, Feb 13, 2022 at 6:51 AM Christoph Berg <myon@debian.org> wrote:\n> Re: Robert Haas\n> > pg_upgrade: Preserve relfilenodes and tablespace OIDs.\n>\n> > src/bin/pg_dump/pg_dumpall.c | 3 +\n>\n> --- a/src/bin/pg_dump/pg_dumpall.c\n> +++ b/src/bin/pg_dump/pg_dumpall.c\n> @@ -1066,6 +1066,9 @@ dumpTablespaces(PGconn *conn)\n> /* needed for buildACLCommands() */\n> fspcname = pg_strdup(fmtId(spcname));\n>\n> + appendPQExpBufferStr(buf, \"\\n-- For binary upgrade, must preserve pg_table\n> + appendPQExpBuffer(buf, \"SELECT pg_catalog.binary_upgrade_set_next_pg_table\n>\n> This needs to be guarded with \"if (binary_upgrade)\".\n\nRight. Sorry about that, and sorry for not responding sooner also. Fix\npushed now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Feb 2022 10:59:37 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: pg_upgrade: Preserve relfilenodes and tablespace OIDs."
},
{
"msg_contents": "Re: Robert Haas\n> > This needs to be guarded with \"if (binary_upgrade)\".\n> \n> Right. Sorry about that, and sorry for not responding sooner also. Fix\n> pushed now.\n\nThanks, the 14-15 upgrade test is passing again here.\n\nChristoph\n\n\n",
"msg_date": "Fri, 18 Feb 2022 10:39:09 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: pg_upgrade: Preserve relfilenodes and tablespace OIDs."
}
] |
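The bug and fix in this thread reduce to a guard condition: the server rejects binary_upgrade_set_next_pg_tablespace_oid() outside binary upgrade mode, so pg_dumpall must emit that call only when dumping for pg_upgrade. A rough Python rendering of that logic (the actual code is C in pg_dumpall.c; the function name, return shape, and elided LOCATION clause here are illustrative):

```python
def dump_tablespace(spcname, spc_oid, binary_upgrade):
    """Build the SQL emitted for one tablespace. The OID-preservation
    call must be guarded: unguarded, a plain dump restored into a normal
    server fails with "function can only be called when server is in
    binary upgrade mode", as reported in the thread."""
    buf = []
    if binary_upgrade:  # the guard missing from the original commit
        buf.append("-- For binary upgrade, must preserve pg_tablespace oid")
        buf.append(
            "SELECT pg_catalog.binary_upgrade_set_next_pg_tablespace_oid"
            f"('{spc_oid}'::pg_catalog.oid);"
        )
    buf.append(f"CREATE TABLESPACE {spcname} LOCATION ...;")  # details elided
    return "\n".join(buf)
```

Only the binary-upgrade path carries the preservation call; a regular dump stays restorable on any server.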
[
{
"msg_contents": "Hi,\n\nThis patch is important for postgresql 13.5 and 14.1 if you want to use\nthem with Python-3.11.\n\nMiklos",
"msg_date": "Mon, 17 Jan 2022 20:58:10 +0100",
"msg_from": "\"Horvath, Miklos\" <mikloshorvath@blackpanther.hu>",
"msg_from_op": true,
"msg_subject": "Python-3.11 patch"
},
{
"msg_contents": "On 17.01.22 20:58, Horvath, Miklos wrote:\n> This patch is important for postgresql 13.5 and 14.1 if you want to use \n> them with Python-3.11.\n\nThanks, this was already done and will be in the next minor releases for \nall branches.\n\n\n",
"msg_date": "Tue, 18 Jan 2022 11:26:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Python-3.11 patch"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing another patch, I noticed that it slightly adjusted the\ntreatment of datlastsysoid. That made me wonder what datlastsysoid is\nused for, so I started poking around and discovered that the answer,\nat least insofar as I can determine, is \"nothing\". The documentation\nclaims that the value is useful \"particularly to pg_dump,\" which turns\nout not to be true any more. Tom's recent commit,\n30e7c175b81d53c0f60f6ad12d1913a6d7d77008, to remove pg_dump/pg_dumpall\nsupport for dumping from pre-9.2 servers, removed all remaining uses\nof this value from the source tree. It's still maintained. We just\ndon't do anything with it.\n\nSince that doesn't seem like an especially good idea, PFA a patch to\nremove it. Note that, even prior to that commit, it wasn't being used\nfor anything when dumping modern servers, so it would still have been\nOK to remove it from the current system catalog structure. Now,\nthough, we can remove all references to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 17 Jan 2022 15:09:13 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "removing datlastsysoid"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Since that doesn't seem like an especially good idea, PFA a patch to\n> remove it. Note that, even prior to that commit, it wasn't being used\n> for anything when dumping modern servers, so it would still have been\n> OK to remove it from the current system catalog structure. Now,\n> though, we can remove all references to it.\n\n+1. Another reason to get rid of it is that it has nothing to do\nwith the system OID ranges defined in access/transam.h.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jan 2022 15:43:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> +1. Another reason to get rid of it is that it has nothing to do\n> with the system OID ranges defined in access/transam.h.\n\nAgreed. Thanks for looking. Committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 Jan 2022 09:02:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "On Thu, 20 Jan 2022 at 14:03, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 17, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > +1. Another reason to get rid of it is that it has nothing to do\n> > with the system OID ranges defined in access/transam.h.\n>\n> Agreed. Thanks for looking. Committed.\n>\n\nSo we just ran into this whilst updating pgAdmin to support PG15. How is\none supposed to figure out what the last system OID is now from an\narbitrary database? pgAdmin uses that value in well over 300 places in its\nsource.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Thu, 20 Jan 2022 at 14:03, Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Jan 17, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> +1. Another reason to get rid of it is that it has nothing to do\n> with the system OID ranges defined in access/transam.h.\n\nAgreed. Thanks for looking. Committed.So we just ran into this whilst updating pgAdmin to support PG15. How is one supposed to figure out what the last system OID is now from an arbitrary database? pgAdmin uses that value in well over 300 places in its source. -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 16 May 2022 14:43:16 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "On 5/16/22 9:43 AM, Dave Page wrote:\n> \n> \n> On Thu, 20 Jan 2022 at 14:03, Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> On Mon, Jan 17, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> > +1. Another reason to get rid of it is that it has nothing to do\n> > with the system OID ranges defined in access/transam.h.\n> \n> Agreed. Thanks for looking. Committed.\n> \n> \n> So we just ran into this whilst updating pgAdmin to support PG15. How is \n> one supposed to figure out what the last system OID is now from an \n> arbitrary database? pgAdmin uses that value in well over 300 places in \n> its source.\n\nWe ran into the same issue in pgBackRest. The old query that initdb used \nto generate these values is no good for PG15 since the template \ndatabases now have fixed low oids.\n\nOut solution was to use the constant:\n\n#define FirstNormalObjectId\t\t16384\n\nAnd treat anything below that as a system oid. This constant has not \nchanged in a very long time (if ever) but we added it to our list of \nconstants to recheck with each release.\n\nWe used the initdb query to provide backward compatibility for older \nversions of pgbackrest using PG <= 14, but are using FirstNormalObjectId \ngoing forward.\n\nSee \nhttps://github.com/pgbackrest/pgbackrest/commit/692fe496bdb5fa6dcffeb9f85b6188ceb1df707a \nfor details.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 16 May 2022 10:06:42 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "On Mon, 16 May 2022 at 15:06, David Steele <david@pgmasters.net> wrote:\n\n> On 5/16/22 9:43 AM, Dave Page wrote:\n> >\n> >\n> > On Thu, 20 Jan 2022 at 14:03, Robert Haas <robertmhaas@gmail.com\n> > <mailto:robertmhaas@gmail.com>> wrote:\n> >\n> > On Mon, Jan 17, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us\n> > <mailto:tgl@sss.pgh.pa.us>> wrote:\n> > > +1. Another reason to get rid of it is that it has nothing to do\n> > > with the system OID ranges defined in access/transam.h.\n> >\n> > Agreed. Thanks for looking. Committed.\n> >\n> >\n> > So we just ran into this whilst updating pgAdmin to support PG15. How is\n> > one supposed to figure out what the last system OID is now from an\n> > arbitrary database? pgAdmin uses that value in well over 300 places in\n> > its source.\n>\n> We ran into the same issue in pgBackRest. The old query that initdb used\n> to generate these values is no good for PG15 since the template\n> databases now have fixed low oids.\n>\n> Out solution was to use the constant:\n>\n> #define FirstNormalObjectId 16384\n>\n> And treat anything below that as a system oid. This constant has not\n> changed in a very long time (if ever) but we added it to our list of\n> constants to recheck with each release.\n>\n\nYes, that seems reasonable. 
Changing that value would very likely break\npg_upgrade I can imagine, so I suspect it'll stay as it is for a while\nlonger.\n\n\n>\n> We used the initdb query to provide backward compatibility for older\n> versions of pgbackrest using PG <= 14, but are using FirstNormalObjectId\n> going forward.\n>\n> See\n>\n> https://github.com/pgbackrest/pgbackrest/commit/692fe496bdb5fa6dcffeb9f85b6188ceb1df707a\n> for details.\n>\n\n Thanks David!\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Mon, 16 May 2022 at 15:06, David Steele <david@pgmasters.net> wrote:On 5/16/22 9:43 AM, Dave Page wrote:\n> \n> \n> On Thu, 20 Jan 2022 at 14:03, Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> On Mon, Jan 17, 2022 at 3:43 PM Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> > +1. Another reason to get rid of it is that it has nothing to do\n> > with the system OID ranges defined in access/transam.h.\n> \n> Agreed. Thanks for looking. Committed.\n> \n> \n> So we just ran into this whilst updating pgAdmin to support PG15. How is \n> one supposed to figure out what the last system OID is now from an \n> arbitrary database? pgAdmin uses that value in well over 300 places in \n> its source.\n\nWe ran into the same issue in pgBackRest. The old query that initdb used \nto generate these values is no good for PG15 since the template \ndatabases now have fixed low oids.\n\nOut solution was to use the constant:\n\n#define FirstNormalObjectId 16384\n\nAnd treat anything below that as a system oid. This constant has not \nchanged in a very long time (if ever) but we added it to our list of \nconstants to recheck with each release.Yes, that seems reasonable. Changing that value would very likely break pg_upgrade I can imagine, so I suspect it'll stay as it is for a while longer. 
\n\nWe used the initdb query to provide backward compatibility for older \nversions of pgbackrest using PG <= 14, but are using FirstNormalObjectId \ngoing forward.\n\nSee \nhttps://github.com/pgbackrest/pgbackrest/commit/692fe496bdb5fa6dcffeb9f85b6188ceb1df707a \nfor details. Thanks David!-- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 16 May 2022 15:16:55 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "Dave Page <dpage@pgadmin.org> writes:\n> On Mon, 16 May 2022 at 15:06, David Steele <david@pgmasters.net> wrote:\n>> Out solution was to use the constant:\n>> \n>> #define FirstNormalObjectId 16384\n>> \n>> And treat anything below that as a system oid. This constant has not\n>> changed in a very long time (if ever) but we added it to our list of\n>> constants to recheck with each release.\n\n> Yes, that seems reasonable. Changing that value would very likely break\n> pg_upgrade I can imagine, so I suspect it'll stay as it is for a while\n> longer.\n\nYeah, raising that would be extremely painful for pg_upgrade.\n\nI think that when we approach the point where the system OID range\nis saturated, we'll give up the principle of system OIDs being\nglobally unique instead of doing that. There's no fundamental\nreason why unique-per-catalog wouldn't be good enough, and letting\nthat be the standard would give us many more years of breathing room.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 May 2022 10:26:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "\n\nOn 5/16/22 10:26 AM, Tom Lane wrote:\n> Dave Page <dpage@pgadmin.org> writes:\n>> On Mon, 16 May 2022 at 15:06, David Steele <david@pgmasters.net> wrote:\n>>> Out solution was to use the constant:\n>>>\n>>> #define FirstNormalObjectId 16384\n>>>\n>>> And treat anything below that as a system oid. This constant has not\n>>> changed in a very long time (if ever) but we added it to our list of\n>>> constants to recheck with each release.\n> \n>> Yes, that seems reasonable. Changing that value would very likely break\n>> pg_upgrade I can imagine, so I suspect it'll stay as it is for a while\n>> longer.\n> \n> Yeah, raising that would be extremely painful for pg_upgrade.\n> \n> I think that when we approach the point where the system OID range\n> is saturated, we'll give up the principle of system OIDs being\n> globally unique instead of doing that. There's no fundamental\n> reason why unique-per-catalog wouldn't be good enough, and letting\n> that be the standard would give us many more years of breathing room.\nI'm in favor of global IDs since they help prevent incorrect joins, but \nagree that what you propose would likely be the least painful solution.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 16 May 2022 10:37:45 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "On 2022-May-16, David Steele wrote:\n\n> On 5/16/22 10:26 AM, Tom Lane wrote:\n\n> > I think that when we approach the point where the system OID range\n> > is saturated, we'll give up the principle of system OIDs being\n> > globally unique instead of doing that. There's no fundamental\n> > reason why unique-per-catalog wouldn't be good enough, and letting\n> > that be the standard would give us many more years of breathing room.\n>\n> I'm in favor of global IDs since they help prevent incorrect joins, but\n> agree that what you propose would likely be the least painful solution.\n\nI just had that property alert me of a bug last week, so yeah. I wish\nthere was a way to keep that at least partially -- say use an individual\nOID counter for pg_proc (the most populous OID-bearing catalog) and keep\na shared one for all other catalogs.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n\n",
"msg_date": "Mon, 16 May 2022 17:19:05 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
},
{
"msg_contents": "On 5/16/22 11:19 AM, Alvaro Herrera wrote:\n> On 2022-May-16, David Steele wrote:\n> \n>> On 5/16/22 10:26 AM, Tom Lane wrote:\n> \n>>> I think that when we approach the point where the system OID range\n>>> is saturated, we'll give up the principle of system OIDs being\n>>> globally unique instead of doing that. There's no fundamental\n>>> reason why unique-per-catalog wouldn't be good enough, and letting\n>>> that be the standard would give us many more years of breathing room.\n>>\n>> I'm in favor of global IDs since they help prevent incorrect joins, but\n>> agree that what you propose would likely be the least painful solution.\n> \n> I just had that property alert me of a bug last week, so yeah. I wish\n> there was a way to keep that at least partially -- say use an individual\n> OID counter for pg_proc (the most populous OID-bearing catalog) and keep\n> a shared one for all other catalogs.\n\nI have used a similar strategy before. For example, a global sequence \nfor all dimension tables and then a per-table sequence for large fact \ntables.\n\nThis is not exactly that scenario, but what you are proposing would keep \nmost of the benefit of a global ID. pg_proc is not a very commonly \njoined table for users in my experience.\n\nNow we just need to remember all this ten years from now...\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 16 May 2022 11:31:55 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: removing datlastsysoid"
}
] |
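The workaround David Steele describes for the removal of datlastsysoid fits in a few lines: hard-code FirstNormalObjectId (16384, from src/include/access/transam.h, unchanged for a very long time per the thread) and treat anything below it as a system OID. A sketch in Python of what a client tool like pgAdmin or pgBackRest can do; the constant value is from the thread, the function name is illustrative:

```python
# FirstNormalObjectId from src/include/access/transam.h; stable across
# releases so far, but worth rechecking with each new major version.
FIRST_NORMAL_OBJECT_ID = 16384

def is_system_oid(oid):
    """True if the OID falls in the range reserved for system objects,
    replacing the old pattern of reading datlastsysoid from pg_database."""
    return oid < FIRST_NORMAL_OBJECT_ID
```

OIDs at or above 16384 are assignable to user objects, so the tablespace OID 16408 from the earlier thread would classify as a user OID.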
[
{
"msg_contents": "Hi\n\n From the documentation for pg_replication_origin_oid() [1]:\n\n> Looks up a replication origin by name and returns the internal ID.\n> If no such replication origin is found an error is thrown.\n\nHowever, it actually returns NULL if the origin does not exist:\n\n postgres=# SELECT * FROM pg_replication_origin;\n roident | roname\n ---------+--------\n (0 rows)\n\n postgres=# SELECT pg_replication_origin_oid('foo'),\npg_replication_origin_oid('foo') IS NULL;\n pg_replication_origin_oid | ?column?\n ---------------------------+----------\n | t\n (1 row)\n\nGiven that the code has remained unchanged since the function was\nintroduced in 9.5, it seems reasonable to change the documentation\nto match the function behaviour rather than the other way round.\n\n\nRegards\n\nIan Barwick\n\n[1] https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-REPLICATION-TABLE\n\n-- \nEnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Tue, 18 Jan 2022 10:19:41 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "docs: pg_replication_origin_oid() description does not match\n behaviour"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 10:19:41AM +0900, Ian Lawrence Barwick wrote:\n> Given that the code has remained unchanged since the function was\n> introduced in 9.5, it seems reasonable to change the documentation\n> to match the function behaviour rather than the other way round.\n\nObviously. I'll go fix that as you suggest, if there are no\nobjections.\n--\nMichael",
"msg_date": "Tue, 18 Jan 2022 10:23:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: pg_replication_origin_oid() description does not match\n behaviour"
},
{
"msg_contents": "On 1/17/22, 5:24 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Tue, Jan 18, 2022 at 10:19:41AM +0900, Ian Lawrence Barwick wrote:\r\n>> Given that the code has remained unchanged since the function was\r\n>> introduced in 9.5, it seems reasonable to change the documentation\r\n>> to match the function behaviour rather than the other way round.\r\n>\r\n> Obviously. I'll go fix that as you suggest, if there are no\r\n> objections.\r\n\r\n+1\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 18 Jan 2022 18:20:22 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: docs: pg_replication_origin_oid() description does not match\n behaviour"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 06:20:22PM +0000, Bossart, Nathan wrote:\n> +1\n\nAnd done.\n--\nMichael",
"msg_date": "Wed, 19 Jan 2022 10:40:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: pg_replication_origin_oid() description does not match\n behaviour"
},
{
"msg_contents": "2022年1月19日(水) 10:40 Michael Paquier <michael@paquier.xyz>:\n>\n> On Tue, Jan 18, 2022 at 06:20:22PM +0000, Bossart, Nathan wrote:\n> > +1\n>\n> And done.\n\nThanks!\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Jan 2022 12:25:45 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: docs: pg_replication_origin_oid() description does not match\n behaviour"
}
] |
[
{
"msg_contents": "Hello\n\nWe know that PostgreSQL doesn't support a single relation size over 32TB, limited by the MaxBlockNumber. But if we just 'insert into' one relation over 32TB, it will get an error message 'unexpected data beyond EOF in block 0 of relation' in ReadBuffer_common. The '0 block' is from mdnblocks function where the segment number is over 256 and make segno * RELSEG_SIZE over uint32's max value. So is it necessary to make the error message more readable like 'The relation size is over max value ...' and elog in mdnblocks?\n\nThis scene we met is as below, 'shl $0x18, %eax' make $ebx from 256 to 0, which makes segno from 256 to 0. \n\n 0x0000000000c2cc51 <+289>: callq 0xc657f0 <PathNameOpenFilePerm>\n 0x0000000000c2cc56 <+294>: mov -0x8(%r15),%rdi\n 0x0000000000c2cc5a <+298>: mov %r15,%rsi\n 0x0000000000c2cc5d <+301>: mov %eax,%r14d\n 0x0000000000c2cc60 <+304>: mov 0x10(%rdi),%rax\n 0x0000000000c2cc64 <+308>: callq *0x8(%rax)\n 0x0000000000c2cc67 <+311>: test %r14d,%r14d\n 0x0000000000c2cc6a <+314>: jns 0xc2cd68 <mdnblocks+568>\n=> 0x0000000000c2cc70 <+320>: add $0x28,%rsp\n 0x0000000000c2cc74 <+324>: mov %ebx,%eax\n 0x0000000000c2cc76 <+326>: shl $0x18,%eax\n 0x0000000000c2cc79 <+329>: pop %rbx\n 0x0000000000c2cc7a <+330>: pop %r12\n 0x0000000000c2cc7c <+332>: pop %r13\n 0x0000000000c2cc7e <+334>: pop %r14\n 0x0000000000c2cc80 <+336>: pop %r15\n 0x0000000000c2cc82 <+338>: pop %rbp\n 0x0000000000c2cc83 <+339>: retq",
"msg_date": "Tue, 18 Jan 2022 14:21:14 +0800",
"msg_from": "\"=?UTF-8?B?6ZmI5L2z5piVKOatpeecnyk=?=\" <buzhen.cjx@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?MzJUQiByZWxhdGlvbiBzaXplIG1ha2UgbWRuYmxvY2tzIG92ZXJmbG93?="
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 18, 2022 at 02:21:14PM +0800, 陈佳昕(步真) wrote:\n> \n> We know that PostgreSQL doesn't support a single relation size over 32TB,\n> limited by the MaxBlockNumber. But if we just 'insert into' one relation over\n> 32TB, it will get an error message 'unexpected data beyond EOF in block 0 of\n> relation' in ReadBuffer_common. The '0 block' is from mdnblocks function\n> where the segment number is over 256 and make segno * RELSEG_SIZE over\n> uint32's max value. So is it necessary to make the error message more\n> readable like 'The relation size is over max value ...' and elog in\n> mdnblocks?\n\nI didn't try it but this is supposed to be caught by mdextend():\n\n\t/*\n\t * If a relation manages to grow to 2^32-1 blocks, refuse to extend it any\n\t * more --- we mustn't create a block whose number actually is\n\t * InvalidBlockNumber. (Note that this failure should be unreachable\n\t * because of upstream checks in bufmgr.c.)\n\t */\n\tif (blocknum == InvalidBlockNumber)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n\t\t\t\t errmsg(\"cannot extend file \\\"%s\\\" beyond %u blocks\",\n\t\t\t\t\t\trelpath(reln->smgr_rnode, forknum),\n\t\t\t\t\t\tInvalidBlockNumber)));\n\n\nDidn't you hit this?\n\n\n",
"msg_date": "Tue, 18 Jan 2022 15:25:10 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 32TB relation size make mdnblocks overflow"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Tue, Jan 18, 2022 at 02:21:14PM +0800, 陈佳昕(步真) wrote:\n>> We know that PostgreSQL doesn't support a single relation size over 32TB,\n>> limited by the MaxBlockNumber. But if we just 'insert into' one relation over\n>> 32TB, it will get an error message 'unexpected data beyond EOF in block 0 of\n>> relation' in ReadBuffer_common.\n\n> I didn't try it but this is supposed to be caught by mdextend():\n> ...\n> Didn't you hit this?\n\nProbably not, if the OP was testing something predating 8481f9989,\nie anything older than the latest point releases.\n\n(This report does seem to validate my comment in the commit log\nthat \"I think it might confuse ReadBuffer's logic for data-past-EOF\nlater on\". I'd not bothered to build a non-assert build to check\nthat, but this looks about like what I guessed would happen.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 10:31:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 32TB relation size make mdnblocks overflow"
},
{
"msg_contents": "I think we must meet some corner cases about our storage. The relation has 32TB blocks, so 'mdnblocks' gets the unexpected value, we will check it again.\nThanks a lot.",
"msg_date": "Tue, 18 Jan 2022 23:49:31 +0800",
"msg_from": "\"=?UTF-8?B?6ZmI5L2z5piVKOatpeecnyk=?=\" <buzhen.cjx@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmU6IDMyVEIgcmVsYXRpb24gc2l6ZSBtYWtlIG1kbmJsb2NrcyBvdmVyZmxvdw==?="
}
] |
[
{
"msg_contents": "In [1] capacity for $SUBJECT was added to most of the batch scripts, but\nclean.bat was not included. I propose to do so with the attached patch.\n\nBy the way, are pgbison.bat and pgflex.bat directly called anywhere?\n\n[1]\nhttps://www.postgresql.org/message-id/2b7a674b-5fb0-d264-75ef-ecc7a31e54f8@postgrespro.ru\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 18 Jan 2022 10:41:07 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] allow src/tools/msvc/clean.bat script to be called from the\n root of the source tree"
},
{
"msg_contents": "On 1/18/22 04:41, Juan José Santamaría Flecha wrote:\n> In [1] capacity for $SUBJECT was added to most of the batch scripts,\n> but clean.bat was not included. I propose to do so with the attached\n> patch.\n\n\nThat looks a bit ugly. How about this (untested) instead?\n\n\n>\n> By the way, are pgbison.bat and pgflex.bat directly called anywhere?\n\n\n\nNot to my knowledge. One of the things that's annoying about them is\nthat the processor names are hardcoded, so if you install winflexbison\nas I usually do you have to rename the executables (or rename the\nchocolatey shims) or the scripts won't work.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 18 Jan 2022 11:49:14 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] allow src/tools/msvc/clean.bat script to be called from\n the root of the source tree"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 5:49 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 1/18/22 04:41, Juan José Santamaría Flecha wrote:\n> > In [1] capacity for $SUBJECT was added to most of the batch scripts,\n> > but clean.bat was not included. I propose to do so with the attached\n> > patch.\n>\n>\n> That looks a bit ugly. How about this (untested) instead?\n>\n> It is WFM, so I am fine with it.\n\n>\n> > By the way, are pgbison.bat and pgflex.bat directly called anywhere?\n>\n> Not to my knowledge. One of the things that's annoying about them is\n> that the processor names are hardcoded, so if you install winflexbison\n> as I usually do you have to rename the executables (or rename the\n> chocolatey shims) or the scripts won't work.\n>\n> We could use those batches to get to the hardcoded name, but that would\nprobably be as annoying as renaming. If those batches serve no purpose\nright now, it should be fine to remove them from the tree. I use the\nexecutables from a MinGW installation, and they keep their actual name.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 18 Jan 2022 21:08:37 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] allow src/tools/msvc/clean.bat script to be called from\n the root of the source tree"
},
{
"msg_contents": "\nOn 1/18/22 15:08, Juan José Santamaría Flecha wrote:\n>\n>\n>\n> > By the way, are pgbison.bat and pgflex.bat directly called anywhere?\n>\n> Not to my knowledge. One of the things that's annoying about them is\n> that the processor names are hardcoded, so if you install winflexbison\n> as I usually do you have to rename the executables (or rename the\n> chocolatey shims) or the scripts won't work.\n>\n> We could use those batches to get to the hardcoded name, but that\n> would probably be as annoying as renaming. If those batches serve no\n> purpose right now, it should be fine to remove them from the tree. I\n> use the executables from a MinGW installation, and they keep their\n> actual name.\n>\n\nOK, but I don't think we should require installation of MinGW for an\nMSVC build.\n\n\ncheers\n\n\nandrew\n\n--\n\nAndrew Dunstan\n\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 18 Jan 2022 16:17:26 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] allow src/tools/msvc/clean.bat script to be called from\n the root of the source tree"
}
] |
[
{
"msg_contents": "Modify pg_basebackup to use a new COPY subprotocol for base backups.\n\nIn the new approach, all files across all tablespaces are sent in a\nsingle COPY OUT operation. The CopyData messages are no longer raw\narchive content; rather, each message is prefixed with a type byte\nthat describes its purpose, e.g. 'n' signifies the start of a new\narchive and 'd' signifies archive or manifest data. This protocol\nis significantly more extensible than the old approach, since we can\nlater create more message types, though not without concern for\nbackward compatibility.\n\nThe new protocol sends a few things to the client that the old one\ndid not. First, it sends the name of each archive explicitly, instead\nof letting the client compute it. This is intended to make it easier\nto write future patches that might send archives in a format other\nthat tar (e.g. cpio, pax, tar.gz). Second, it sends explicit progress\nmessages rather than allowing the client to assume that progress is\ndefined by the number of bytes received. This will help with future\nfeatures where the server compresses the data, or sends it someplace\ndirectly rather than transmitting it to the client.\n\nThe old protocol is still supported for compatibility with previous\nreleases. The new protocol is selected by means of a new\nTARGET option to the BASE_BACKUP command. Currently, the\nonly supported target is 'client'. Support for additional\ntargets will be added in a later commit.\n\nPatch by me. 
The patch set of which this is a part has had review\nand/or testing from Jeevan Ladhe, Tushar Ahuja, Suraj Kharage,\nDipesh Pandit, and Mark Dilger.\n\nDiscussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/cc333f32336f5146b75190f57ef587dff225f565\n\nModified Files\n--------------\ndoc/src/sgml/protocol.sgml | 130 +++++++++-\nsrc/backend/replication/basebackup.c | 36 ++-\nsrc/backend/replication/basebackup_copy.c | 277 +++++++++++++++++++-\nsrc/bin/pg_basebackup/pg_basebackup.c | 410 +++++++++++++++++++++++++++---\nsrc/include/replication/basebackup_sink.h | 1 +\n5 files changed, 806 insertions(+), 48 deletions(-)",
"msg_date": "Tue, 18 Jan 2022 18:50:57 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Modify pg_basebackup to use a new COPY subprotocol for base\n back"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 1:51 PM Robert Haas <rhaas@postgresql.org> wrote:\n> Modify pg_basebackup to use a new COPY subprotocol for base backups.\n\nAndres pointed out to me that longfin is sad:\n\n2022-01-18 14:52:35.484 EST [82470:4] LOG: server process (PID 82487)\nwas terminated by signal 4: Illegal instruction: 4\n2022-01-18 14:52:35.484 EST [82470:5] DETAIL: Failed process was\nrunning: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS,\nCHECKPOINT 'fast', MANIFEST 'yes', TARGET 'client')\n\nUnfortunately, I can't reproduce this locally, even with COPT=-Wall\n-Werror -fno-omit-frame-pointer -fsanitize-trap=alignment\n-Wno-deprecated-declarations -DWRITE_READ_PARSE_PLAN_TREES\n-DSTRESS_SORT_INT_MIN -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS.\n\nTom, any chance you can get a stack trace?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Jan 2022 15:54:19 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Andres pointed out to me that longfin is sad:\n\n> 2022-01-18 14:52:35.484 EST [82470:4] LOG: server process (PID 82487)\n> was terminated by signal 4: Illegal instruction: 4\n\n> Tom, any chance you can get a stack trace?\n\nHmm, I'd assumed that was just a cosmic ray or something.\nI'll check if it reproduces, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 16:36:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Andres pointed out to me that longfin is sad:\n>\n> > 2022-01-18 14:52:35.484 EST [82470:4] LOG: server process (PID 82487)\n> > was terminated by signal 4: Illegal instruction: 4\n>\n> > Tom, any chance you can get a stack trace?\n>\n> Hmm, I'd assumed that was just a cosmic ray or something.\n> I'll check if it reproduces, though.\n\nThomas pointed out to me that thorntail also failed, and that it\nincluded a backtrace. Unfortunately it's somewhat confusing. The\ninnermost frame is:\n\n#0 0x00000100006319a4 in bbsink_archive_contents (len=<optimized\nout>, sink=<optimized out>) at\n/home/nm/farm/sparc64_deb10_gcc_64_ubsan/HEAD/pgsql.build/../pgsql/src/backend/replication/basebackup.c:1672\n1672 return true;\n\nLine 1672 of basebackup.c is indeed \"return true\" but we're inside of\nsendFile(), not bbsink_archive_contents(). However,\nbbsink_archive_contents() is an inline function so maybe the failure\nis misattributed. I wonder whether the \"sink\" pointer in that function\nis somehow not valid ... but I don't know how that would happen, or\nwhy it would happen only on this machine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Jan 2022 16:58:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "I wrote:\n>> Tom, any chance you can get a stack trace?\n\n> Hmm, I'd assumed that was just a cosmic ray or something.\n\nMy mistake: it's failing because of -fsanitize=alignment.\n\nHere's the stack trace:\n\n * frame #0: 0x000000010885dfd0 postgres`sendFile(sink=0x00007fdedf071cb0, readfilename=\"./global/4178\", tarfilename=\"global/4178\", statbuf=0x00007ffee77dfaf8, missing_ok=true, dboid=0, manifest=0x00007ffee77e2780, spcoid=0x0000000000000000) at basebackup.c:1552:10\n frame #1: 0x000000010885cb7f postgres`sendDir(sink=0x00007fdedf071cb0, path=\"./global\", basepathlen=1, sizeonly=false, tablespaces=0x00007fdedf072718, sendtblspclinks=true, manifest=0x00007ffee77e2780, spcoid=0x0000000000000000) at basebackup.c:1354:12\n frame #2: 0x000000010885ca6b postgres`sendDir(sink=0x00007fdedf071cb0, path=\".\", basepathlen=1, sizeonly=false, tablespaces=0x00007fdedf072718, sendtblspclinks=true, manifest=0x00007ffee77e2780, spcoid=0x0000000000000000) at basebackup.c:1346:13\n frame #3: 0x00000001088595be postgres`perform_base_backup(opt=0x00007ffee77e2e68, sink=0x00007fdedf071cb0) at basebackup.c:352:5\n frame #4: 0x0000000108856b0b postgres`SendBaseBackup(cmd=0x00007fdedf05b510) at basebackup.c:932:3\n frame #5: 0x00000001088711c8 postgres`exec_replication_command(cmd_string=\"BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', MANIFEST 'yes', TARGET 'client')\") at walsender.c:1734:4 [opt]\n frame #6: 0x00000001088dd61e postgres`PostgresMain(dbname=<unavailable>, username=<unavailable>) at postgres.c:4494:12 [opt]\n\nIt failed at\n\n-> 1552 if (!PageIsNew(page) && PageGetLSN(page) < sink->bbs_state->startptr)\n\nand the problem is evidently that the page pointer isn't nicely aligned:\n\n(lldb) p page\n(char *) $4 = 0x00007fdeded7e041 \"\"\n\n(I checked the \"sink\" data structure too for luck, but it seems fine.)\n\nI see that thorntail has now also fallen over, presumably for\nthe same reason.\n\n\t\t\tregards, tom 
lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 17:06:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Unfortunately, I can't reproduce this locally, even with COPT=-Wall\n> -Werror -fno-omit-frame-pointer -fsanitize-trap=alignment\n> -Wno-deprecated-declarations -DWRITE_READ_PARSE_PLAN_TREES\n> -DSTRESS_SORT_INT_MIN -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS.\n\nNow that I re-read what you did, I believe you need both of\n\n-fsanitize=alignment -fsanitize-trap=alignment\n\nto enable those traps to happen. That seems to be the case with\nApple's clang, anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 17:12:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "On 2022-01-18 17:12:00 -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Unfortunately, I can't reproduce this locally, even with COPT=-Wall\n> > -Werror -fno-omit-frame-pointer -fsanitize-trap=alignment\n> > -Wno-deprecated-declarations -DWRITE_READ_PARSE_PLAN_TREES\n> > -DSTRESS_SORT_INT_MIN -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS.\n>\n> Now that I re-read what you did, I believe you need both of\n>\n> -fsanitize=alignment -fsanitize-trap=alignment\n>\n> to enable those traps to happen. That seems to be the case with\n> Apple's clang, anyway.\n\nFWIW, I can reproduce it on linux, but only if I -fno-sanitize-recover instead\nof -fsanitize-trap=alignment. That then also produces a nicer explanation of\nthe problem:\n\n/home/andres/src/postgresql/src/backend/replication/basebackup.c:1552:10: runtime error: member access within misaligned address 0x000002b9ce09 for type 'PageHeaderData' (aka 'struct PageHeaderData'), which requires 4 byte alignment\n0x000002b9ce09: note: pointer points here\n 00 00 00 64 00 00 00 00 c8 ad 0c 01 c5 1b 00 00 48 00 f0 1f f0 1f 04 20 00 00 00 00 62 31 05 00\n ^\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /home/andres/src/postgresql/src/backend/replication/basebackup.c:1552:10 in\n2022-01-18 17:36:17.746 PST [1448756] LOG: server process (PID 1448774) exited with exit code 1\n2022-01-18 17:36:17.746 PST [1448756] DETAIL: Failed process was running: BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, CHECKPOINT 'fast', MANIFEST 'yes', TARGET 'client')\n\nThe problem originates in bbsink_copystream_begin_backup()...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jan 2022 17:49:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now that I re-read what you did, I believe you need both of\n>\n> -fsanitize=alignment -fsanitize-trap=alignment\n>\n> to enable those traps to happen. That seems to be the case with\n> Apple's clang, anyway.\n\nAh, I guess I copied and pasted the options wrong, or something.\nAnyway, I have an idea how to fix this. I didn't realize that we were\ngoing to read from the bbsink's buffer like this, and it's not\nproperly aligned for that. I'll jigger things around to fix that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Jan 2022 20:55:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 8:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Ah, I guess I copied and pasted the options wrong, or something.\n> Anyway, I have an idea how to fix this. I didn't realize that we were\n> going to read from the bbsink's buffer like this, and it's not\n> properly aligned for that. I'll jigger things around to fix that.\n\nHere's a patch. I'm still not able to reproduce the problem either\nwith the flags you propose (which don't cause a failure) or the ones\nwhich Andres suggests (which make clang bitterly unhappy) or the ones\nclang says I should use instead of the ones Andres suggests (which\nmake initdb fall over, so we never even get to the point of attempting\nanything related to the code this patch modified).\n\nHere's a patch, based in part on some off-list discussion with Andres.\nI believe Andres has already confirmed that this fix works, but it\nwouldn't hurt if Tom wants to verify it also.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 18 Jan 2022 21:23:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Here's a patch, based in part on some off-list discussion with Andres.\n> I believe Andres has already confirmed that this fix works, but it\n> wouldn't hurt if Tom wants to verify it also.\n\nWFM too --- at least, pg_basebackup's \"make check\" passes now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 21:29:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 9:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Here's a patch, based in part on some off-list discussion with Andres.\n> > I believe Andres has already confirmed that this fix works, but it\n> > wouldn't hurt if Tom wants to verify it also.\n>\n> WFM too --- at least, pg_basebackup's \"make check\" passes now.\n\nThanks for checking. Committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Jan 2022 08:16:17 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Modify pg_basebackup to use a new COPY subprotocol for\n base back"
}
] |
[
{
"msg_contents": "Hello,\r\n\r\nThe doc mentions that standby_slot_names is to only be used for waiting on physical slots. However, it seems like we calculate the flush_pos for logical slots as well in wait_for_standby_confirmation?\r\n\r\nRe-posting some of my previous comment since it seems like it got lost...\r\n\r\nIn wait_for_standby_confirmation, should we move standby_slot_names -> namelist into the while (true) block so if we have wrong values set we fix it with a SIGHUP? Similarly, in slotsync.c, we never update standby_slot_names once the worker is launched. \r\n\r\nIf a logical slot was dropped on the writer, should the worker drop logical slots that it was previously synchronizing but are no longer present? Or should we leave that to the user to manage? I'm trying to think why users would want to sync logical slots to a reader but not have that be dropped if we're able to detect they're no longer present on the writer. Maybe if there was a use-case to set standby_slot_names one-at-a-time, you wouldn't want other logical slots to be dropped, but dropping sounds like reasonable behavior for '*'?\r\n\r\nIs there a reason we're deciding to use one-worker syncing per database instead of one general worker that syncs across all the databases? I imagine I'm missing something obvious here.\r\n\r\nAs for how standby_slot_names should be configured, I'd prefer the flexibility similar to what we have for synchronus_standby_names since that seems the most analogous. It'd provide flexibility for failovers, which I imagine is the most common use-case. \r\n\r\nJohn H\r\n\r\nOn 1/3/22, 5:49 AM, \"Peter Eisentraut\" <peter.eisentraut@enterprisedb.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n Here is an updated patch to fix some build failures. 
No feature changes.\r\n\r\n On 14.12.21 23:12, Peter Eisentraut wrote:\r\n > On 31.10.21 11:08, Peter Eisentraut wrote:\r\n >> I want to reactivate $subject. I took Petr Jelinek's patch from [0],\r\n >> rebased it, added a bit of testing. It basically works, but as\r\n >> mentioned in [0], there are various issues to work out.\r\n >>\r\n >> The idea is that the standby runs a background worker to periodically\r\n >> fetch replication slot information from the primary. On failover, a\r\n >> logical subscriber would then ideally find up-to-date replication\r\n >> slots on the new publisher and can just continue normally.\r\n >\r\n >> So, again, this isn't anywhere near ready, but there is already a lot\r\n >> here to gather feedback about how it works, how it should work, how to\r\n >> configure it, and how it fits into an overall replication and HA\r\n >> architecture.\r\n >\r\n > Here is an updated patch. The main changes are that I added two\r\n > configuration parameters. The first, synchronize_slot_names, is set on\r\n > the physical standby to specify which slots to sync from the primary. By\r\n > default, it is empty. (This also fixes the recovery test failures that\r\n > I had to disable in the previous patch version.) The second,\r\n > standby_slot_names, is set on the primary. It holds back logical\r\n > replication until the listed physical standbys have caught up. That\r\n > way, when failover is necessary, the promoted standby is not behind the\r\n > logical replication consumers.\r\n >\r\n > In principle, this works now, I think. I haven't made much progress in\r\n > creating more test cases for this; that's something that needs more\r\n > attention.\r\n >\r\n > It's worth pondering what the configuration language for\r\n > standby_slot_names should be. Right now, it's just a list of slots that\r\n > all need to be caught up. 
More complicated setups are conceivable.\r\n > Maybe you have standbys S1 and S2 that are potential failover targets\r\n > for logical replication consumers L1 and L2, and also standbys S3 and S4\r\n > that are potential failover targets for logical replication consumers L3\r\n > and L4. Viewed like that, this setting could be a replication slot\r\n > setting. The setting might also have some relationship with\r\n > synchronous_standby_names. Like, if you have synchronous_standby_names\r\n > set, then that's a pretty good indication that you also want some or all\r\n > of those standbys in standby_slot_names. (But note that one is slots\r\n > and one is application names.) So there are a variety of possibilities.\r\n\r\n",
"msg_date": "Tue, 18 Jan 2022 23:30:37 +0000",
"msg_from": "\"Hsu, John\" <hsuchen@amazon.com>",
"msg_from_op": true,
"msg_subject": "Synchronizing slots from primary to standby"
}
] |
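The standby_slot_names behavior described above — holding back logical replication until the listed physical standbys have caught up — can be sketched as a toy model. Everything here (slot names, integer "LSN" values, the function name) is made up for illustration; the real implementation compares replication slot LSNs inside the walsender.

```python
def logical_send_limit(candidate_lsn, standby_flush_lsns, standby_slot_names):
    """Highest position logical replication may send, modeled as plain ints.

    standby_flush_lsns: physical slot name -> confirmed flush position.
    An empty standby_slot_names means no gating (the default)."""
    if not standby_slot_names:
        return candidate_lsn
    # Logical consumers may never get ahead of the slowest listed standby,
    # so after failover the promoted standby is not behind them.
    slowest = min(standby_flush_lsns[name] for name in standby_slot_names)
    return min(candidate_lsn, slowest)

flush = {"s1": 1000, "s2": 800, "s3": 1200}
print(logical_send_limit(1100, flush, ["s1", "s2"]))  # 800: held back by s2
print(logical_send_limit(1100, flush, []))            # 1100: no gating
```

This also shows why the per-slot grouping discussed above (S1/S2 for L1/L2, S3/S4 for L3/L4) matters: the gating set determines which standby's lag holds back which consumers.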
[
{
"msg_contents": "Hi.\n\nI stumbled onto a small quirk/bug in the tab-complete code.\n\nThere are some places that suggest tab completions using the current\ntable columns. These are coded like:\nCOMPLETE_WITH_ATTR(prev2_wd, \"\");\n\nThe assumption is that the prev2_wd represents the table to select from.\n\nNormally, this works fine. However, there are also cases where a\ntable-list can be specified (not just a single table) and in this\nscenario, the 'prev2_wd' can sometimes become confused about which\ntable name to use.\n\ne.g.\n\nIf there are spaces in the table-list like \"t1, t2\" then the word is\nrecognized as \"t2\" and it works as expected.\n\nBut, if there are no spaces in the table-list like \"t1,t2\" then the\nword is recognized as \"t1,t2\", and since there is no such table name\nthe COMPLETE_WITH_ATTR does nothing.\n\n~~\n\nExamples (press <tab> after the \"(\")\n\n// setup\n\ntest=# create table t1(a int, b int, c int);\ntest=# create table t2(d int, e int, f int);\n\n// test single table --> OK\n\ntest=# analyze t1 (\na b c\ntest=# analyze t2 (\nd e f\n\n// test table-list with spaces --> OK\n\ntest=# analyze t1, t2 (\nd e f\ntest=# analyze t2, t1 (\na b c\n\n// test table-list without spaces --> does not work\n\ntest=# analyze t2,t1 (\n\n~~\n\nI found that this is easily fixed just by adding a comma to the\nWORD_BREAKS. Then words all get tokenized properly and so 'prev2_wd'\nis what you'd like it to be.\n\n /* word break characters */\n-#define WORD_BREAKS \"\\t\\n@$><=;|&{() \"\n+#define WORD_BREAKS \"\\t\\n,@$><=;|&{() \"\n\nOTOH, this seemed a pretty fundamental change to the 12-year-old (!!)\ncode so I don't know if it may be too risky and/or could adversely\naffect something else?\n\nThe tests are all still passing, but there aren't so many tab-complete\ntests anyway so that might not mean much.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 19 Jan 2022 11:16:20 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "tab-complete COMPLETE_WITH_ATTR can become confused by table-lists."
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> If there are spaces in the table-list like \"t1, t2\" then the word is\n> recognized as \"t2\" and it works as expected.\n> But, if there are no spaces in the table-list like \"t1,t2\" then the\n> word is recognized as \"t1,t2\", and since that is no such table name\n> the COMPLETE_WITH_ATTR does nothing.\n\nHmm, another variant is\n\n=# create table foobar(z int);\nCREATE TABLE\n=# analyze foo<TAB> --> completes \"foobar\"\n=# analyze foobar,foo<TAB> --> nothing\n\n> I found that this is easily fixed just by adding a comma to the\n> WORD_BREAKS. Then words all get tokenized properly and so 'prev2_wd'\n> is what you'd like it to be.\n> /* word break characters */\n> -#define WORD_BREAKS \"\\t\\n@$><=;|&{() \"\n> +#define WORD_BREAKS \"\\t\\n,@$><=;|&{() \"\n\nNice catch. Now that I look at it, that WORD_BREAKS list seems quite\nrandom -- for example, why \"{\" but not \"}\", and why neither of \"[\"\nor \"]\"? If we have \"><=\", why not \"+-*/\", to say nothing of other\noperator characters?\n\nI can see reasons for not listing ' \" or :, because those are handled\nelsewhere. But I'm not sure about the other omissions.\n\nExperimenting a bit, I see that\n\n=# create table \"amp&sand\" (f int);\nCREATE TABLE\n=# insert into \"amp<TAB> --> completes \"amp&sand\"\n=# insert into \"amp&<TAB> --> nothing\n\nSo populating WORD_BREAKS more fully would tend to break completion\nof names using the added characters. But probably the answer for\nthat is to have less ad-hoc handling of quoted names. (See also\nmy screed at [1].) 
Anyway, optimizing for weird quoted names is\nprobably not what we want to do here.\n\nI feel like we should populate WORD_BREAKS more fully and document\nthe reasons for omissions, rather than leave future hackers to\nguess about it.\n\n> OTOH, this seemed a pretty fundamental change to the 12-year-old (!!)\n> code so I don't know if it may be too risky and/or could adversely\n> affect something else?\n\nIt's a bit scary, and I wouldn't consider back-patching it, but\nTBH tab-complete.c is all chewing gum and baling wire anyway.\nWhat I'd *really* like to do is nuke the whole thing and build\nsomething that's mechanically derived from the actual backend\ngrammar. But I don't know how to make that happen. In the\nmeantime, incrementally moving it towards the real SQL parsing\nrules seems like it ought to be an improvement.\n\n> The tests are all still passing, but there aren't so many tab-complete\n> tests anyway so that might not mean much.\n\nYeah, we certainly haven't got coverage for these sorts of details.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3547066.1642272686%40sss.pgh.pa.us\n\n\n",
"msg_date": "Tue, 18 Jan 2022 20:11:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tab-complete COMPLETE_WITH_ATTR can become confused by\n table-lists."
}
] |
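The tokenization quirk discussed in this thread can be reproduced outside psql with a toy model of readline-style word splitting. This is a simplification of what tab-complete.c actually does (the function and its behavior at the trailing "(" are illustrative only), but it shows why adding ',' to WORD_BREAKS makes the word before the parenthesis come out as a real table name.

```python
OLD_WORD_BREAKS = "\t\n@$><=;|&{() "
NEW_WORD_BREAKS = "\t\n,@$><=;|&{() "   # comma added, as in the proposed patch

def split_words(line, breaks):
    """Toy model of word splitting: characters in `breaks` terminate the
    current word and are otherwise discarded."""
    words, cur = [], ""
    for ch in line:
        if ch in breaks:
            if cur:
                words.append(cur)
            cur = ""
        else:
            cur += ch
    if cur:
        words.append(cur)
    return words

# Without the comma break, "t1,t2" stays a single word, so the completion
# code looks up attributes of a table literally named "t1,t2" and finds none.
print(split_words("analyze t1,t2 (", OLD_WORD_BREAKS))  # ['analyze', 't1,t2']
print(split_words("analyze t1,t2 (", NEW_WORD_BREAKS))  # ['analyze', 't1', 't2']
```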
[
{
"msg_contents": "\nTo use exclusion constraints in practice, you often need to install the \nbtree_gist extension, so that you can combine for example a range type \ncheck and normal scalar key columns into one constraint.\n\nThe currently proposed \"application time\" feature [0] (also known more \ngenerally as temporal database) is in many ways essentially an \nSQL-syntax wrapper around typical use cases involving ranges, \nmultiranges, and exclusion constraints. As such, it also needs \nbtree_gist installed in most (all?) cases. I have argued over in that \nthread that it would be weird to have a built-in SQL feature that relied \non an extension to work at all. So I think the way forward would be to \nmove btree_gist into core, and I'm starting this new thread here to give \nthis topic a bit more attention.\n\nSo, first of all, would people agree with this course of action?\n\nI don't have a lot of experience with this module, so I don't know if \nthere are any lingering concerns about whether it is mature enough as a \nbuilt-in feature.\n\nIf we were to do it, then additional discussions could be had about how \nto arrange the code. I suspect we wouldn't just want to copy the files \nas is under utils/adt/, since that's a lot of files.\n\nThere are also of course questions about how to smoothly arrange \nupgrades from extensions to the built-in situations.\n\nThoughts?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com\n\n\n",
"msg_date": "Wed, 19 Jan 2022 09:30:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "btree_gist into core?"
},
{
"msg_contents": "On 1/19/22 09:30, Peter Eisentraut wrote:\n> So, first of all, would people agree with this course of action?\n> \n> I don't have a lot of experience with this module, so I don't know if \n> there are any lingering concerns about whether it is mature enough as a \n> built-in feature.\n\nWhile I like the idea on a conceptual level, I worry about the code \nquality of the module. I know that the btree/cidr code is pretty broken. \nBut I am not sure if there are any issues with other data types.\n\nSee \nhttps://www.postgresql.org/message-id/7891efc1-8378-2cf2-617b-4143848ec895%40proxel.se\n\nAndreas\n\n\n\n",
"msg_date": "Tue, 25 Jan 2022 00:10:30 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: btree_gist into core?"
},
{
"msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 1/19/22 09:30, Peter Eisentraut wrote:\n>> I don't have a lot of experience with this module, so I don't know if \n>> there are any lingering concerns about whether it is mature enough as a \n>> built-in feature.\n\n> While it I like the idea on a conceptual level I worry about the code \n> quality of the module. I know that the btree/cidr code is pretty broken. \n> But I am not sure if there are any issues with other data types.\n\nYeah :-(. We just fixed an issue with its char(n) support too\n(54b1cb7eb), so I don't have a terribly warm feeling about the\nquality of the lesser-used code paths. Still, maybe we could\ndo some code review/testing rather than a blind copy & paste.\n\nI'd also opine that we don't have to preserve on-disk compatibility\nwhile migrating into core, which'd help get out of the sticky problem\nfor inet/cidr. This'd require being able to run the contrib module\nalongside the core version for awhile (to support existing indexes),\nbut I think we could manage that if we tried. IIRC we did something\nsimilar when we migrated tsearch into core.\n\nOne thing I don't know anything about is how good are btree_gist\nindexes performance-wise. Do they have problems that we'd really\nneed to fix to consider them core-quality?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jan 2022 18:29:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: btree_gist into core?"
},
{
"msg_contents": "> Thoughts?\n\nI think it'd be really nice to do this without btree_gist.\n\nI imagine something like this:\n\nCREATE INDEX ON tbl USING gist (\n range_col,\n int_col USING btree\n)\n\nI think this would make the index access methods interface more\nuseful. Index access method developers wouldn't need to provide\noperator classes for all data types. We could extend ACCESS METHOD\ndefinition to allow this:\n\nCREATE ACCESS METHOD my_hash_index\n TYPE INDEX\n IMPLEMENTS hash\n HANDLER my_hash_index_handler\n\nI realise this is a difficult project.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 12:35:17 +0100",
"msg_from": "Emre Hasegeli <emre@hasegeli.com>",
"msg_from_op": false,
"msg_subject": "Re: btree_gist into core?"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jan 19, 2022 at 09:30:11AM +0100, Peter Eisentraut wrote:\n>\n> To use exclusion constraints in practice, you often need to install the\n> btree_gist extension, so that you can combine for example a range type check\n> and normal scalar key columns into one constraint.\n>\n> [...]\n>\n> There are also of course questions about how to smoothly arrange upgrades\n> from extensions to the built-in situations.\n\nI'm not sure if that's what you were thinking of, but I know at least one\nextension (that I'm maintaining) that explicitly relies on btree_gist\nextension, as in \"requires = [...], btree_gist\" in the .control file.\n\nSince you can't really tweak the control file on a per-major-version basis,\nthis will require some extra thoughts to make sure that people can release\nextensions without having to tweak this file in some make rule or something\nlike that.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 13:32:41 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: btree_gist into core?"
}
] |
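As background for why exclusion constraints need btree_gist: a constraint like EXCLUDE USING gist (room WITH =, during WITH &&) mixes a scalar equality column with a range-overlap column in one GiST index, and btree_gist supplies the GiST operator classes for the scalar part. A toy model of the semantics being enforced — the room numbers, half-open time ranges, and function names here are invented for illustration, not how the index actually evaluates the constraint:

```python
def overlaps(a, b):
    """Half-open interval overlap, like range && in SQL."""
    return a[0] < b[1] and b[0] < a[1]

def violates_exclusion(rows, new_row):
    """Model of EXCLUDE USING gist (room WITH =, during WITH &&):
    reject a new row if any existing row matches on the scalar key
    AND overlaps on the range.  The scalar '=' half is what btree_gist
    makes available to a GiST index today."""
    room, during = new_row
    return any(r == room and overlaps(d, during) for r, d in rows)

booked = [(101, (9, 11)), (102, (9, 17))]
print(violates_exclusion(booked, (101, (10, 12))))  # True: room 101 overlaps
print(violates_exclusion(booked, (101, (11, 12))))  # False: adjacent ranges
```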
[
{
"msg_contents": "Hey.\n\nI am facing an issue when I try to run the following command\n\nCOPY <table_name> FROM <file> WITH DELIMITER E',';\n\nThis file is rather large, it's around 178GBs. \n\nWhen I try to run this COPY command I get the following error:\n\nERROR: out of memory\nDETAIL: Failed on request of size 2048 in memory context \"AfterTriggerEvents\".\nCONTEXT: COPY ssbm300_lineorder, line 50796791\n\nClearly a memory allocation function is failing but I have no clue how to fix it.\n\nI have tried experimenting with the shared_buffers value in postgresql.conf but after searching a bit I quickly realized that I do not know what I am doing there, so I left it with the default value. Same with the work_mem value.\n\nDid you face this issue before? Can you help me resolve it?\n\nThanks in advance!\n\n\n\n",
"msg_date": "Wed, 19 Jan 2022 15:01:25 +0200",
"msg_from": "Kostas Chasialis <koschasialis@gmail.com>",
"msg_from_op": true,
"msg_subject": "[ERROR] Copy from CSV fails due to memory error."
},
{
"msg_contents": "\n\nOn 1/19/22 14:01, Kostas Chasialis wrote:\n> Hey.\n> \n> I am facing an issue when I try to run the following command\n> \n> COPY <table_name> FROM <file> WITH DELIMITER E',’;\n> \n> This file, is rather large, it's around 178GBs.\n> \n> When I try to run this COPY command I get the following error:\n> \n> ERROR: out of memory\n> DETAIL: Failed on request of size 2048 in memory context \"AfterTriggerEvents\".\n> CONTEXT: COPY ssbm300_lineorder, line 50796791\n> \n> Clearly a memory allocation function is failing but I have no clue how to fix it.\n> \n> I have tried experimenting with shared_buffers value in postgresql.conf file but after searching a bit I quickly realized that I do not know what I am doing there so I left it with default value. Same with work_mem value.\n> \n> Did you face this issue before? Can you help me resolve it?\n> \n\nWell, it's clearly related to \"after\" triggers - do you have any \nsuch triggers on the table? AFAIK it might be related to deferred \nconstraints (like unique / foreign keys). Do you have anything like that?\n\nIf yes, I guess the only solution is to make the constraints not \ndeferred or split the copy into smaller chunks.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 19 Jan 2022 15:47:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [ERROR] Copy from CSV fails due to memory error."
}
] |
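Tomas's suggestion to split the copy into smaller chunks works because the deferred-constraint trigger queue ("AfterTriggerEvents") grows by one entry per row within a transaction; loading in batches, each committed separately, keeps it bounded. A minimal batching sketch — the batch size, row data, and function name are arbitrary, and the actual per-batch COPY/commit is left out:

```python
def chunks(lines, n):
    """Yield successive batches of n lines; each batch would be loaded with
    its own COPY in its own transaction, so the per-transaction deferred
    trigger event queue stays bounded instead of growing with the file."""
    batch = []
    for line in lines:
        batch.append(line)
        if len(batch) == n:
            yield batch
            batch = []
    if batch:  # final partial batch
        yield batch

rows = [f"row{i}" for i in range(10)]
print([len(b) for b in chunks(rows, 4)])  # [4, 4, 2]
```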
[
{
"msg_contents": "Hi, hackers\n\nWhen I read the code of COPY ... FROM ..., I find there is a redundant\nMemoryContextSwitchTo() in BeginCopyFrom(). In BeginCopyFrom, it creates\na COPY memory context and then switches to it; in the middle of this\nfunction, it switches to the oldcontext and immediately switches back to\nthe COPY memory context. IMO, this is redundant and can be removed safely.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Wed, 19 Jan 2022 22:20:58 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Remove redundant MemoryContextSwith in BeginCopyFrom"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 11:21 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> Hi, hackers\n>\n> When I read the code of COPY ... FROM ..., I find there is a redundant\n> MemoryContextSwith() in BeginCopyFrom(). In BeginCopyFrom, it creates\n> a COPY memory context and then switches to it, in the middle of this\n> function, it switches to the oldcontext and immediately switches back to\n> COPY memory context, IMO, this is redundant, and can be removed safely.\n>\n\nLGTM (it passed all regression without any issue)\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Wed, 19 Jan 2022 12:34:32 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant MemoryContextSwith in BeginCopyFrom"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 7:51 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> Hi, hackers\n>\n> When I read the code of COPY ... FROM ..., I find there is a redundant\n> MemoryContextSwith() in BeginCopyFrom(). In BeginCopyFrom, it creates\n> a COPY memory context and then switches to it, in the middle of this\n> function, it switches to the oldcontext and immediately switches back to\n> COPY memory context, IMO, this is redundant, and can be removed safely.\n\n+1. It looks like a thinko from c532d15d. There's no code in between,\nso switching to oldcontext doesn't make sense.\n\n MemoryContextSwitchTo(oldcontext);\n << no code here >>\n oldcontext = MemoryContextSwitchTo(cstate->copycontext);\n\nI think we also need to remove MemoryContextSwitchTo(oldcontext); at\nthe end of BeginCopyTo in copyto.c, because we are not changing memory\ncontexts in between.\n\ndiff --git a/src/backend/commands/copyto.c b/src/backend/commands/copyto.c\nindex 34c8b80593..5182048e4f 100644\n--- a/src/backend/commands/copyto.c\n+++ b/src/backend/commands/copyto.c\n@@ -742,8 +742,6 @@ BeginCopyTo(ParseState *pstate,\n\n cstate->bytes_processed = 0;\n\n- MemoryContextSwitchTo(oldcontext);\n-\n return cstate;\n }\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 19 Jan 2022 21:05:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant MemoryContextSwith in BeginCopyFrom"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> +1. It looks like a thinko from c532d15d. There's no code in between,\n> so switching to oldcontext doesn't make sense.\n\nAgreed.\n\n> I think we also need to remove MemoryContextSwitchTo(oldcontext); at\n> the end of BeginCopyTo in copyto.c, because we are not changing memory\n> contexts in between.\n\nHmm, I think it'd be a better idea to remove the one in the middle of\nBeginCopyTo. The code after that is still doing setup of the cstate,\nso the early switch back looks to me like trouble waiting to happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jan 2022 11:38:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant MemoryContextSwith in BeginCopyFrom"
},
{
"msg_contents": "\nOn Thu, 20 Jan 2022 at 00:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>> +1. It looks like a thinko from c532d15d. There's no code in between,\n>> so switching to oldcontext doesn't make sense.\n>\n> Agreed.\n>\n>> I think we also need to remove MemoryContextSwitchTo(oldcontext); at\n>> the end of BeginCopyTo in copyto.c, because we are not changing memory\n>> contexts in between.\n>\n> Hmm, I think it'd be a better idea to remove the one in the middle of\n> BeginCopyTo. The code after that is still doing setup of the cstate,\n> so the early switch back looks to me like trouble waiting to happen.\n>\n\nAgreed\n\nI see you have already pushed this patch on master (89f059bdf52), why not\nremove MemoryContextSwitchTo in the middle of BeginCopyTo in this commit?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 20 Jan 2022 08:44:17 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove redundant MemoryContextSwith in BeginCopyFrom"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> I see you have already push this patch on master (89f059bdf52), why not\n> remove MemoryContextSwitchTo in the middle of BeginCopyTo in this commit?\n\nThat was a different suggestion from a different person, so I didn't\nwant to muddle the credit. Also, it requires a bit of testing,\nwhile 89f059bdf52 was visibly perfectly safe.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jan 2022 19:50:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove redundant MemoryContextSwith in BeginCopyFrom"
},
{
"msg_contents": "\nOn Thu, 20 Jan 2022 at 08:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> I see you have already push this patch on master (89f059bdf52), why not\n>> remove MemoryContextSwitchTo in the middle of BeginCopyTo in this commit?\n>\n> That was a different suggestion from a different person, so I didn't\n> want to muddle the credit. Also, it requires a bit of testing,\n> while 89f059bdf52 was visibly perfectly safe.\n>\n\nThanks for your explanation!\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 20 Jan 2022 08:55:45 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove redundant MemoryContextSwith in BeginCopyFrom"
}
] |
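The pattern this thread removes can be shown with a toy "current memory context" variable: a switch back to oldcontext followed immediately by a switch to the COPY context, with nothing in between, leaves the state exactly as it was. This is only a model of the real MemoryContextSwitchTo(), which likewise sets CurrentMemoryContext and returns the previous one; the class and names below are invented.

```python
class Ctx:
    """Toy stand-in for CurrentMemoryContext."""
    current = "TopMemoryContext"

def switch_to(ctx):
    """Like MemoryContextSwitchTo(): install ctx, return the old context."""
    old, Ctx.current = Ctx.current, ctx
    return old

old = switch_to("COPY")       # enter the COPY context; old is TopMemoryContext
before = Ctx.current
switch_to(old)                # the redundant pair removed by the patch:
switch_to("COPY")             # back to oldcontext, then straight back again
print(Ctx.current == before)  # True: the pair has no net effect
```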
[
{
"msg_contents": "Hello,\n\nFor test purposes I need to compile PostgreSQL 14.1 using a 16kb blocksize.\n\nCFLAGS=\"-D WINVER=0x0600 -D _WIN32_WINNT=0x0600\" LIBS=\"-ladvapi32\"\n ./configure --host=x86_64-w64-mingw32 --with-blocksize=16\n--with-wal-blocksize=16 --with-openssl --with-libxml\n--prefix=/c/postgresql/pg14/ 2>&1 | tee configure.log\n\nBelow is the beginning of my configure.log, with no errors: make and make\ninstall ok also.\n\nconfigure: loading site script /etc/config.site\nchecking build system type... x86_64-pc-msys\nchecking host system type... x86_64-w64-mingw32\nchecking which template to use... win32\nchecking whether NLS is wanted... no\nchecking for default port number... 5432\nchecking for block size... 16kB\nchecking for segment size... 1GB\nchecking for WAL block size... 16kB\nchecking for x86_64-w64-mingw32-gcc... x86_64-w64-mingw32-gcc\nchecking whether the C compiler works... yes\n...\n\nDB created successfully using initdb.\n\nUnfortunately my blocksize is still 8kb when checking in DB.\n\npostgres=# show block_size;\n block_size\n------------\n 8192\n(1 row)\n\n\npostgres=# select version();\n version\n-----------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 14.1 on x86_64-w64-mingw32, compiled by\nx86_64-w64-mingw32-gcc.exe (Rev9, Built by MSYS2 project) 10.2.0, 64-bit\n(1 row)\n\nIs there anything additional step I'm missing?\n\nThanks in advance for your help!\nYannick\n\n",
"msg_date": "Wed, 19 Jan 2022 12:07:50 -0500",
"msg_from": "Yannick Collette <yannickcollette@gmail.com>",
"msg_from_op": true,
"msg_subject": "Compiling PostgreSQL for WIndows with 16kb blocksize"
},
{
"msg_contents": "Yannick Collette <yannickcollette@gmail.com> writes:\n> For test purposes I need to compile PostgreSQL 14.1 using a 16kb blocksize.\n\n> CFLAGS=\"-D WINVER=0x0600 -D _WIN32_WINNT=0x0600\" LIBS=\"-ladvapi32\"\n> ./configure --host=x86_64-w64-mingw32 --with-blocksize=16\n> --with-wal-blocksize=16 --with-openssl --with-libxml\n> --prefix=/c/postgresql/pg14/ 2>&1 | tee configure.log\n\nI don't know anything about the Windows-specific details here,\nbut the --with-blocksize option looks fine, and it works for me:\n\nregression=# show block_size;\n block_size \n------------\n 16384\n(1 row)\n\n(Worth noting here is that a lot of our regression tests fail\nat non-default blocksizes, because plans change, or rows no\nlonger need toasting, or the like.)\n\n> Unfortunately my blocksize is still 8kb when checking in DB.\n> postgres=# show block_size;\n> block_size\n> ------------\n> 8192\n> (1 row)\n\nI think you must be connecting to the wrong server.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jan 2022 12:22:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Compiling PostgreSQL for WIndows with 16kb blocksize"
},
{
"msg_contents": "Hi,\n\nOn January 19, 2022 9:07:50 AM PST, Yannick Collette <yannickcollette@gmail.com> wrote:\n>Hello,\n>\n>For test purposes I need to compile PostgreSQL 14.1 using a 16kb blocksize.\n>\n>CFLAGS=\"-D WINVER=0x0600 -D _WIN32_WINNT=0x0600\" LIBS=\"-ladvapi32\"\n> ./configure --host=x86_64-w64-mingw32 --with-blocksize=16\n>--with-wal-blocksize=16 --with-openssl --with-libxml\n>--prefix=/c/postgresql/pg14/ 2>&1 | tee configure.log\n>\n>Below is the beginning of my configure.log, with no errors: make and make\n>install ok also.\n>\n>configure: loading site script /etc/config.site\n>checking build system type... x86_64-pc-msys\n>checking host system type... x86_64-w64-mingw32\n>checking which template to use... win32\n>checking whether NLS is wanted... no\n>checking for default port number... 5432\n>checking for block size... 16kB\n>checking for segment size... 1GB\n>checking for WAL block size... 16kB\n>checking for x86_64-w64-mingw32-gcc... x86_64-w64-mingw32-gcc\n>checking whether the C compiler works... yes\n>...\n>\n>DB created successfully using initdb.\n>\n>Unfortunately my blocksize is still 8kb when checking in DB.\n>\n>postgres=# show block_size;\n> block_size\n>------------\n> 8192\n>(1 row)\n>\n>\n>postgres=# select version();\n> version\n>-----------------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 14.1 on x86_64-w64-mingw32, compiled by\n>x86_64-w64-mingw32-gcc.exe (Rev9, Built by MSYS2 project) 10.2.0, 64-bit\n>(1 row)\n>\n>Is there anything additional step I'm missing?\n\nAny chance this is from an older build? You might need to make clean, if you previously ./configured differently.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 19 Jan 2022 09:44:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Compiling PostgreSQL for WIndows with 16kb blocksize"
}
] |
[
{
"msg_contents": "autovacuum, as it exists today, has some reasonable - though imperfect\n- heuristics for deciding which tables need to be vacuumed or\nanalyzed. However, it has nothing to help it decide which tables need\nto be vacuumed or analyzed first. As a result, when it's configured\naggressively enough, and has sufficient hardware resources, to do all\nthe things that need to get done, the system holds together reasonably\nwell. When it's not configured sufficiently aggressively, or when\nhardware resources are lacking, the situation deteriorates. Some\ndeterioration is inevitable in that kind of scenario, because if you\ncan't do all of the things that need to be done, something will have\nto get postponed, and that can cause problems. However, autovacuum\nmakes it worse than it needs to be by deciding which things to do\nfirst more or less at random. When autovacuum chooses to vacuum a\ntable that is a little bit bloated in preference to one that is\nseverely bloated, or one that is bloated in place of one that is about\nto cause a wraparound shutdown, users are understandably upset, and\nsome of them post on this mailing list about it. I am not going to try\nto review in detail the history of previous threads looking for action\nin this area, but there are quite a few.\n\nIn my view, previous efforts in this area have been too simplistic.\nFor example, it's been proposed that a table that is perceived to be\nin any kind of wraparound danger ought to get top priority, but I find\nthat implausible. A mostly-quiescent table that is one XID past\nautovacuum_freeze_max_age is not likely to be a more urgent problem\nthan a table that is absorbing 20,000 row updates per second. There is\na long way between the default value of autovacuum_freeze_max_age and\na wraparound shutdown, but if you don't vacuum a hotly-updated table\npromptly, you will get irreversible bloat and force an application\nshutdown to run VACUUM FULL. 
It's also been proposed that we provide\nsome way to let the user fix priorities, which was rightly criticized\nas foisting on the user what the system ought to be figuring out for\nitself. Perhaps a user-controllable prioritization system is a good\nidea, but only as a way of overriding some built-in system in cases\nwhere that goes wrong. A few people have proposed scoring systems,\nwhich I think is closer to the right idea, because our basic goal is\nto start vacuuming any given table soon enough that we finish\nvacuuming it before some catastrophe strikes. The more imminent the\ncatastrophe, the more urgent it is to start vacuuming right away.\nAlso, and I think this is very important, the longer vacuum is going\nto take, the more urgent it is to start vacuuming right away. If table\nA will cause wraparound in 2 hours and take 2 hours to vacuum, and\ntable B will cause wraparound in 1 hour and take 10 minutes to vacuum,\ntable A is more urgent even though the catastrophe is further out.\n\nSo at a high level, I think that what we ought to do is, first, for\neach table, estimate the time at which we think something bad will\noccur. There are several bad events: too much bloat, XID wraparound,\nMXID wraparound. We need to estimate the time at which we think each\nof those things will occur, and then take the earliest of those\nestimates. That's the time by which we need to have finished vacuuming\nthe table. Then, make an estimate of how long it will take to complete\na vacuum of the table, subtract that from the time at which we need to\nbe done, and that's the time by which we need to start. The earliest\nneed-to-start time is the highest priority table.\n\nThere are a number of problems here. 
One is that we actually need to\nbe able to estimate all the things that I just described, which will\nprobably require tracking statistics that we don't capture today, such\nas the rate at which the system is consuming XIDs, and the rate at\nwhich a table is bloating, rather than just the current state of\nthings. Unfortunately, proposing that the statistics system should\nstore more per-table information is a well-known way to get your patch\nforcefully rejected. Maybe if we were storing and updating the\nstatistics data in a better way it wouldn't be an issue - so perhaps\nshared memory stats collector stuff will resolve this issue somehow.\nOr maybe it's not an issue anyway, since the big problem with the\nstatistics files is that they have to be constantly rewritten. If we\ntook snapshots of the values in even a relatively dumb way, they'd be\nkinda big, but they'd also be write-once. Maybe that would keep the\nexpense reasonable.\n\nA second problem is that, if the earliest need-to-start time is in the\npast, then we definitely are in trouble and had better get to work at\nonce, but if it's in the future, that doesn't necessarily mean we're\nsafe. If there are three tables with a need-to-finish time that is 12\nhours in the future and each of them will take 11 hours to vacuum,\nthen every need-to-start time computed according to the algorithm\nabove is in the future, but in fact we're in a lot of trouble. If the\nestimates are accurate, we need 3 autovacuum workers to be available\nto start within 1 hour, or we're doomed. The very last thing we want\nto do is wait another hour before doing anything. It's not impossible\nto factor this into the calculation of need-to-start times, assuming\nwe know how many workers we have. For instance, if we've got tables\nwhose need-to-finish times are 30, 50, and 70 minutes in the future,\nwe can see that if each one takes 20 minutes or less to vacuum, then\nthe need-to-start times can just be computed by subtraction. 
But if the\ntables with 50 or 70 minute deadlines are going to take more than 20\nminutes to vacuum, then we've got to back up the need-to-start times\nso that we finish each table in time to start on the next one. I\nhaven't looked into what algorithms exist for this kind of scheduling\nproblem, but I feel like a literature search, or pulling out my\ncollege textbooks, would probably turn up some useful ideas.\n\nAs soon as you see that you can't decide when you need to start on a\nparticular table without knowing what's going on with all the other\ntables on the system, a third problem becomes apparent: you can't\nfigure anything out with confidence by looking at a single database,\nbut must rather gather information from all databases and decide to\nwhich databases the workers should connect and what they ought to do\nwhen they get there. And then, as tasks finish and system activity\nprogresses, you need to continuously update your notion of what needs\nto be done next and move workers around to accommodate it. This gets\nquite complicated, but that doesn't mean that it's unimportant.\nThere's a pretty well-known \"thundering herd\" type effect when every\ntable in the system crosses autovacuum_freeze_max_age around the same\ntime, and suddenly we go from not much vacuuming to a whole ton of\nvacuuming all at once. A system like this could give us enough\ninformation to spread that out in an intelligent way: we could see the\nstorm coming and start a single autovacuum worker working on the\nproblem well in advance, and then ramp up to multiple workers only if\nit looks like that's not going to be enough to get the job done. 
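For what it's worth, the single-worker version of that back-up-the-need-to-start-times adjustment reduces to a backwards scan over the deadline-sorted tables. A rough illustrative sketch, not a worked-out proposal:

```python
# Illustrative single-worker sketch (not PostgreSQL code): given tables
# sorted by need-to-finish time, walk backwards so that each start time
# is constrained both by the table's own deadline and by having to
# finish before the next table needs to start.

def latest_starts(tables):
    # tables: list of (need_to_finish, vacuum_duration), deadline order
    starts = [0] * len(tables)
    next_start = float('inf')
    for i in range(len(tables) - 1, -1, -1):
        deadline, duration = tables[i]
        next_start = min(deadline, next_start) - duration
        starts[i] = next_start
    return starts

# Deadlines 30, 50, 70 minutes out; durations 20, 25, 25 minutes.
# Naive subtraction says start at 10, 25, and 45 minutes; chaining the
# work on one worker pushes the first start back to right now.
assert latest_starts([(30, 20), (50, 25), (70, 25)]) == [0, 20, 45]
```

The multi-worker generalization is where the literature search would come in.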
I'm\nnot sure I want to deal with all that complexity on day one, but I\nthink it's important to do something about it eventually.\n\nIn the meantime, I think a sensible place to start would be to figure\nout some system that makes sensible estimates of how soon we need to\naddress bloat, XID wraparound, and MXID wraparound for each table, and\nsome system that estimates how long each one will take to vacuum.\nThen, even if the workers aren't talking to each other, each\nindividual worker can make an effort to deal with the most urgent\ntasks first. I think that estimating how long it will take to vacuum a\ntable shouldn't be too bad: examining the visibility map and the index\nsizes and thinking about the autovacuum cost limits should give us at\nleast some notion of what's likely to happen. Also, I think estimating\nthe time until XID age becomes critical isn't too bad either. First,\nfix a threshold (perhaps autovacuum_freeze_max_age, maybe with a\nhigher-than-current value, or maybe some other threshold entirely)\nthat represents the target below which we always want to remain.\nSecond, know something about how fast the system is consuming XIDs.\nThen just divide. I thought for a while that it would be too hard to\nunderstand the XID consumption rate, because it might be very uneven,\nbut I think we can mitigate that problem somewhat by averaging over\nrelatively long time intervals. For instance, we might measure the\nnumber of XIDs consumed in each 24 hour period, keep 7 days of data,\nand then take the average of those values, or maybe better, the\nhighest or second-highest. That's a very small amount of data to store\nand in combination with the relfrozenxid for each table, it's all we\nneed. We could also give the user a way to override our estimate of\nthe XID consumption rate, in case they have very brief, very severe\nspikes. All of this can also be done for MXID age. 
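To show the divide step with toy numbers (none of this is real PostgreSQL code or data, and the function name is invented):

```python
# Toy numbers for the divide step: 7 daily XID-consumption samples, a
# pessimistic rate taken as the second-highest sample, and the XID
# headroom below the chosen threshold divided by that rate.

def days_until_xid_critical(table_xid_age, threshold, daily_xids):
    rate = sorted(daily_xids)[-2]        # second-highest daily sample
    return (threshold - table_xid_age) / rate

# A one-day spike of 50M XIDs gets discounted by the second-highest
# rule; the table is 200M XIDs old against an 800M threshold.
samples = [10_000_000] * 5 + [50_000_000, 12_000_000]
assert days_until_xid_critical(200_000_000, 800_000_000, samples) == 50.0
```

So under those assumptions we'd conclude this table needs to be finished within about 50 days, which then feeds the need-to-start calculation.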
It's estimating\nthe time at which table bloat will exceed some threshold that\nseems most difficult, because that seems to require measuring trends\non a per-table basis, as opposed to the XID consumption rate, which is\nglobal.\n\nI know that this email is kind of a giant wall of text, so my thanks\nif you've read this far, and even more if you feel inspired to write\nback with your own thoughts.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 Jan 2022 14:23:45 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "autovacuum prioritization"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 11:24 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> In my view, previous efforts in this area have been too simplistic.\n> For example, it's been proposed that a table that is perceived to be\n> in any kind of wraparound danger ought to get top priority, but I find\n> that implausible.\n\nI agree that it doesn't follow that table A should be more of a\npriority than table B, either because it has a greater age, or because\nits age happens to exceed some actually-arbitrary threshold. But I\nwill point out that my ongoing work on freezing does make something\nalong these lines much more plausible. As I said over on that thread,\nthere is now a kind of \"natural variation\" among tables, in terms of\nrelfrozenxid, as a result of tracking the actual oldest XID, and using\nthat (plus the emphasis on advancing relfrozenxid wherever possible).\nAnd so we'll have a much better idea of what's going on with each\ntable -- it's typically a precise XID value from the table, from the\nrecent past.\n\nAs of today, on HEAD, the picture is rather fuzzy. If a table has a\nreally old relminmxid, then which is more likely: 1. there are lots of\nremaining MultiXactIds references by the old table, or 2. it has been\na while since that table was last aggressively vacuumed, and it\nactually has exactly zero MultiXactId references? I would guess 2\nmyself, but right now I could never be too sure. But, in a world where\nwe consistently advance relfrozenxid and relminmxid, then *not*\nadvancing them (or advancing either by relatively little in one\nparticular table) becomes a strong signal, in a way that it just isn't\ncurrently.\n\nThis is a negative signal, not a positive signal. 
And as you yourself\ngo on to say, that's what any new heuristics for this stuff ought to\nbe exclusively concerned with -- what not to allow to happen, ever.\nThere is a great deal of diversity among healthy databases; it's\nhard to make generalizations about them that work. But unhealthy (really\nvery unhealthy) states are *far* easier to recognize and understand,\nwithout really needing to understand the workload itself at all.\n\nSince we now have the failsafe, the scheduling algorithm can afford to\nnot give too much special attention to table age until we're maybe\nover the 1 billion age mark -- or even 1.5 billion+. But once the\nscheduling stuff starts to give table age special attention, it should\nprobably become the dominant consideration, by far, completely\ndrowning out any signals about bloat. It's kinda never really supposed\nto get that high, so when we do end up there it is reasonable to fully\nfreak out. Unlike the bloat criteria, the wraparound safety criteria\ndoesn't seem to have much recognizable space between not worrying at\nall, and freaking out.\n\n> A second problem is that, if the earliest need-to-start time is in the\n> past, then we definitely are in trouble and had better get to work at\n> once, but if it's in the future, that doesn't necessarily mean we're\n> safe.\n\nThere is a related problem that you didn't mention:\nautovacuum_max_workers controls how many autovacuum workers can run at\nonce, but there is no particular concern for whether or not running\nthat many workers actually makes sense, in any given scenario. As a\ngeneral rule, the system should probably be *capable* of running a\nlarge number of autovacuums at the same time, but never actually do\nthat (because it just doesn't ever prove necessary). 
Better to have\nthe option and never use it than need it and not have it.\n\n> In the meantime, I think a sensible place to start would be to figure\n> out some system that makes sensible estimates of how soon we need to\n> address bloat, XID wraparound, and MXID wraparound for each table, and\n> some system that estimates how long each one will take to vacuum.\n\nI think that it's going to be hard to model how long index vacuuming\nwill take accurately. And harder still to model which indexes will\nadversely impact the user in some way if we delay vacuuming some more.\nMight be more useful to start off by addressing how to spread out the\nburden of vacuuming over time. The needs of queries matters, but\ncontrolling costs matters too.\n\nOne of the most effective techniques is to manually VACUUM when the\nsystem is naturally idle, like at night time. If that could be\nquasi-automated, or if the criteria used by autovacuum scheduling gave\njust a little weight to how busy the system is right now, then we\nwould have more slack when the system becomes very busy.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 20 Jan 2022 15:54:23 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 6:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I agree that it doesn't follow that table A should be more of a\n> priority than table B, either because it has a greater age, or because\n> its age happens to exceed some actually-arbitrary threshold. But I\n> will point out that my ongoing work on freezing does make something\n> along these lines much more plausible. As I said over on that thread,\n> there is now a kind of \"natural variation\" among tables, in terms of\n> relfrozenxid, as a result of tracking the actual oldest XID, and using\n> that (plus the emphasis on advancing relfrozenxid wherever possible).\n> And so we'll have a much better idea of what's going on with each\n> table -- it's typically a precise XID value from the table, from the\n> recent past.\n\nI agree.\n\n> Since we now have the failsafe, the scheduling algorithm can afford to\n> not give too much special attention to table age until we're maybe\n> over the 1 billion age mark -- or even 1.5 billion+. But once the\n> scheduling stuff starts to give table age special attention, it should\n> probably become the dominant consideration, by far, completely\n> drowning out any signals about bloat. It's kinda never really supposed\n> to get that high, so when we do end up there it is reasonable to fully\n> freak out. Unlike the bloat criteria, the wraparound safety criteria\n> doesn't seem to have much recognizable space between not worrying at\n> all, and freaking out.\n\nI do not agree with all of this. First, on general principle, I think\nsharp edges are bad. If a table had priority 0 for autovacuum 10\nminutes ago, it can't now have priority one million bazillion. If\nyou're saying that the priority of wraparound needs to, in the limit,\nbecome higher than any bloat-based priority, that is reasonable. Bloat\nnever causes a hard stop in the way that wraparound does, even if the\npractical effects are not much different. 
However, if you're saying\nthat the priority should shoot up to the maximum all at once, I don't\nagree with that at all. Second, I think it is good and appropriate to\nleave a lot of slop in the mechanism. As you point out later, we don't\nreally know whether any of our estimates for how long things will take\nare accurate, and therefore we don't know whether the time we've\nbudgeted will be sufficient. We need to leave lots of slop so that\neven if we turn out to be quite wrong, we don't hit a wall.\n\nAlso, it's worth keeping in mind that waiting longer to freak out is\nnot necessarily an advantage. It may well be that the only way the\nproblem will ever get resolved is by human intervention - going in and\nfixing whatever dumb thing somebody did - e.g. resolving the pending\nprepared transaction. In that sense, we might be best off freaking\nout after a relatively small number of transactions, because that\nmight get some human being's attention. In a very real sense, if old\nprepared transactions shut down the system after 100 million\ntransactions, users would probably be better off on average, because\nthe problems would get fixed before so much damage is done. I'm not\nseriously proposing that as a design, but I think it's a mistake to\nthink that pushing off the day of reckoning is necessarily better.\n\nAll that being said, I do agree that trying to keep the table age\nbelow 300 million is too conservative. I think we need to be\nconservative now because we don't take the time that the table will\ntake to vacuum into account, and I think if we start thinking about it\nas a target to finish vacuuming rather than a target to start\nvacuuming, it can go significantly higher. But I would be disinclined\nto go to say, 1.5 billion. If the user hasn't taken any action when we\nhit the 1 billion transaction mark, or really probably a lot sooner,\nthey're unlikely to wake up any time soon. 
I don't think there are\nmany systems out there where vacuum ages >1b are the result of the\nsystem trying frantically to keep up and not having enough juice.\nThere are probably some, but most such cases are the result of\nmisconfiguration, user error, software failure, etc.\n\n> There is a related problem that you didn't mention:\n> autovacuum_max_workers controls how many autovacuum workers can run at\n> once, but there is no particular concern for whether or not running\n> that many workers actually makes sense, in any given scenario. As a\n> general rule, the system should probably be *capable* of running a\n> large number of autovacuums at the same time, but never actually do\n> that (because it just doesn't ever prove necessary). Better to have\n> the option and never use it than need it and not have it.\n\nI agree. And related to that, the more workers we have, the slower\neach one goes, which I think is often counterintuitive for people, and\nalso often counterproductive. I'm sure there are cases where table A\nis really big and needs to be vacuumed but not terribly urgently, and\ntable B is really small but needs to be vacuumed right now, and I/O\nbandwidth is really tight. In that case, slowing down the vacuum on\ntable A so that the vacuum on table B can do its thing is the right\ncall. But what I think is more common is that we get more workers\nbecause the first one is not getting the job done. And if they all get\nslower then we're still not getting the job done, but at greater\nexpense.\n\n> > In the meantime, I think a sensible place to start would be to figure\n> > out some system that makes sensible estimates of how soon we need to\n> > address bloat, XID wraparound, and MXID wraparound for each table, and\n> > some system that estimates how long each one will take to vacuum.\n>\n> I think that it's going to be hard to model how long index vacuuming\n> will take accurately. 
And harder still to model which indexes will\n> adversely impact the user in some way if we delay vacuuming some more.\n\nThose are fair concerns. I assumed that if we knew the number of pages\nin the index, which we do, it wouldn't be too hard to make an estimate\nlike this ... but you know more about this than I do, so tell me why\nyou think that won't work. It's perhaps worth noting that even a\nsomewhat poor estimate could be a big improvement over what we have\nnow.\n\n> Might be more useful to start off by addressing how to spread out the\n> burden of vacuuming over time. The needs of queries matters, but\n> controlling costs matters too.\n>\n> One of the most effective techniques is to manually VACUUM when the\n> system is naturally idle, like at night time. If that could be\n> quasi-automated, or if the criteria used by autovacuum scheduling gave\n> just a little weight to how busy the system is right now, then we\n> would have more slack when the system becomes very busy.\n\nI have thought about this approach but I'm not very hopeful about it\nas a development direction. One problem is that we don't necessarily\nknow when the quiet times are, and another is that there might not\neven be any quiet times. Still, neither of those problems by itself\nwould discourage me from attempting something in this area. The thing\nthat does discourage me is: if you have a quiet period, you can take\nadvantage of that to do vacuuming without any code changes at all.\nYou can just crontab a vacuum that runs with a reduced setting for\nvacuum_freeze_table_age and vacuum_freeze_min_age during your nightly\nquiet period and call it good.\n\nThe problem that I'm principally concerned about here is the case\nwhere somebody had a system that was basically OK and then at some\npoint, bad things started to happen. At some point they realize\nthey're in trouble and try to get back on track. 
Very often,\nautovacuum is actually the enemy in that situation: it insists on\nconsuming resources to vacuum the wrong stuff. Whatever we can do to\navoid such disastrous situations is all to the good, but since we\ncan't realistically expect to avoid them entirely, we need to improve\nthe behavior in the cases where they do happen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 Jan 2022 19:43:11 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 4:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Since we now have the failsafe, the scheduling algorithm can afford to\n> > not give too much special attention to table age until we're maybe\n> > over the 1 billion age mark -- or even 1.5 billion+. But once the\n> > scheduling stuff starts to give table age special attention, it should\n> > probably become the dominant consideration, by far, completely\n> > drowning out any signals about bloat. It's kinda never really supposed\n> > to get that high, so when we do end up there it is reasonable to fully\n> > freak out. Unlike the bloat criteria, the wraparound safety criteria\n> > doesn't seem to have much recognizable space between not worrying at\n> > all, and freaking out.\n>\n> I do not agree with all of this. First, on general principle, I think\n> sharp edges are bad. If a table had priority 0 for autovacuum 10\n> minutes ago, it can't now have priority one million bazillion. If\n> you're saying that the priority of wraparound needs to, in the limit,\n> become higher than any bloat-based priority, that is reasonable.\n\nI'm definitely saying considerations about wraparound need to swamp\neverything else out at the limit. But I'm also making the point that\n(at least with the ongoing relfrozenxid/freezing work) the system does\nremarkably well at avoiding all aggressive anti-wraparound VACUUMs in\nmost individual tables, with most workloads. And so having an\naggressive anti-wraparound VACUUM at all now becomes a pretty strong\nsignal.\n\nAs we discussed on the other thread recently, you're still only going\nto get anti-wraparound VACUUMs in a minority of tables with the\npatches in place -- for tables that won't ever get an autovacuum for\nany other reason. 
And so having an anti-wraparound probably just\nsignals that we have such a table, which is totally inconsequential.\nBut what about when there is an anti-wraparound VACUUM (or a need for\none) on a table whose age is already (say) 2x the value of\nautovacuum_freeze_max_age? That really is an incredibly strong signal\nthat something is very much amiss. Since the relfrozenxid/freezing\npatch series actually makes each VACUUM able to advance relfrozenxid\nin a way that's really robust when the system is not under great\npressure, the failure of that strategy becomes a really strong signal.\n\nSo it's not that table age signals something that we can generalize\nabout too much, without context. The context is important. The\nrelationship between table age and autovacuum_freeze_max_age with the\nnew strategy from my patch series becomes an important negative\nsignal, about something that we reasonably expected to be quite stable\nnot actually being stable.\n\n(Sorry to keep going on about my work, but it really seems relevant.)\n\n> Also, it's worth keeping in mind that waiting longer to freak out is\n> not necessarily an advantage. It may well be that the only way the\n> problem will ever get resolved is by human intervention - going in and\n> fixing whatever dumb thing somebody did - e.g. resolving the pending\n> prepared transaction.\n\nIn that case we ought to try to alert the user earlier.\n\n> Those are fair concerns. I assumed that if we knew the number of pages\n> in the index, which we do, it wouldn't be too hard to make an estimate\n> like this ... but you know more about this than I do, so tell me why\n> you think that won't work. 
It's perhaps worth noting that even a\n> somewhat poor estimate could be a big improvement over what we have\n> now.\n\nI can construct a plausible, totally realistic counter-example that\nbreaks a heuristic like that, unless it focuses on extremes only, like\nno index growth at all since the last VACUUM (which didn't leave\nbehind any deleted pages). I think that such a model can work well,\nbut only if it's designed to matter less and less as our uncertainty\ngrows. It seems as if the uncertainty grows very sharply, once you\nbegin to generalize past the extremes.\n\nWe have to be totally prepared for the model to be wrong, except\nperhaps as a way of prioritizing things when there is real urgency,\nand we don't have a choice about choosing. All models are wrong, some\nare useful.\n\n> The problem that I'm principally concerned about here is the case\n> where somebody had a system that was basically OK and then at some\n> point, bad things started to happen.\n\nIt seems necessary to distinguish between the case where things really\nwere okay for a time, and the case where they merely appeared to be\nokay to somebody whose understanding of the system isn't impossibly\ndeep and sophisticated. You'd have to be an all-knowing oracle to be\nable to tell the difference, because the system itself has no\nsophisticated notion of how far it is into debt. There are things that\nwe can do to address this gap directly (that's what I have been doing\nmyself), but that can only go so far.\n\nISTM that the higher the amount of debt that the system is actually\nin, the greater the uncertainty about the total amount of debt. In\nother words, the advantage of paying down debt isn't limited to the\nobvious stuff; there is also the advantage of gaining confidence about\nhow far into debt the system really is. The longer it's been since the\nlast real VACUUM, the more your model of debt/bloat is likely to have\ndiverged from reality.\n\nAnd that's why I bring costs into it. 
Vacuuming at night because you\nknow that the cost will be relatively low, even if the benefits might\nnot be quite as high as you'd usually expect makes sense on its own\nterms, and also has the advantage of making the overall picture\nclearer to the system/your model.\n\n> At some point they realize\n> they're in trouble and try to get back on track. Very often,\n> autovacuum is actually the enemy in that situation: it insists on\n> consuming resources to vacuum the wrong stuff.\n\nTo some degree this is because the statistics that autovacuum has\naccess to are flat out wrong, even though we could do better. For\nexample, the issue that I highlighted a while back about ANALYZE's\ndead tuples accounting. Or the issue that I pointed out on this thread\nalready, about relfrozenxid being a very bad indicator of what's\nactually going on with XIDs in the table (at least without my\nrelfrozenxid patches in place).\n\nAnother idea centered on costs: with my freezing/relfrozenxid patch\nseries, strict append-only tables like pgbench_history will only ever\nneed to have VACUUM process each heap page once. That's good, but it\ncould be even better if we didn't have to rely on the autovacuum\nscheduling and autovacuum_vacuum_insert_scale_factor to drive\neverything. This is technically a special case, but it's a rather\nimportant one -- it's both very common and not that hard to do a lot\nbetter on. We ought to be aiming to only dirty each page exactly once,\nby *dynamically* deciding to VACUUM much more often than the current\nmodel supposes makes sense.\n\nI think that this would require a two-way dialog between autovacuum.c\nand vacuumlazy.c. 
At a high level, vacuumlazy.c would report back\n\"turns out that that table looks very much like an append-only table\".\nThat feedback would cause the autovacuum.c scheduling to eagerly\nlaunch another autovacuum worker, ignoring the usual criteria -- just\nwait (say) another 60 seconds, and then launch a new autovacuum worker\non the same table if it became larger by some smallish fixed amount\n(stop caring about percentage table growth). Constant mini-vacuums\nagainst such a table make sense, since costs are almost exactly\nproportional to the number of heap pages appended since the last\nVACUUM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 22 Jan 2022 11:48:35 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 12:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> So at a high level, I think that what we ought to do is, first, for\n> each table, estimate the time at which we think something bad will\n> occur. There are several bad events: too much bloat, XID wraparound,\n> MXID wraparound. We need to estimate the time at which we think each\n> of those things will occur, and then take the earliest of those\n> estimates. That's the time by which we need to have finished vacuuming\n> the table. Then, make an estimate of how long it will take to complete\n> a vacuum of the table, subtract that from the time at which we need to\n> be done, and that's the time by which we need to start. The earliest\n> need-to-start time is the highest priority table.\n\n\nI think we need some more parameters to compare bloat vs wraparound.\nI mean in one of your examples in the 2nd paragraph we can say that\nthe need-to-start of table A is earlier than table B so it's kind of\nsimple. But when it comes to wraparound vs bloat we need to add some\nweightage to compute how much bloat is considered as bad as\nwraparound. I think the amount of bloat can not be an absolute number\nbut it should be relative w.r.t the total database size or so. I\ndon't think it can be computed w.r.t to the table size because if the\ntable is e.g. just 1 GB size and it is 5 times bloated then it is not\nas bad as another 1 TB table which is just 2 times bloated.\n\n>\n> A second problem is that, if the earliest need-to-start time is in the\n> past, then we definitely are in trouble and had better get to work at\n> once, but if it's in the future, that doesn't necessarily mean we're\n> safe. If there are three tables with a need-to-finish time that is 12\n> hours in the future and each of them will take 11 hours to vacuum,\n> then every need-to-start time computed according to the algorithm\n> above is in the future, but in fact we're in a lot of trouble. 
If the\n> estimates are accurate, we need 3 autovacuum workers to be available\n> to start within 1 hour, or we're doomed. The very last thing we want\n> to do is wait another hour before doing anything. It's not impossible\n> to factor this into the calculation of need-to-start times, assuming\n> we know how many workers we have. For instance, if we've got tables\n> whose need-to-finish times are 30, 50, and 70 minutes in the future,\n> we can see that if each one takes 20 minutes or less to vacuum, then\n> the need-to-start times can just be computed by subtraction. But if the\n> tables with 50 or 70 minute deadlines are going to take more than 20\n> minutes to vacuum, then we've got to back up the need-to-start times\n> so that we finish each table in time to start on the next one. I\n> haven't looked into what algorithms exist for this kind of scheduling\n> problem, but I feel like a literature search, or pulling out my\n> college textbooks, would probably turn up some useful ideas.\n\n\nI think we should be thinking of dynamically adjusting priority as\nwell. Because it is possible that when autovacuum starts we\nprioritize the tables based on some statistics and estimates, but the\nvacuuming process can take a long time, and during that time some\npriorities might change; so if, at the start of autovacuum, we push all\ntables into some priority queue and simply vacuum in that order, we\nmight go wrong somewhere. I think we need to make different priority\nqueues based on different factors, for example 1 queue for wraparound\nrisk and another for bloat risk. Even though there would be multiple\nqueues, we would have a need_to_start time with each item so that we\nexactly know from which queue we pick the next item, but dynamically,\nwhenever picking an item, we can recheck the priority of the item at\nthe head of the queue, and always assume that the queue is arranged in\norder of need_to_start time. 
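A minimal sketch of one such queue, keyed by need_to_start time (all table names and times here are invented for illustration):

```python
import heapq

# Minimal sketch of a work queue keyed by need_to_start time; table
# names and times are invented for illustration. The cheap move is to
# recheck only the head before dispatching, rather than recomputing
# need_to_start for every queued item on each pick.

queue = []
for need_to_start, table in [(45, 'pgbench_history'),
                             (0, 'orders'),
                             (20, 'events')]:
    heapq.heappush(queue, (need_to_start, table))

order = []
while queue:
    need_to_start, table = heapq.heappop(queue)
    # A real scheduler could recompute this item's priority from fresh
    # statistics here and push it back if it no longer belongs first.
    order.append(table)

assert order == ['orders', 'events', 'pgbench_history']
```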
So now what if the item back in the\nqueue becomes more important than the item at the queue head based on\nsome statistics? I don't think it is wise to compute the\nneed_to_start time for all the items before picking any new item. But\nI think we need to have multiple queues based on different factors\n(not only just wraparound and bloat) to reduce the risk of items in\nthe back of the queue becoming higher priority than items in front of\nthe queue. I mean this can not completely be avoided but this can be\nreduced by creating multiple work queues based on more factors which\ncan dynamically change.\n\n>\n> I know that this email is kind of a giant wall of text, so my thanks\n> if you've read this far, and even more if you feel inspired to write\n> back with your own thoughts.\n\n\nYeah it is a long email but quite interesting.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jan 2022 09:44:35 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 11:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I think we need some more parameters to compare bloat vs wraparound.\n> I mean in one of your examples in the 2nd paragraph we can say that\n> the need-to-start of table A is earlier than table B so it's kind of\n> simple. But when it comes to wraparound vs bloat we need to add some\n> weightage to compute how much bloat is considered as bad as\n> wraparound. I think the amount of bloat can not be an absolute number\n> but it should be relative w.r.t the total database size or so. I\n> don't think it can be computed w.r.t to the table size because if the\n> table is e.g. just 1 GB size and it is 5 times bloated then it is not\n> as bad as another 1 TB table which is just 2 times bloated.\n\nThanks for writing back.\n\nI don't think that I believe the last part of this argument, because\nit seems to suppose that the big problem with bloat is that it might\nuse up disk space, whereas in my experience the big problem with bloat\nis that it slows down access to your data. Yet the dead space in some\nother table will not have much impact on the speed of access to the\ncurrent table. In fact, if most accesses to the table are index scans,\neven dead space in the current table may not have much effect, but\nsequential scans are bound to notice. It's true that, on a\ncluster-wide basis, every dead page is one more page that can\npotentially take up space in cache, so in that sense the performance\nconsequences are global to the whole cluster. However, that effect is\nmore indirect and takes a long time to become a big problem. The\ndirect effect of having to read more pages to execute the same query\nplan causes problems a lot sooner.\n\nBut your broader point that we need to consider how much bloat\nrepresents a problem is a really good one. 
In the past, one rule that\nI've thought about is: if we're vacuuming a table and we're not going\nto finish before it needs to be vacuumed again, then we should vacuum\nfaster (i.e. in effect, increase the cost limit on the fly). That\nmight still not result in good behavior, but it would at least result\nin behavior that is less bad. However, it doesn't really answer the\nquestion of how we decide when to start the very first VACUUM. I don't\nreally know the answer to that question. The current heuristics result\nin estimates of acceptable bloat that are too high in some cases and\ntoo low in others. I've seen tables that got bloated vastly beyond\nwhat autovacuum is configured to tolerate before they caused any real\ndifficulty, and I know there are other cases where users start to\nsuffer long before those thresholds are reached.\n\nAt the moment, the best idea I have is to use something like the\ncurrent algorithm, but treat it as a deadline (keep bloat below this\namount) rather than an initiation criterion (start when you reach this\namount). But I think that idea is a bit weak; maybe there's something\nbetter out there.\n\n> I think we should be thinking of dynamically adjusting priority as\n> well. Because it is possible that when autovacuum started we\n> prioritize the table based on some statistics and estimation but\n> vacuuming process can take long time and during that some priority\n> might change so during the start of the autovacuum if we push all\n> table to some priority queue and simply vacuum in that order then we\n> might go wrong somewhere.\n\nYep. I think we should reassess what to do next after each table.\nPossibly making some exception for really small tables - e.g. if we\nlast recomputed priorities less than 1 minute ago, don't do it again.\n\n> I think we need to make different priority\n> queues based on different factors, for example 1 queue for wraparound\n> risk and another for bloat risk.\n\nI don't see why we want multiple queues. 
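(For illustration, a single "need-to-start" ordering covering both risks
might look like the sketch below -- the deadline numbers are invented,
not real autovacuum statistics.)

```python
import heapq

# Illustrative sketch only: each table gets one "need-to-start" time, the
# earlier of its wraparound deadline and its bloat deadline, so both kinds
# of risk funnel into a single ordering. All numbers here are made up.
def need_to_start(table):
    return min(table["wraparound_deadline"], table["bloat_deadline"])

tables = [
    {"name": "a", "wraparound_deadline": 500, "bloat_deadline": 120},
    {"name": "b", "wraparound_deadline": 80,  "bloat_deadline": 900},
    {"name": "c", "wraparound_deadline": 700, "bloat_deadline": 300},
]

# One queue: always pick whichever table must be started soonest, no matter
# which kind of risk produced that deadline.
queue = [(need_to_start(t), t["name"]) for t in tables]
heapq.heapify(queue)
order = [heapq.heappop(queue)[1] for _ in range(len(tables))]
# order == ["b", "a", "c"]: b's wraparound deadline (80) comes first, then
# a's bloat deadline (120), then c's bloat deadline (300).
```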
We have to answer the\nquestion \"what should we do next?\" which requires us, in some way, to\nfunnel everything into a single prioritization.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jan 2022 14:30:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 11:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But your broader point that we need to consider how much bloat\n> represents a problem is a really good one. In the past, one rule that\n> I've thought about is: if we're vacuuming a table and we're not going\n> to finish before it needs to be vacuumed again, then we should vacuum\n> faster (i.e. in effect, increase the cost limit on the fly).\n\nThat seems reasonable, but I doubt that that's a huge issue in\npractice, now that the default cost limits are more sensible.\n\n> That might still not result in good behavior, but it would at least result\n> in behavior that is less bad. However, it doesn't really answer the\n> question of how we decide when to start the very first VACUUM. I don't\n> really know the answer to that question. The current heuristics result\n> in estimates of acceptable bloat that are too high in some cases and\n> too low in others. I've seen tables that got bloated vastly beyond\n> what autovacuum is configured to tolerate before they caused any real\n> difficulty, and I know there are other cases where users start to\n> suffer long before those thresholds are reached.\n\nISTM that the easiest thing that could be done to improve this is to\ngive some consideration to page-level characteristics. For example, a\npage that has 5 dead heap-only tuples is vastly different to a similar\npage that has 5 LP_DEAD items instead -- and yet our current approach\nmakes no distinction. Chances are very high that if the only dead\ntuples are heap-only tuples, then things are going just fine on that\npage -- opportunistic pruning is actually keeping up. Page-level\nstability over time seems to be the thing that matters most -- we must\nmake sure that the same \"logical rows\" that were inserted around the\nsame time remain on the same block for as long as possible, without\nmixing in other unrelated tuples needlessly. 
In other words, preserve\nnatural locality.\n\nThis is related to the direction of things, and the certain knowledge\nthat VACUUM alone can deal with line pointer bloat. The current state\nof individual pages hints at the direction of things even without\ntracking how things change directly. But tracking the change over time\nin ANALYZE seems better still: if successive ANALYZE operations notice\na consistent pattern where pages that had a non-zero number of LP_DEAD\nitems last time now have a significantly higher number, then it's a\ngood idea to err in the direction of more aggressive vacuuming.\n*Growing* concentrations of LP_DEAD items signal chaos. I think that\nplacing a particular emphasis on pages with non-zero LP_DEAD items as\na qualitatively distinct category of page might well make sense --\nrelatively few blocks with a growing number of LP_DEAD items seems\nlike it should be enough to make autovacuum run aggressively.\n\nAs I pointed out not long ago, ANALYZE does a terrible job of\naccurately counting dead tuples/LP_DEAD items when they aren't\nuniformly distributed in the table -- which is often a hugely\nimportant factor, with a table that is append-mostly with updates and\ndeletes. That's why I suggested bringing the visibility map into it.\nIn general I think that the statistics that drive autovacuum are\ncurrently often quite wrong, even on their own simplistic,\nquantitative terms.\n\n> I don't see why we want multiple queues. We have to answer the\n> question \"what should we do next?\" which requires us, in some way, to\n> funnel everything into a single prioritization.\n\nEven busy production DBs should usually only be vacuuming one large\ntable at a time. Also might make sense to strategically align the work\nwith the beginning of a new checkpoint.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 25 Jan 2022 12:31:50 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 2:30 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 24, 2022 at 11:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > I think we need to make different priority\n> > queues based on different factors, for example 1 queue for wraparound\n> > risk and another for bloat risk.\n>\n> I don't see why we want multiple queues. We have to answer the\n> question \"what should we do next?\" which requires us, in some way, to\n> funnel everything into a single prioritization.\n\nI was thinking along the same lines as Dilip: If the anti-wraparound\nrisk is really far in the future, there might not be much eligible\nfreezing work to do. Dead tuples can be removed as soon as visibility\nrules allow it. With a separate bloat queue, there might always be\nsome work to do. Maybe \"bloat queue\" is too specific, because\ninsert-only tables can use more vacuuming for the VM even if they have\nnot reached the configured threshold.\n\nSo a worker would check the wraparound queue, and if nothing's there\ngrab something from the other queue. Maybe low-priority work would\nhave a low cost limit.\n\nProbably the true best way to schedule, at least at first, is\nwhat's the least complex. I'm not yet sure what that is...\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jan 2022 15:34:34 -0500",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 3:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> For example, a\n> page that has 5 dead heap-only tuples is vastly different to a similar\n> page that has 5 LP_DEAD items instead -- and yet our current approach\n> makes no distinction. Chances are very high that if the only dead\n> tuples are heap-only tuples, then things are going just fine on that\n> page -- opportunistic pruning is actually keeping up.\n\nHmm, I think that's a really good insight. Perhaps we ought to forget\nabout counting updates and deletes and instead count dead line\npointers. Or maybe we can estimate the number of dead line pointers by\nknowing how many updates and deletes there were, as long as we can\ndistinguish hot updates from non-HOT updates, which I think we can.\n\n> if successive ANALYZE operations notice\n> a consistent pattern where pages that had a non-zero number of LP_DEAD\n> items last time now have a significantly higher number, then it's a\n> good idea to err in the direction of more aggressive vacuuming.\n> *Growing* concentrations of LP_DEAD items signal chaos. I think that\n> placing a particular emphasis on pages with non-zero LP_DEAD items as\n> a qualitatively distinct category of page might well make sense --\n> relatively few blocks with a growing number of LP_DEAD items seems\n> like it should be enough to make autovacuum run aggressively.\n\nI think measuring the change over time here might be fraught with\nperil. If vacuum makes a single pass over the indexes, it can retire\nas many dead line pointers as we have, or as will fit in memory, and\nthe effort doesn't really depend too much on exactly how many dead\nline pointers we're trying to find. (I hear that it does depend more\nthan you'd think ... but I still don't think that should be the\ndominant consideration here.) So to be efficient, we want to do that\npass over the indexes when we have a suitably large batch of dead line\npointers. 
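(A toy calculation of the amortization at work here; the cost unit and
constant are invented.)

```python
# Illustrative arithmetic only: one pass over all the indexes costs roughly
# the same however many dead TIDs it retires, so the cost per retired TID
# falls as the batch grows. The cost unit and constant are invented.
INDEX_PASS_COST = 1_000_000  # fixed cost of a single scan of the indexes

def cost_per_dead_tid(batch_size):
    return INDEX_PASS_COST / batch_size

small_batch = cost_per_dead_tid(10_000)     # 100.0 cost units per TID
large_batch = cost_per_dead_tid(1_000_000)  # 1.0 cost unit per TID
```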
I don't think it really depends on how long it took the\nbatch to get to that size. I don't want to vacuum a terabyte of\nindexes with a much-smaller-than-normal batch of dead TIDs just\nbecause the number of dead TIDs seems to be increasing quickly at the\nmoment: it's hard to imagine that the results will be worth the\nresources I'll have to expend to get there. On the other hand I also\ndon't think I want to postpone vacuuming the indexes because the\nnumber is really big but not growing that fast.\n\nI feel like my threshold for the number of dead TIDs that ought to\ntrigger a vacuum grows as the table gets bigger, capped by how much\nmemory I've got. But I don't feel like the rate at which it's changing\nnecessarily matters. Like if I create a million dead line pointers\nreally quickly, wait a month, and then create another million dead\nline pointers, I feel like I want the system to respond just as\naggressively as if the month-long delay were omitted.\n\nMaybe my feelings are wrong here. I'm just saying that, to me, it\ndoesn't feel like the rate of change is all that relevant.\n\n> Even busy production DBs should usually only be vacuuming one large\n> table at a time. Also might make sense to strategically align the work\n> with the beginning of a new checkpoint.\n\nI'm not sure that either of those statements is correct. But on the\nother hand, I am also not sure that either of those statements is\nincorrect.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 Jan 2022 13:54:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 3:34 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I was thinking along the same lines as Dilip: If the anti-wraparound\n> risk is really far in the future, there might not be much eligible\n> freezing work to do. Dead tuples can be removed as soon as visibility\n> rules allow it. With a separate bloat queue, there might always be\n> some work to do.\n\nIsn't the same thing true of bloat, though? If the XID threshold\nhasn't advanced that much, then there may be nothing that's worth\ndoing about XID wraparound in the short term. If there aren't many\ndead tuples in any of the tables, then there may be nothing that's\nworth doing about bloat. Then we should just do nothing. On the other\nhand, we may have a relatively urgent problem in one of those areas\nbut not the other. Then we should work on that one. Or we may have\nproblems in both areas, and then we need to somehow decide which one\nis more urgent -- that's the situation in which I feel like we need to\nunify the prioritization or ordering in some way.\n\nIt is an interesting point that we could have low priority work with a\nlow cost limit and high priority work with a higher cost limit, or as\nI think Peter suggested, just use a single process for low-priority\nstuff but allow multiple processes when there's high-priority stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 Jan 2022 15:17:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "> On Wed, Jan 26, 2022 at 10:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jan 25, 2022 at 3:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > For example, a\n> > page that has 5 dead heap-only tuples is vastly different to a similar\n> > page that has 5 LP_DEAD items instead -- and yet our current approach\n> > makes no distinction. Chances are very high that if the only dead\n> > tuples are heap-only tuples, then things are going just fine on that\n> > page -- opportunistic pruning is actually keeping up.\n>\n> Hmm, I think that's a really good insight. Perhaps we ought to forget\n> about counting updates and deletes and instead count dead line\n> pointers. Or maybe we can estimate the number of dead line pointers by\n> knowing how many updates and deletes there were, as long as we can\n> distinguish hot updates from non-HOT updates, which I think we can.\n\nAll that we have to go on is a bunch of observations in any case,\nthough -- the map is not the territory. And so it seems to me that the\nsensible thing to do is just embrace that we won't ever really exactly\nknow what's going on in a given database, at any given time.\nFortunately, we don't really have to know. We should be able to get\naway with only having roughly the right idea, by focussing on the few\nthings that we are sure of -- things like the difference between\nLP_DEAD items and dead heap-only tuples, which are invariant to\nworkload characteristics.\n\nI recently said (on the ANALYZE related thread) that we should be\nthinking probabilistically here [1]. Our understanding of the amount\nof bloat could very usefully be framed that way. 
Maybe the model we\nuse is a probability density function (maybe not formally, not sure).\nA PDF has an exact expectation, which for us might be the most\nprobable number of dead tuples in total in a given table right now\n(let's just assume it's still dead tuples, ignoring the problems with\nthat metric for now).\n\nThis is a useful basis for making better decisions by weighing\ncompeting considerations -- which might themselves be another PDF.\nExample: For a given table that is approaching the point where the\nmodel says \"time to VACUUM\", we may very well spend hours, days, or\neven weeks approaching the crossover point. The exact expectation\nisn't truly special here -- there is actually zero practical reason to\nhave special reverence for that precise point (with a good model,\nwithin certain reasonable bounds). If our model says that there is\nonly a noise-level difference between doing a VACUUM on a given table\ntoday, tomorrow, or next week, why not take advantage? For example,\nwhy not do the VACUUM when the system appears to not be busy at all\n(typically in the dead of night), just because it'll definitely be\nboth cheaper in absolute terms (FPIs can be avoided by spreading\nthings out over multiple checkpoints), and less disruptive?\n\nThere are many opportunities like that, I believe. It's hard for me to\nsuppress the urge to blurt out 17 more ideas like that. What are the\nchances that you won't have at least a few real winners among all of\nthe ideas that everybody will come up with, in the end?\n\n> > if successive ANALYZE operations notice\n> > a consistent pattern where pages that had a non-zero number of LP_DEAD\n> > items last time now have a significantly higher number, then it's a\n> > good idea to err in the direction of more aggressive vacuuming.\n> > *Growing* concentrations of LP_DEAD items signal chaos. 
I think that\n> > placing a particular emphasis on pages with non-zero LP_DEAD items as\n> > a qualitatively distinct category of page might well make sense --\n> > relatively few blocks with a growing number of LP_DEAD items seems\n> > like it should be enough to make autovacuum run aggressively.\n>\n> I think measuring the change over time here might be fraught with\n> peril.\n\nI'd say that that depends on how you define the problem we're trying\nto solve. If you define the problem as coming up with a significantly\nimproved statistical model that determines (say) how many dead tuples\nthere are in the table right now, given a set of observations made by\nANALYZE in the past, then yes, it's fraught with peril. But why would\nyou define it that way? It seems far easier to improve things by\nputting model error and *actual* exposure to real known issues (e.g.\nline pointer bloat) front and center.\n\nIt doesn't necessarily matter if we're *usually* wrong with a good\nmodel. But with a bad model we may need to consistently get the\ncorrect answer. And so the model that is the most accurate\nquantitatively is probably *not* the best available model, all things\nconsidered. Most of the time we shouldn't VACUUM right this second,\nand so a model that consists of \"return false\" is very frequently\ncorrect. But that doesn't mean it's a good model. You get the idea.\n\n> If vacuum makes a single pass over the indexes, it can retire\n> as many dead line pointers as we have, or as will fit in memory, and\n> the effort doesn't really depend too much on exactly how many dead\n> line pointers we're trying to find.\n\nLine pointer bloat is something that displays hysteresis; once it\nhappens (past some significant threshold) then there is no reversing\nthe damage. This makes the behavior very non-linear. 
In other words,\nit makes it incredibly hard to model mathematically [2] -- once you\ncross a certain hard to define threshold, it's total chaos, even in a\nclosed well-specified system (i.e. a highly constrained workload),\nbecause you have all these feedback loops.\n\nOn top of all that, even with a perfect model we're still forced to\nmake a go/no-go decision for the entire table, moment to moment. So\neven a mythical perfect model runs into the problem that it is\nsimultaneously much too early and much too late at the level of the\ntable. Which is even more reason to just focus on not going totally\noff the rails, in any particular direction. Note that this includes\ngoing off the rails by vacuuming in a way that's unsustainably\naggressive -- sometimes you have to cut your losses at that level as\nwell.\n\nThere is usually some bigger picture to consider when things do go\nwrong -- there is usually some much worse fate that must be avoided.\nLike with VACUUM's failsafe. Sure, controlling index bloat is\nextremely important. But it's also much less important than keeping\nthe system online and responsive. That's another level up. 
(The level\nup *after that* is \"at least we didn't lose data\", or maybe something\nabout limiting the amount of downtime, not going out of business,\nwhatever.)\n\n> I feel like my threshold for the number of dead TIDs that ought to\n> trigger a vacuum grows as the table gets bigger, capped by how much\n> memory I've got.\n\nI thought of another PDF related idea when I read this, without even\ntrying: we could account for the discontinuity from multiple index\nscans in a single VACUUM operation (instead of just one) by erring in\nthe direction of doing the VACUUM sooner rather than later, when the\nmodel says that doing so will make very little difference in terms of\nextra costs incurred (extra costs from vacuuming sooner rather than\nlater, conservatively assuming that our concern about TIDs not fitting\nin memory is basically unfounded).\n\n> But I don't feel like the rate at which it's changing\n> necessarily matters. Like if I create a million dead line pointers\n> really quickly, wait a month, and then create another million dead\n> line pointers, I feel like I want the system to respond just as\n> aggressively as if the month-long delay were omitted.\n>\n> Maybe my feelings are wrong here. I'm just saying that, to me, it\n> doesn't feel like the rate of change is all that relevant.\n\nIt's not that they're wrong, exactly -- I wouldn't say that. It's more\nlike this: you as a Postgres user actually care about a great many\nthings, not just one thing. Some of these things might be somewhat in\ntension, from time to time. And so it seems wise to find a way to live\nwith any tension that may crop up -- by acknowledging the tension, we\nget the chance to honor the preferences of the user to the greatest\nextent possible.\n\n[1] https://postgr.es/m/CAH2-WzmvXXEKtEph7U360umZ5pN3d18RBfu=nyPg9neBLDUWdw@mail.gmail.com\n[2] https://en.wikipedia.org/wiki/Hysteretic_model\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 26 Jan 2022 14:19:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Thu, 20 Jan 2022 at 14:31, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> In my view, previous efforts in this area have been too simplistic.\n>\n\nOne thing I've been wanting to do something about is I think\nautovacuum needs to be a little cleverer about when *not* to vacuum a\ntable because it won't do any good.\n\nI've seen a lot of cases where autovacuum kicks off a vacuum of a\ntable even though the globalxmin hasn't really advanced significantly\nover the oldest frozen xid. When it's a large table this really hurts\nbecause it could be hours or days before it finishes and at that point\nthere's quite a bit of bloat.\n\nThis isn't a common occurrence, it happens when the system is broken\nin some way. Either there's an idle-in-transaction session or\nsomething else keeping the global xmin held back.\n\nWhat it does though is make things *much* worse and *much* harder for\na non-expert to hit on the right remediation. It's easy enough to tell\nthem to look for these idle-in-transaction sessions or set timeouts.\nIt's much harder to determine whether it's a good idea for them to go\nand kill the vacuum that's been running for days. And it's not a great\nthing for people to be getting in the habit of doing either.\n\nI want to be able to stop telling people to kill vacuums kicked off by\nautovacuum. I feel like it's a bad thing for someone to ever have to\ndo and I know some fraction of the time I'm telling them to do it\nit'll have been a terrible thing to have done (but we'll never know\nwhich times those were). Determining whether a running vacuum is\nactually doing any good is pretty hard and on older versions probably\nimpossible.\n\nI was thinking of just putting a check in before kicking off a vacuum\nand if the globalxmin is a significant fraction of the distance to the\nrelfrozenxid then instead log a warning. 
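A minimal sketch of that check (the 10% cutoff is invented, and plain
integers stand in for wraparound-aware XID arithmetic):

```python
# Sketch only: if globalxmin has advanced very little past relfrozenxid,
# a vacuum cannot freeze much, so log a warning instead of starting one.
# The 10% cutoff is invented; real code needs wraparound-aware XID math.
def vacuum_worth_starting(relfrozenxid, global_xmin, next_xid,
                          min_fraction=0.10):
    reclaimable = global_xmin - relfrozenxid  # span a vacuum could freeze
    total_age = next_xid - relfrozenxid       # how far behind the table is
    return total_age > 0 and reclaimable >= min_fraction * total_age

# globalxmin held back just past relfrozenxid (idle-in-transaction etc.):
held_back = vacuum_worth_starting(1_000, 1_050, 100_000)  # False: warn
# globalxmin well advanced: the vacuum will actually accomplish something.
advanced = vacuum_worth_starting(1_000, 60_000, 100_000)  # True
```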
Basically it means \"we can't\nkeep the bloat below the threshold due to the idle transactions et al,\nnot because there's insufficient i/o bandwidth\".\n\nAt the same time it would be nice if autovacuum could recognize when\nthe i/o bandwidth is insufficient. If it finishes a vacuum it could\nrecheck whether the table is eligible for vacuuming and log that it's\nunable to keep up with the vacuuming requirements -- but right now\nthat would be a lie much of the time when it's not a lack of bandwidth\npreventing it from keeping up.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 26 Jan 2022 18:46:00 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 3:46 PM Greg Stark <stark@mit.edu> wrote:\n> One thing I've been wanting to do something about is I think\n> autovacuum needs to be a little cleverer about when *not* to vacuum a\n> table because it won't do any good.\n\nThere was a thread about this exact thing not too long ago:\n\nhttps://postgr.es/m/CAH2-Wzmx6+PrfpmmFw8JZbxD+kkwhQWPOhE5RUBy6S4_Jwty=Q@mail.gmail.com\n\nIf everything goes according to plan, then Postgres 15 will have my\nwork on freezing and dynamically advancing relfrozenxid. Meaning that\nyou'll be able to see (in autovacuum log output and in VACUUM VERBOSE\noutput) how much relfrozenxid has been advanced by, if at all. You'll\nalso directly see how far behind the VACUUM operation's OldestXmin\nthat is (and how far behind the OldestXmin is at the end of the VACUUM\noperation).\n\nIt seems as if this offers you exactly what you need. You'll be able\nto notice the inherent futility of an anti-wraparound VACUUM that runs\nagainst a table whose relfrozenxid is already exactly equal to the\nVACUUM's OldestXmin (say because of a leaked replication slot --\nanything that makes vacuuming fundamentally unable to advance\nrelfrozenxid, really).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 26 Jan 2022 15:54:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Wed, 26 Jan 2022 at 18:46, Greg Stark <stark@mit.edu> wrote:\n>\n> On Thu, 20 Jan 2022 at 14:31, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > In my view, previous efforts in this area have been too simplistic.\n> >\n>\n> One thing I've been wanting to do something about is I think\n> autovacuum needs to be a little cleverer about when *not* to vacuum a\n> table because it won't do any good.\n>\n> I've seen a lot of cases where autovacuum kicks off a vacuum of a\n> table even though the globalxmin hasn't really advanced significantly\n> over the oldest frozen xid. When it's a large table this really hurts\n> because it could be hours or days before it finishes and at that point\n> there's quite a bit of bloat.\n\n\nAnother case I would like to see autovacuum get clever about is when\nthere is a wide disparity in the size of tables. If you have a few\nlarge tables and a few small tables there could be enough bandwidth\nfor everyone but you can get in trouble if the workers are all tied up\nvacuuming the large tables.\n\nThis is a case where autovacuum scheduling can create a problem where\nthere shouldn't be one. It often happens when you have a set of large\ntables that were all loaded with data around the same time and you\nhave your busy tables that are well designed small tables receiving\nlots of updates. They can happily be getting vacuumed every 15-30min\nand finishing promptly maintaining a nice steady state until one day\nall the large tables suddenly hit the freeze threshold and suddenly\nall your workers are busy vacuuming huge tables that take hours or\ndays to vacuum and your small tables bloat by orders of magnitude.\n\nI was thinking of dividing the eligible tables up into ntiles based on\nsize and then making sure one worker was responsible for each ntile.\nI'm not sure that would actually be quite right though.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 26 Jan 2022 18:56:07 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 6:56 PM Greg Stark <stark@mit.edu> wrote:\n>\n> On Wed, 26 Jan 2022 at 18:46, Greg Stark <stark@mit.edu> wrote:\n> >\n> > On Thu, 20 Jan 2022 at 14:31, Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > In my view, previous efforts in this area have been too simplistic.\n> > >\n> >\n> > One thing I've been wanting to do something about is I think\n> > autovacuum needs to be a little cleverer about when *not* to vacuum a\n> > table because it won't do any good.\n> >\n> > I've seen a lot of cases where autovacuum kicks off a vacuum of a\n> > table even though the globalxmin hasn't really advanced significantly\n> > over the oldest frozen xid. When it's a large table this really hurts\n> > because it could be hours or days before it finishes and at that point\n> > there's quite a bit of bloat.\n>\n>\n> Another case I would like to see autovacuum get clever about is when\n> there is a wide disparity in the size of tables. If you have a few\n> large tables and a few small tables there could be enough bandwidth\n> for everyone but you can get in trouble if the workers are all tied up\n> vacuuming the large tables.\n>\n> This is a case where autovacuum scheduling can create a problem where\n> there shouldn't be one. It often happens when you have a set of large\n> tables that were all loaded with data around the same time and you\n> have your busy tables that are well designed small tables receiving\n> lots of updates. 
They can happily be getting vacuumed every 15-30min\n> and finishing promptly maintaining a nice steady state until one day\n> all the large tables suddenly hit the freeze threshold and suddenly\n> all your workers are busy vacuuming huge tables that take hours or\n> days to vacuum and your small tables bloat by orders of magnitude.\n>\n> I was thinking of dividing the eligible tables up into ntiles based on\n> size and then making sure one worker was responsible for each ntile.\n> I'm not sure that would actually be quite right though.\n>\n\nI've been working off and on with some external vacuum scheduling tools\nthe past year or so, and one thing that seems to be an issue is a lack of\nobservability into the various cost delay/limit mechanisms, like how\nmuch does a vacuum contribute towards the limit or how much was it\ndelayed during a given run. One theory was if we are seeing a lot of\nslowdown due to cost limiting, we should more heavily weight smaller\ntables in our priority list for which tables to vacuum vs larger\ntables which we expect to exacerbate the situation.\n\nI've also thought it'd be nice for users to have an easy way to\nguesstimate % of frozen tables (like live vs dead tuples in\npg_stat_all_tables), but this seems difficult to maintain accurately.\nHad a similar thing with tracking clock time of vacuums; just keeping\nthe duration of the last vacuum ended up being insufficient for some\ncases, so we ended up tracking it historically... we haven't quite yet\ndesigned a pg_stat_vacuums a la pg_stat_statements, but it has crossed\nour minds.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Tue, 1 Feb 2022 12:35:41 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum prioritization"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 6:46 PM Greg Stark <stark@mit.edu> wrote:\n> One thing I've been wanting to do something about is I think\n> autovacuum needs to be a little cleverer about when *not* to vacuum a\n> table because it won't do any good.\n\nI agree.\n\n> I was thinking of just putting a check in before kicking off a vacuum\n> and if the globalxmin is a significant fraction of the distance to the\n> relfrozenxid then instead log a warning. Basically it means \"we can't\n> keep the bloat below the threshold due to the idle transactions et al,\n> not because there's insufficient i/o bandwidth\".\n\nUnfortunately, XID distances don't tell us much, because the tuples\nneed not be uniformly distributed across the XID space. In fact, it\nseems highly likely that they will be very non-uniformly distributed,\nwith a few transactions having created a lot of dead tuples and most\nhaving created none. Therefore, it's pretty plausible that a vacuum\nthat permits relfrozenxid++ could solve every problem we have. If we\nknew something about the distribution of dead XIDs in the table, then\nwe could make an intelligent judgement about whether vacuuming would\nbe useful. But otherwise I feel like we're just guessing, so instead\nof really fixing the problem we'll just be making it happen in a set\nof cases that's even harder to grasp.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Feb 2022 11:32:55 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum prioritization"
}
] |
[
{
"msg_contents": "Greetings -hackers,\n\nOur beloved Google Summer of Code is back for 2022! They have once again \nchanged some of how GSoC is going to work for this year, for a variety \nof reasons, so please be sure to read this email and posts linked for \nthe updates if you're interested! In short, this year both medium and \nlarge sized projects can be proposed, with more flexibility on end dates.\n\nEveryone interested in suggesting projects or mentoring should review \nthe blog post here regarding the changes:\n\nhttps://opensource.googleblog.com/2021/11/expanding-google-summer-of-code-in-2022.html\n\nNow is the time to work on getting together a set of projects we'd like \nto have GSoC students work on over the summer. Similar to last year, we \nneed to have a good set of projects for students to choose from in \nadvance of the deadline for mentoring organizations.\n\nHOWEVER, as noted in the blog post above, project length expectations \nhave changed. Please decide accordingly based on your requirements and \navailability! Also, there is going to be only one intermediate \nevaluation, similarly to last year.\n\nGSoC timeline: https://developers.google.com/open-source/gsoc/timeline\n\nOne other thing to note is that anyone over the age of 18 will be \neligible in 2022 in addition to students, broadening the pool of \npotential applicants and changing the terminology of applicants to \n\"contributors\".\n\nThe deadline for Mentoring organizations to apply is: February 21. The \nlist of accepted organization will be published around March 7.\n\nUnsurprisingly, we'll need to have an Ideas page again, so I've gone \nahead and created one (copying last year's):\n\nhttps://wiki.postgresql.org/wiki/GSoC_2022\n\nGoogle discusses what makes a good \"Ideas\" list here:\n\nhttps://google.github.io/gsocguides/mentor/defining-a-project-ideas-list.html\n\nAll the entries are marked with '2021' to indicate they were pulled from \nlast year. 
If the project from last year is still relevant, please \nupdate it to be '2022' and make sure to update all of the information \n(in particular, make sure to list yourself as a mentor and remove the \nother mentors, as appropriate). Please also be sure to update the \nproject's scope to be appropriate for the new guidelines.\n\nNew entries are certainly welcome and encouraged, just be sure to note \nthem as '2022' when you add them. Projects from last year which were \nworked on but have significant follow-on work to be completed are \nabsolutely welcome as well - simply update the description appropriately \nand mark it as being for '2022'.\n\nWhen we get closer to actually submitting our application, I'll clean \nout the '2021' entries that didn't get any updates. Also - if there are \nany projects that are no longer appropriate (maybe they were completed, \nfor example and no longer need work), please feel free to remove them. \nThe page is still work in progress, so it's entirely possible I missed \nsome updates where a GSoC project was completed independently of GSoC \n(and if I removed any that shouldn't have been - feel free to add them \nback by copying from the 2021 page).\n\nAs a reminder, each idea on the page should be in the format that the \nother entries are in and should include:\n\n- Project title/one-line description\n- Brief, 2-5 sentence, description of the project\n- Description of programming skills needed and estimation of the \ndifficulty level\n- Project size\n- List of potential mentors\n- Expected Outcomes\n\nAs with last year, please consider PostgreSQL to be an \"Umbrella\" \nproject and that anything which would be considered \"PostgreSQL Family\" \nper the News/Announce policy [1] is likely to be acceptable as a \nPostgreSQL GSoC project.\n\nIn other words, if you're a contributor or developer on WAL-G, barman, \npgBackRest, the PostgreSQL website (pgweb), the PgEU/PgUS website code \n(pgeu-system), pgAdmin4, pgbouncer, pldebugger, 
the PG RPMs (pgrpms), \nthe JDBC driver, the ODBC driver, or any of the many other PG Family \nprojects, please feel free to add a project for consideration! If we get \nquite a few, we can organize the page further based on which project or \nmaybe what skills are needed or similar.\n\nLet's have another great year of GSoC with PostgreSQL!\n\nThanks!\n\nIlaria & Stephen\n\n[1]: https://www.postgresql.org/about/policies/news-and-events/\n\n\n",
"msg_date": "Thu, 20 Jan 2022 20:33:28 +0100",
"msg_from": "Ilaria Battiston <ilaria.battiston@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC 2022"
},
{
"msg_contents": "On Thu, 20 Jan 2022 at 14:32, Ilaria Battiston <ilaria.battiston@gmail.com>\nwrote:\n\n> Greetings -hackers,\n>\n> Our beloved Google Summer of Code is back for 2022! They have once again\n> changed some of how GSoC is going to work for this year, for a variety\n> of reasons, so please be sure to read this email and posts linked for\n> the updates if you're interested! In short, this year both medium and\n> large sized projects can be proposed, with more flexibility on end dates.\n>\n> Everyone interested in suggesting projects or mentoring should review\n> the blog post here regarding the changes:\n>\n>\n> https://opensource.googleblog.com/2021/11/expanding-google-summer-of-code-in-2022.html\n>\n> Now is the time to work on getting together a set of projects we'd like\n> to have GSoC students work on over the summer. Similar to last year, we\n> need to have a good set of projects for students to choose from in\n> advance of the deadline for mentoring organizations.\n>\n> HOWEVER, as noted in the blog post above, project length expectations\n> have changed. Please decide accordingly based on your requirements and\n> availability! Also, there is going to be only one intermediate\n> evaluation, similarly to last year.\n>\n> GSoC timeline: https://developers.google.com/open-source/gsoc/timeline\n>\n> One other thing to note is that anyone over the age of 18 will be\n> eligible in 2022 in addition to students, broadening the pool of\n> potential applicants and changing the terminology of applicants to\n> \"contributors\".\n>\n> The deadline for Mentoring organizations to apply is: February 21. 
The\n> list of accepted organization will be published around March 7.\n>\n> Unsurprisingly, we'll need to have an Ideas page again, so I've gone\n> ahead and created one (copying last year's):\n>\n> https://wiki.postgresql.org/wiki/GSoC_2022\n>\n> Google discusses what makes a good \"Ideas\" list here:\n>\n>\n> https://google.github.io/gsocguides/mentor/defining-a-project-ideas-list.html\n>\n> All the entries are marked with '2021' to indicate they were pulled from\n> last year. If the project from last year is still relevant, please\n> update it to be '2022' and make sure to update all of the information\n> (in particular, make sure to list yourself as a mentor and remove the\n> other mentors, as appropriate). Please also be sure to update the\n> project's scope to be appropriate for the new guidelines.\n>\n> New entries are certainly welcome and encouraged, just be sure to note\n> them as '2022' when you add them. Projects from last year which were\n> worked on but have significant follow-on work to be completed are\n> absolutely welcome as well - simply update the description appropriately\n> and mark it as being for '2022'.\n>\n> When we get closer to actually submitting our application, I'll clean\n> out the '2021' entries that didn't get any updates. 
Also - if there are\n> any projects that are no longer appropriate (maybe they were completed,\n> for example and no longer need work), please feel free to remove them.\n> The page is still work in progress, so it's entirely possible I missed\n> some updates where a GSoC project was completed independently of GSoC\n> (and if I removed any that shouldn't have been - feel free to add them\n> back by copying from the 2021 page).\n>\n> As a reminder, each idea on the page should be in the format that the\n> other entries are in and should include:\n>\n> - Project title/one-line description\n> - Brief, 2-5 sentence, description of the project\n> - Description of programming skills needed and estimation of the\n> difficulty level\n> - Project size\n> - List of potential mentors\n> - Expected Outcomes\n>\n> As with last year, please consider PostgreSQL to be an \"Umbrella\"\n> project and that anything which would be considered \"PostgreSQL Family\"\n> per the News/Announce policy [1] is likely to be acceptable as a\n> PostgreSQL GSoC project.\n>\n> In other words, if you're a contributor or developer on WAL-G, barman,\n> pgBackRest, the PostgreSQL website (pgweb), the PgEU/PgUS website code\n> (pgeu-system), pgAdmin4, pgbouncer, pldebugger, the PG RPMs (pgrpms),\n> the JDBC driver, the ODBC driver, or any of the many other PG Family\n> projects, please feel free to add a project for consideration! If we get\n> quite a few, we can organize the page further based on which project or\n> maybe what skills are needed or similar.\n>\n> Let's have another great year of GSoC with PostgreSQL!\n>\n> Thanks!\n>\n> Ilaria & Stephen\n>\n> [1]: https://www.postgresql.org/about/policies/news-and-events/\n\n\nI've added a project to improve the JDBC website\n\nThanks,\n\nDave\n\nOn Thu, 20 Jan 2022 at 14:32, Ilaria Battiston <ilaria.battiston@gmail.com> wrote:Greetings -hackers,\n\nOur beloved Google Summer of Code is back for 2022! 
They have once again \nchanged some of how GSoC is going to work for this year, for a variety \nof reasons, so please be sure to read this email and posts linked for \nthe updates if you're interested! In short, this year both medium and \nlarge sized projects can be proposed, with more flexibility on end dates.\n\nEveryone interested in suggesting projects or mentoring should review \nthe blog post here regarding the changes:\n\nhttps://opensource.googleblog.com/2021/11/expanding-google-summer-of-code-in-2022.html\n\nNow is the time to work on getting together a set of projects we'd like \nto have GSoC students work on over the summer. Similar to last year, we \nneed to have a good set of projects for students to choose from in \nadvance of the deadline for mentoring organizations.\n\nHOWEVER, as noted in the blog post above, project length expectations \nhave changed. Please decide accordingly based on your requirements and \navailability! Also, there is going to be only one intermediate \nevaluation, similarly to last year.\n\nGSoC timeline: https://developers.google.com/open-source/gsoc/timeline\n\nOne other thing to note is that anyone over the age of 18 will be \neligible in 2022 in addition to students, broadening the pool of \npotential applicants and changing the terminology of applicants to \n\"contributors\".\n\nThe deadline for Mentoring organizations to apply is: February 21. The \nlist of accepted organization will be published around March 7.\n\nUnsurprisingly, we'll need to have an Ideas page again, so I've gone \nahead and created one (copying last year's):\n\nhttps://wiki.postgresql.org/wiki/GSoC_2022\n\nGoogle discusses what makes a good \"Ideas\" list here:\n\nhttps://google.github.io/gsocguides/mentor/defining-a-project-ideas-list.html\n\nAll the entries are marked with '2021' to indicate they were pulled from \nlast year. 
If the project from last year is still relevant, please \nupdate it to be '2022' and make sure to update all of the information \n(in particular, make sure to list yourself as a mentor and remove the \nother mentors, as appropriate). Please also be sure to update the \nproject's scope to be appropriate for the new guidelines.\n\nNew entries are certainly welcome and encouraged, just be sure to note \nthem as '2022' when you add them. Projects from last year which were \nworked on but have significant follow-on work to be completed are \nabsolutely welcome as well - simply update the description appropriately \nand mark it as being for '2022'.\n\nWhen we get closer to actually submitting our application, I'll clean \nout the '2021' entries that didn't get any updates. Also - if there are \nany projects that are no longer appropriate (maybe they were completed, \nfor example and no longer need work), please feel free to remove them. \nThe page is still work in progress, so it's entirely possible I missed \nsome updates where a GSoC project was completed independently of GSoC \n(and if I removed any that shouldn't have been - feel free to add them \nback by copying from the 2021 page).\n\nAs a reminder, each idea on the page should be in the format that the \nother entries are in and should include:\n\n- Project title/one-line description\n- Brief, 2-5 sentence, description of the project\n- Description of programming skills needed and estimation of the \ndifficulty level\n- Project size\n- List of potential mentors\n- Expected Outcomes\n\nAs with last year, please consider PostgreSQL to be an \"Umbrella\" \nproject and that anything which would be considered \"PostgreSQL Family\" \nper the News/Announce policy [1] is likely to be acceptable as a \nPostgreSQL GSoC project.\n\nIn other words, if you're a contributor or developer on WAL-G, barman, \npgBackRest, the PostgreSQL website (pgweb), the PgEU/PgUS website code \n(pgeu-system), pgAdmin4, pgbouncer, pldebugger, 
the PG RPMs (pgrpms), \nthe JDBC driver, the ODBC driver, or any of the many other PG Family \nprojects, please feel free to add a project for consideration! If we get \nquite a few, we can organize the page further based on which project or \nmaybe what skills are needed or similar.\n\nLet's have another great year of GSoC with PostgreSQL!\n\nThanks!\n\nIlaria & Stephen\n\n[1]: https://www.postgresql.org/about/policies/news-and-events/I've added a project to improve the JDBC websiteThanks,Dave",
"msg_date": "Thu, 20 Jan 2022 16:48:19 -0500",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2022"
},
{
"msg_contents": "Hi,\n\nOn 01/20/22 14:33, Ilaria Battiston wrote:\n> Unsurprisingly, we'll need to have an Ideas page again, so I've gone ahead\n> and created one (copying last year's):\n> \n> https://wiki.postgresql.org/wiki/GSoC_2022\n\nI've added a project idea about the ongoing PL/Java refactoring.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sun, 6 Feb 2022 17:33:38 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: GSoC 2022"
}
] |
[
{
"msg_contents": "Hi folks,\n\nWe are struggling to figure out what is going on. We are migrating from PostgreSQL 9.6 to PostgreSQL 13 w/ PostGIS. Our 9.6 version was compiled from source and the new version (13) was installed using Yum. BTW, the new version is on a VM that has 16GB of memory, two cores, and 500 GB of disk. In addition, we are using MapServer as our mapping engine and OpenLayers as the client side interface. Once we switch over to the new version of PostgreSQL, the performance takes a big nose dive. We have being tweaking and tuning the database and it appears to be happy but the response times from mapfile requests are 3 -7 seconds. Previously, the response time was below a second.\n\nAnother point is that we populated the new database from the old (9.6), using pg_dump. Could this be causing issues? Should we load the data from scratch? We use ogr2ogr (GDAL) to help assist with loading of spatial data. Anyway, not really sure what the problem is.\n\nLastly, why am I seeing so many requests as to the PostGIS version. It appears that every map request sends the following query \"SELECT PostGIS_Version();\", which in turn takes up a connection.\n\nAny suggestions would be greatly appreciated?\n\n __:)\n _ \\<,_\n (*)/ (*)\nJames Lugosi\nClackamas County GISP\nIS Software Specialist, Senior\n121 Library Court, Oregon City OR 97045\n503-723-4829\n\n\n\n\n\n\n\n\n\n\nHi folks,\n \nWe are struggling to figure out what is going on. We are migrating from PostgreSQL 9.6 to PostgreSQL 13 w/ PostGIS. Our 9.6 version was compiled from source and the new version (13) was installed using Yum. BTW, the new version is on\n a VM that has 16GB of memory, two cores, and 500 GB of disk. In addition, we are using MapServer as our mapping engine and OpenLayers as the client side interface. Once we switch over to the new version of PostgreSQL, the performance takes a big nose dive. 
\n We have being tweaking and tuning the database and it appears to be happy but the response times from mapfile requests are 3 -7 seconds. Previously, the response time was below a second.\n \nAnother point is that we populated the new database from the old (9.6), using pg_dump. Could this be causing issues? Should we load the data from scratch? We use ogr2ogr (GDAL) to help assist with loading of spatial data. Anyway, not\n really sure what the problem is.\n \nLastly, why am I seeing so many requests as to the PostGIS version. It appears that every map request sends the following query “SELECT PostGIS_Version();”, which in turn takes up a connection.\n \nAny suggestions would be greatly appreciated? \n \n __J\n _ \\<,_\n (*)/ (*)\n\nJames Lugosi\nClackamas County GISP\nIS Software Specialist, Senior\n121 Library Court, Oregon City OR 97045\n503-723-4829",
"msg_date": "Thu, 20 Jan 2022 22:31:15 +0000",
"msg_from": "\"Lugosi, Jim\" <JimLug@clackamas.us>",
"msg_from_op": true,
"msg_subject": "Poor performance PostgreSQL13/PostGIS 3.x"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 10:31:15PM +0000, Lugosi, Jim wrote:\n> We are struggling to figure out what is going on. We are migrating from PostgreSQL 9.6 to PostgreSQL 13 w/ PostGIS. Our 9.6 version was compiled from source and the new version (13) was installed using Yum. BTW, the new version is on a VM that has 16GB of memory, two cores, and 500 GB of disk. In addition, we are using MapServer as our mapping engine and OpenLayers as the client side interface. Once we switch over to the new version of PostgreSQL, the performance takes a big nose dive. We have being tweaking and tuning the database and it appears to be happy but the response times from mapfile requests are 3 -7 seconds. Previously, the response time was below a second.\n> \n> Another point is that we populated the new database from the old (9.6), using pg_dump. Could this be causing issues? Should we load the data from scratch? We use ogr2ogr (GDAL) to help assist with loading of spatial data. Anyway, not really sure what the problem is.\n> \n> Lastly, why am I seeing so many requests as to the PostGIS version. It appears that every map request sends the following query \"SELECT PostGIS_Version();\", which in turn takes up a connection.\n\nThis list is for postgres development and bug reports.\n\nI suggest to post here, instead:\nhttps://www.postgresql.org/list/pgsql-performance/\n\nHere's a list of needed info:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nIt's important to include the slow query itself.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 20 Jan 2022 16:43:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance PostgreSQL13/PostGIS 3.x"
}
] |
[
{
"msg_contents": "Respected sir\\mamI am Vrund V Shah, a computer science undergrad. I have just completed my 3rd semester at G H Patel College of Engineering & Technology. I am new to open source contribution but I am well aware of C/C++, SQL and I will learn Python before the end of the first week of February. I would love to contribute to your organization but don’t know how!!Could you please guide me on how and from where to start?Hope to hear from you soon Regards Vrund V Shah Sent from Mail for Windows \n",
"msg_date": "Fri, 21 Jan 2022 11:39:22 +0530",
"msg_from": "vrund v shah <vrund3008@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to get started with contribution"
},
{
"msg_contents": "Greetings,\n\n* vrund v shah (vrund3008@gmail.com) wrote:\n> I am Vrund V Shah, a computer science undergrad. I have just completed my\n> 3^rd semester at G H Patel College of Engineering & Technology. I am new\n> to open source contribution but I am well aware of C/C++, SQL and I will\n> learn Python before the end of the first week of February. I would love to\n> contribute to your organization but don’t know how!!\n> \n> Could you please guide me on how and from where to start?\n\nI'd suggest you start with patch reviews if you're interested in working\non the core PostgreSQL server code. Information on that is available\nhere:\n\nhttps://wiki.postgresql.org/wiki/Reviewing_a_Patch\n\nThanks,\n\nStephen",
"msg_date": "Fri, 21 Jan 2022 15:28:32 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "On 1/21/22 21:28, Stephen Frost wrote:\n> Greetings,\n> \n> * vrund v shah (vrund3008@gmail.com) wrote:\n>> I am Vrund V Shah, a computer science undergrad. I have just completed my\n>> 3^rd semester at G H Patel College of Engineering & Technology. I am new\n>> to open source contribution but I am well aware of C/C++, SQL and I will\n>> learn Python before the end of the first week of February. I would love to\n>> contribute to your organization but don’t know how!!\n>>\n>> Could you please guide me on how and from where to start?\n> \n> I'd suggest you start with patch reviews if you're interested in working\n> on the core PostgreSQL server code. Information on that is available\n> here:\n> \n> https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n> \n\nYeah, that's what I recommend people who ask me this question.\n\nHowever, that wiki page is more about the process than about \"what\" to \ndo, so my advice to the OP would be to first go to the current CF [1] \nand look for patches that would be genuinely useful for him/her (e.g. \nbecause of work). And do the review by following the wiki page.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 21 Jan 2022 21:41:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 1/21/22 21:28, Stephen Frost wrote:\n> >* vrund v shah (vrund3008@gmail.com) wrote:\n> >> I am Vrund V Shah, a computer science undergrad. I have just completed my\n> >> 3^rd semester at G H Patel College of Engineering & Technology. I am new\n> >> to open source contribution but I am well aware of C/C++, SQL and I will\n> >> learn Python before the end of the first week of February. I would love to\n> >> contribute to your organization but don’t know how!!\n> >>\n> >> Could you please guide me on how and from where to start?\n> >\n> >I'd suggest you start with patch reviews if you're interested in working\n> >on the core PostgreSQL server code. Information on that is available\n> >here:\n> >\n> >https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n> >\n> \n> Yeah, that's what I recommend people who ask me this question.\n> \n> However, that wiki page is more about the process than about \"what\" to do,\n> so my advice to the OP would be to first go to the current CF [1] and look\n> for patches that would be genuinely useful for him/her (e.g. because of\n> work). And do the review by following the wiki page.\n\nYeah. There's also this:\n\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\n\nwhere the first topic is about getting involved in PG development, and\nthere's:\n\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n\nwhich covers a bit more about mailing lists and such.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 21 Jan 2022 15:53:35 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "Thank you for your valuable guidance.\nI will surely look at the links and if have any queries then I will contact\nyou.\n\nregards\nVrund V Shah\n\nOn Sat, Jan 22, 2022 at 2:23 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> > On 1/21/22 21:28, Stephen Frost wrote:\n> > >* vrund v shah (vrund3008@gmail.com) wrote:\n> > >> I am Vrund V Shah, a computer science undergrad. I have just\n> completed my\n> > >> 3^rd semester at G H Patel College of Engineering & Technology. I\n> am new\n> > >> to open source contribution but I am well aware of C/C++, SQL and\n> I will\n> > >> learn Python before the end of the first week of February. I would\n> love to\n> > >> contribute to your organization but don’t know how!!\n> > >>\n> > >> Could you please guide me on how and from where to start?\n> > >\n> > >I'd suggest you start with patch reviews if you're interested in working\n> > >on the core PostgreSQL server code. Information on that is available\n> > >here:\n> > >\n> > >https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n> > >\n> >\n> > Yeah, that's what I recommend people who ask me this question.\n> >\n> > However, that wiki page is more about the process than about \"what\" to\n> do,\n> > so my advice to the OP would be to first go to the current CF [1] and\n> look\n> > for patches that would be genuinely useful for him/her (e.g. because of\n> > work). And do the review by following the wiki page.\n>\n> Yeah. There's also this:\n>\n> https://wiki.postgresql.org/wiki/Developer_FAQ\n>\n> where the first topic is about getting involved in PG development, and\n> there's:\n>\n> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n>\n> which covers a bit more about mailing lists and such.\n>\n> Thanks,\n>\n> Stephen\n>\n\nThank you for your valuable guidance. 
I will surely look at the links and if have any queries then I will contact you.regards Vrund V ShahOn Sat, Jan 22, 2022 at 2:23 AM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 1/21/22 21:28, Stephen Frost wrote:\n> >* vrund v shah (vrund3008@gmail.com) wrote:\n> >> I am Vrund V Shah, a computer science undergrad. I have just completed my\n> >> 3^rd semester at G H Patel College of Engineering & Technology. I am new\n> >> to open source contribution but I am well aware of C/C++, SQL and I will\n> >> learn Python before the end of the first week of February. I would love to\n> >> contribute to your organization but don’t know how!!\n> >>\n> >> Could you please guide me on how and from where to start?\n> >\n> >I'd suggest you start with patch reviews if you're interested in working\n> >on the core PostgreSQL server code. Information on that is available\n> >here:\n> >\n> >https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n> >\n> \n> Yeah, that's what I recommend people who ask me this question.\n> \n> However, that wiki page is more about the process than about \"what\" to do,\n> so my advice to the OP would be to first go to the current CF [1] and look\n> for patches that would be genuinely useful for him/her (e.g. because of\n> work). And do the review by following the wiki page.\n\nYeah. There's also this:\n\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\n\nwhere the first topic is about getting involved in PG development, and\nthere's:\n\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n\nwhich covers a bit more about mailing lists and such.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 22 Jan 2022 08:12:07 +0530",
"msg_from": "vrund shah <vrund3008@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "Respected Sir\\Mam\n\nI am already using PostgreSQL for my college purpose and for learning SQL.\nI have learned SQL from udemy courses with instructor Jose Portilla. and I\nam well aware of PostgreSQL and PGAdmin.\n\nRegards\nVrund V Shah\n\nOn Sat, Jan 22, 2022 at 8:12 AM vrund shah <vrund3008@gmail.com> wrote:\n\n> Thank you for your valuable guidance.\n> I will surely look at the links and if have any queries then I will\n> contact you.\n>\n> regards\n> Vrund V Shah\n>\n> On Sat, Jan 22, 2022 at 2:23 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n>> Greetings,\n>>\n>> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>> > On 1/21/22 21:28, Stephen Frost wrote:\n>> > >* vrund v shah (vrund3008@gmail.com) wrote:\n>> > >> I am Vrund V Shah, a computer science undergrad. I have just\n>> completed my\n>> > >> 3^rd semester at G H Patel College of Engineering & Technology. I\n>> am new\n>> > >> to open source contribution but I am well aware of C/C++, SQL and\n>> I will\n>> > >> learn Python before the end of the first week of February. I\n>> would love to\n>> > >> contribute to your organization but don’t know how!!\n>> > >>\n>> > >> Could you please guide me on how and from where to start?\n>> > >\n>> > >I'd suggest you start with patch reviews if you're interested in\n>> working\n>> > >on the core PostgreSQL server code. Information on that is available\n>> > >here:\n>> > >\n>> > >https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n>> > >\n>> >\n>> > Yeah, that's what I recommend people who ask me this question.\n>> >\n>> > However, that wiki page is more about the process than about \"what\" to\n>> do,\n>> > so my advice to the OP would be to first go to the current CF [1] and\n>> look\n>> > for patches that would be genuinely useful for him/her (e.g. because of\n>> > work). And do the review by following the wiki page.\n>>\n>> Yeah. 
There's also this:\n>>\n>> https://wiki.postgresql.org/wiki/Developer_FAQ\n>>\n>> where the first topic is about getting involved in PG development, and\n>> there's:\n>>\n>> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n>>\n>> which covers a bit more about mailing lists and such.\n>>\n>> Thanks,\n>>\n>> Stephen\n>>\n>\n\nRespected Sir\\MamI am already using PostgreSQL for my college purpose and for learning SQL.I have learned SQL from udemy courses with instructor Jose Portilla. and I am well aware of PostgreSQL and PGAdmin.Regards Vrund V ShahOn Sat, Jan 22, 2022 at 8:12 AM vrund shah <vrund3008@gmail.com> wrote:Thank you for your valuable guidance. I will surely look at the links and if have any queries then I will contact you.regards Vrund V ShahOn Sat, Jan 22, 2022 at 2:23 AM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 1/21/22 21:28, Stephen Frost wrote:\n> >* vrund v shah (vrund3008@gmail.com) wrote:\n> >> I am Vrund V Shah, a computer science undergrad. I have just completed my\n> >> 3^rd semester at G H Patel College of Engineering & Technology. I am new\n> >> to open source contribution but I am well aware of C/C++, SQL and I will\n> >> learn Python before the end of the first week of February. I would love to\n> >> contribute to your organization but don’t know how!!\n> >>\n> >> Could you please guide me on how and from where to start?\n> >\n> >I'd suggest you start with patch reviews if you're interested in working\n> >on the core PostgreSQL server code. Information on that is available\n> >here:\n> >\n> >https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n> >\n> \n> Yeah, that's what I recommend people who ask me this question.\n> \n> However, that wiki page is more about the process than about \"what\" to do,\n> so my advice to the OP would be to first go to the current CF [1] and look\n> for patches that would be genuinely useful for him/her (e.g. 
because of\n> work). And do the review by following the wiki page.\n\nYeah. There's also this:\n\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\n\nwhere the first topic is about getting involved in PG development, and\nthere's:\n\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n\nwhich covers a bit more about mailing lists and such.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 22 Jan 2022 08:17:23 +0530",
"msg_from": "vrund shah <vrund3008@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "Respected Sir\\Mam\nThis year I am planning to take part in GSOC 2022 in the PostgreSQL\norganization.\n\nRegards\nVrund V Shah\n\nOn Sat, Jan 22, 2022 at 8:17 AM vrund shah <vrund3008@gmail.com> wrote:\n\n> Respected Sir\\Mam\n>\n> I am already using PostgreSQL for my college purpose and for learning SQL.\n> I have learned SQL from udemy courses with instructor Jose Portilla. and I\n> am well aware of PostgreSQL and PGAdmin.\n>\n> Regards\n> Vrund V Shah\n>\n> On Sat, Jan 22, 2022 at 8:12 AM vrund shah <vrund3008@gmail.com> wrote:\n>\n>> Thank you for your valuable guidance.\n>> I will surely look at the links and if have any queries then I will\n>> contact you.\n>>\n>> regards\n>> Vrund V Shah\n>>\n>> On Sat, Jan 22, 2022 at 2:23 AM Stephen Frost <sfrost@snowman.net> wrote:\n>>\n>>> Greetings,\n>>>\n>>> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>>> > On 1/21/22 21:28, Stephen Frost wrote:\n>>> > >* vrund v shah (vrund3008@gmail.com) wrote:\n>>> > >> I am Vrund V Shah, a computer science undergrad. I have just\n>>> completed my\n>>> > >> 3^rd semester at G H Patel College of Engineering & Technology.\n>>> I am new\n>>> > >> to open source contribution but I am well aware of C/C++, SQL\n>>> and I will\n>>> > >> learn Python before the end of the first week of February. I\n>>> would love to\n>>> > >> contribute to your organization but don’t know how!!\n>>> > >>\n>>> > >> Could you please guide me on how and from where to start?\n>>> > >\n>>> > >I'd suggest you start with patch reviews if you're interested in\n>>> working\n>>> > >on the core PostgreSQL server code. 
Information on that is available\n>>> > >here:\n>>> > >\n>>> > >https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n>>> > >\n>>> >\n>>> > Yeah, that's what I recommend people who ask me this question.\n>>> >\n>>> > However, that wiki page is more about the process than about \"what\" to\n>>> do,\n>>> > so my advice to the OP would be to first go to the current CF [1] and\n>>> look\n>>> > for patches that would be genuinely useful for him/her (e.g. because of\n>>> > work). And do the review by following the wiki page.\n>>>\n>>> Yeah. There's also this:\n>>>\n>>> https://wiki.postgresql.org/wiki/Developer_FAQ\n>>>\n>>> where the first topic is about getting involved in PG development, and\n>>> there's:\n>>>\n>>> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n>>>\n>>> which covers a bit more about mailing lists and such.\n>>>\n>>> Thanks,\n>>>\n>>> Stephen\n>>>\n>>",
"msg_date": "Sat, 22 Jan 2022 08:53:00 +0530",
"msg_from": "vrund shah <vrund3008@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
},
{
"msg_contents": "Greetings,\n\n* vrund shah (vrund3008@gmail.com) wrote:\n> Thank you for your valuable guidance.\n> I will surely look at the links and if have any queries then I will contact\n> you.\n\nOn these mailing lists, we prefer that you reply 'in-line', as I'm doing\nhere, and not use 'top-posting' (as you did in your replies).\n\n* vrund shah (vrund3008@gmail.com) wrote:\n> I am already using PostgreSQL for my college purpose and for learning SQL.\n> I have learned SQL from udemy courses with instructor Jose Portilla. and I\n> am well aware of PostgreSQL and PGAdmin.\n\nGreat. Being familiar with SQL will certainly help.\n\n* vrund shah (vrund3008@gmail.com) wrote:\n> This year I am planning to take part in GSOC 2022 in the PostgreSQL\n> organization.\n\nGlad to hear that. Note that while we do intend to submit for GSoC\n2022, there's no guarantee that we will be selected.\n\nThat said, this is a great way to get started. If you already have a\nproject idea in mind, I encourage you to post to this list what that\nidea is and ask for feedback. If you don't have a project idea already\nthen you could review the project ideas page:\n\nhttps://wiki.postgresql.org/wiki/GSoC_2022\n\nNote that the current page lists projects from last year and will\ncontinue to be updated between now and the GSoC 2022 organization\nsubmission deadline. Still, hopefully reviewing the ones there will\ngive you some thoughts about what you might be interested in working on.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 22 Jan 2022 12:28:06 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: How to get started with contribution"
}
] |
[
{
"msg_contents": "Thomas Munro pointed out this failure to me on fairywren:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-01-21%2020%3A10%3A22\n\nHe theorizes that I need some perl2host magic in there, which may well\nbe true. But I also noticed this:\n\n# Running: pg_basebackup --no-sync -cfast --target\nserver:/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/tmp_test_Ag8r/backuponserver\n-X none\npg_basebackup: error: could not initiate base backup: ERROR:\nunrecognized target: \"server;C\"\n\n\"server\" is a valid backup target, but \"server;C\" is not. And I think\nthis must be a bug on the client side, because the server logs the\ngenerated query:\n\n2022-01-21 20:53:11.618 UTC [8404:10] 010_pg_basebackup.pl LOG:\nreceived replication command: BASE_BACKUP ( LABEL 'pg_basebackup base\nbackup', PROGRESS, CHECKPOINT 'fast', MANIFEST 'yes',\nTABLESPACE_MAP, TARGET 'server;C', TARGET_DETAIL\n'\\\\tools\\\\msys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\src\\\\bin\\\\pg_basebackup\\\\tmp_check\\\\tmp_test_Ag8r\\\\backuponserver')\n\nSo it's not that the server parses the query and introduces gibberish\n-- the client has already introduced gibberish when constructing the\nquery. But the code to do that is pretty straightforward -- we just\ncall strchr to find the colon in the backup target, and then\npnstrdup() the part before the colon and use the latter part as-is. If\npnstrdup were failing to add a terminating \\0 then this would be quite\nplausible, but I think it shouldn't. Unless the operating sytem's\nstrnlen() is buggy? That seems like a stretch, so feel free to tell me\nwhat obvious stupid thing I did here and am not seeing...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Jan 2022 16:42:24 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> \"server\" is a valid backup target, but \"server;C\" is not. And I think\n> this must be a bug on the client side, because the server logs the\n> generated query:\n\n> 2022-01-21 20:53:11.618 UTC [8404:10] 010_pg_basebackup.pl LOG:\n> received replication command: BASE_BACKUP ( LABEL 'pg_basebackup base\n> backup', PROGRESS, CHECKPOINT 'fast', MANIFEST 'yes',\n> TABLESPACE_MAP, TARGET 'server;C', TARGET_DETAIL\n> '\\\\tools\\\\msys64\\\\home\\\\pgrunner\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\src\\\\bin\\\\pg_basebackup\\\\tmp_check\\\\tmp_test_Ag8r\\\\backuponserver')\n\n> So it's not that the server parses the query and introduces gibberish\n> -- the client has already introduced gibberish when constructing the\n> query. But the code to do that is pretty straightforward -- we just\n> call strchr to find the colon in the backup target, and then\n> pnstrdup() the part before the colon and use the latter part as-is. If\n> pnstrdup were failing to add a terminating \\0 then this would be quite\n> plausible, but I think it shouldn't. Unless the operating sytem's\n> strnlen() is buggy? That seems like a stretch, so feel free to tell me\n> what obvious stupid thing I did here and am not seeing...\n\nI think the backup_target string was already corrupted that way when\npg_basebackup absorbed it from optarg. It's pretty hard to believe that\nthe strchr/pnstrdup stanza got it wrong. 
However, comparing the\nTARGET_DETAIL to what the TAP test says it issued:\n\n# Running: pg_basebackup --no-sync -cfast --target server:/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/tmp_test_Ag8r/backuponserver -X none\npg_basebackup: error: could not initiate base backup: ERROR: unrecognized target: \"server;C\"\n\nit's absolutely clear that something decided to munge the target string.\nGiven that colon is reserved in Windows filename syntax, it's not\nso surprising if it munged it wrong, or at least more aggressively\nthan you expected.\n\nI kinda wonder if this notation for the target was well-chosen.\nKeeping the file name strictly separate from the \"type\" keyword\nmight be a wiser move. Quite aside from Windows-isms, there\nare going to be usages where this is hard to tell from a URL.\n(If memory serves, double leading slash is significant to some\nnetworked file systems.)\n\nWhile we're on the subject of ill-chosen option syntax: \"-cfast\"\nwith non double dashes? Really? That's horribly ambiguous.\nMost programs would parse something like that as five single-letter\noptions, and most users who know Unix would expect it to mean that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:09:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 10:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> # Running: pg_basebackup --no-sync -cfast --target\n> server:/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/tmp_test_Ag8r/backuponserver\n> -X none\n> pg_basebackup: error: could not initiate base backup: ERROR:\n> unrecognized target: \"server;C\"\n>\n> \"server\" is a valid backup target, but \"server;C\" is not. And I think\n> this must be a bug on the client side, because the server logs the\n> generated query:\n\nIt looks a bit like msys perl could be recognising\n\"server:/home/pgrunner/...\" and converting it to\n\"server;C:\\tools\\msys64\\home\\pgrunner\\...\". From a bit of light\ngoogling I see that such conversions happen in msys perl's system()\nunless you turn them off with MSYS_NO_PATHCONV, and then we'd have to\ndo it ourselves in the right places.\n\n\n",
"msg_date": "Sat, 22 Jan 2022 11:10:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 5:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think the backup_target string was already corrupted that way when\n> pg_basebackup absorbed it from optarg. It's pretty hard to believe that\n> the strchr/pnstrdup stanza got it wrong. However, comparing the\n> TARGET_DETAIL to what the TAP test says it issued:\n>\n> # Running: pg_basebackup --no-sync -cfast --target server:/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/tmp_test_Ag8r/backuponserver -X none\n> pg_basebackup: error: could not initiate base backup: ERROR: unrecognized target: \"server;C\"\n>\n> it's absolutely clear that something decided to munge the target string.\n> Given that colon is reserved in Windows filename syntax, it's not\n> so surprising if it munged it wrong, or at least more aggressively\n> than you expected.\n\nNothing in the Perl code tells it that the particular argument in\nquestion is a path rather than anything else, so it must be applying\nsome heuristic to decide whether to munge it. That's horrible.\n\n> I kinda wonder if this notation for the target was well-chosen.\n> Keeping the file name strictly separate from the \"type\" keyword\n> might be a wiser move. Quite aside from Windows-isms, there\n> are going to be usages where this is hard to tell from a URL.\n> (If memory serves, double leading slash is significant to some\n> networked file systems.)\n\nMaybe. I think it's important that the notation is not ridiculously\nverbose, and -t server --target-detail $PATH is a LOT more typing.\n\n> While we're on the subject of ill-chosen option syntax: \"-cfast\"\n> with non double dashes? Really? That's horribly ambiguous.\n> Most programs would parse something like that as five single-letter\n> options, and most users who know Unix would expect it to mean that.\n\nI'm not sure whether you're complaining that we accept that syntax or\nusing it, but AFAIK I'm responsible for neither. 
I think the syntax\nhas been accepted since pg_basebackup was added in 2011, and Andres\nadded it to this test case earlier this week (with -cfast in the\nsubject line of the commit message). FWIW, though, I've been aware of\nthat syntax for a long time and never thought it was a problem. I\nusually spell the option in exactly that way when I use it, and I'm\nrelatively sure that things I've given to customers would break if we\nremoved support for it. I don't know how we'd do that anyway, since\nall that's happening here is a call to getopt_long().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:26:32 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/21/22 17:10, Thomas Munro wrote:\n> On Sat, Jan 22, 2022 at 10:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> # Running: pg_basebackup --no-sync -cfast --target\n>> server:/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/tmp_test_Ag8r/backuponserver\n>> -X none\n>> pg_basebackup: error: could not initiate base backup: ERROR:\n>> unrecognized target: \"server;C\"\n>>\n>> \"server\" is a valid backup target, but \"server;C\" is not. And I think\n>> this must be a bug on the client side, because the server logs the\n>> generated query:\n> It looks a bit like msys perl could be recognising\n> \"server:/home/pgrunner/...\" and converting it to\n> \"server;C:\\tools\\msys64\\home\\pgrunner\\...\". From a bit of light\n> googling I see that such conversions happen in msys perl's system()\n> unless you turn them off with MSYS_NO_PATHCONV, and then we'd have to\n> do it ourselves in the right places.\n\n\n\nc.f. src/bin/pg_verifybackup/t/003_corruption.pl which says:\n\n\n my $source_ts_prefix = $source_ts_path;\n $source_ts_prefix =~ s!(^[A-Z]:/[^/]*)/.*!$1!;\n ...\n\n # See https://www.msys2.org/wiki/Porting/#filesystem-namespaces\n local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;\n\n\nProbably in this case just setting it to 'server:' would do the trick.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:35:16 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jan 21, 2022 at 5:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> While we're on the subject of ill-chosen option syntax: \"-cfast\"\n>> with non double dashes? Really? That's horribly ambiguous.\n\n> I'm not sure whether you're complaining that we accept that syntax or\n> using it, but AFAIK I'm responsible for neither. I think the syntax\n> has been accepted since pg_basebackup was added in 2011, and Andres\n> added it to this test case earlier this week (with -cfast in the\n> subject line of the commit message).\n\npg_basebackup's help defines the syntax as\n\n -c, --checkpoint=fast|spread\n set fast or spread checkpointing\n\nwhich I'd read as requiring a space (or possibly equal sign)\nbetween \"-c\" and \"fast\". If it works as written in this test,\nthat's an accident of the particular getopt implementation,\nand I'll bet it won't be too long before we come across\na getopt that doesn't like it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:35:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> c.f. src/bin/pg_verifybackup/t/003_corruption.pl which says:\n> my $source_ts_prefix = $source_ts_path;\n> $source_ts_prefix =~ s!(^[A-Z]:/[^/]*)/.*!$1!;\n> ...\n\n> # See https://www.msys2.org/wiki/Porting/#filesystem-namespaces\n> local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;\n\n> Probably in this case just setting it to 'server:' would do the trick.\n\nThe point I was trying to make is that if we have to jump through\nthat sort of hoop in the test scripts, then real users are going\nto have to jump through it as well, and they won't like that\n(and we will get bug reports about it). It'd be better to design\nthe option syntax to avoid such requirements.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:42:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-21 17:42:45 -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > c.f. src/bin/pg_verifybackup/t/003_corruption.pl which says:\n> >     my $source_ts_prefix = $source_ts_path;\n> >     $source_ts_prefix =~ s!(^[A-Z]:/[^/]*)/.*!$1!;\n> >     ...\n>\n> >     # See https://www.msys2.org/wiki/Porting/#filesystem-namespaces\n> >     local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;\n>\n> > Probably in this case just setting it to 'server:' would do the trick.\n>\n> The point I was trying to make is that if we have to jump through\n> that sort of hoop in the test scripts, then real users are going\n> to have to jump through it as well, and they won't like that\n> (and we will get bug reports about it). It'd be better to design\n> the option syntax to avoid such requirements.\n\nNormal users aren't going to invoke a \"native\" basebackup from inside msys. I\nassume the translation happens because an \"msys world\" perl invokes\na \"native\" pg_basebackup via msys system(), right? If pg_basebackup instead is\n\"normally\" invoked from a windows terminal, or anything else \"native\" windows,\nthe problem won't exist, no?\n\nAs we're building a \"native\" postgres in this case, none of our tools should\ninternally have such translations happening. So I don't think it'll be a huge\nissue for users themselves?\n\nNot that I think that there are all that many users of mingw built postgres on\nwindows... I think it's mostly msvc built postgres in that world?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 Jan 2022 15:04:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 5:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The point I was trying to make is that if we have to jump through\n> that sort of hoop in the test scripts, then real users are going\n> to have to jump through it as well, and they won't like that\n> (and we will get bug reports about it). It'd be better to design\n> the option syntax to avoid such requirements.\n\nWell, as Andrew points out, pg_basebackup's -T foo=bar syntax causes\nthe same issue, so you'll need to redesign that, too. But even that is\nnot really a proper fix. The real root of this problem is that the\noperating system's notion of a valid path differs from PostgreSQL's\nnotion of a valid path on this platform, and I imagine that fixing\nthat is a rather large project.\n\nISTM that you're basically just complaining about options syntax that\nyou don't like, but I think there's nothing particularly worse about\nthis syntax than lots of other things we type all the time. psql -v\nVAR=VALUE -P OTHERKINDOFVAR=OTHERVALUE? curl -u USER:PASSWORD?\npg_basebackup -T OLD=NEW? perl -d[t]:MODULE=OPT,OPT? I mean, that last\none actually seems kinda horrible and if this were as bad as that I'd\nsay yeah, it should be redesigned. But I don't think it is. There's\nplenty of precedent for bundling closely-related values into a single\ncommand-line option, which is all I've done here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Jan 2022 18:21:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-21 17:26:32 -0500, Robert Haas wrote:\n> I think the syntax has been accepted since pg_basebackup was added in 2011,\n> and Andres added it to this test case earlier this week (with -cfast in the\n> subject line of the commit message).\n\nThe reason I used -cfast instead of -c fast or --checkpoint=fast is that the\nway perltidy formats leads to very wide lines already, and making them even\nlonger seemed unattractive...\n\nGiven the -cfast syntax successfully passed tests on at least AIX, freebsd,\nlinux, macos, netbsd, openbsd, windows msvc, windows msys, I'm not too worried\nabout portability either.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 Jan 2022 15:28:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 5:35 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> # See https://www.msys2.org/wiki/Porting/#filesystem-namespaces\n> local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;\n> Probably in this case just setting it to 'server:' would do the trick.\n\nOh, thanks for the tip. Do you want to push a commit that does that,\nor ... should I do it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Jan 2022 21:55:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 3:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jan 21, 2022 at 5:35 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > # See https://www.msys2.org/wiki/Porting/#filesystem-namespaces\n> > local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;\n> > Probably in this case just setting it to 'server:' would do the trick.\n>\n> Oh, thanks for the tip. Do you want to push a commit that does that,\n> or ... should I do it?\n\nJust a thought: Would it prevent the magic path translation and all\njust work if the path were already in Windows form? So, if we did\njust this change at the top:\n\n-my $tempdir = PostgreSQL::Test::Utils::tempdir;\n+my $tempdir = PostgreSQL::Test::Utils::perl2host(PostgreSQL::Test::Utils::tempdir);\n\n\n",
"msg_date": "Sat, 22 Jan 2022 16:43:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/21/22 18:04, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-21 17:42:45 -0500, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> c.f. src/bin/pg_verifybackup/t/003_corruption.pl which says:\n>>> my $source_ts_prefix = $source_ts_path;\n>>> $source_ts_prefix =~ s!(^[A-Z]:/[^/]*)/.*!$1!;\n>>> ...\n>>> # See https://www.msys2.org/wiki/Porting/#filesystem-namespaces\n>>> local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;\n>>> Probably in this case just setting it to 'server:' would do the trick.\n>> The point I was trying to make is that if we have to jump through\n>> that sort of hoop in the test scripts, then real users are going\n>> to have to jump through it as well, and they won't like that\n>> (and we will get bug reports about it). It'd be better to design\n>> the option syntax to avoid such requirements.\n> Normal users aren't going to invoke a \"native\" basebackup from inside msys. I\n> assume the translation happens because an \"msys world\" perl invokes\n> a \"native\" pg_basebackup via msys system(), right? If pg_basebackup instead is\n> \"normally\" invoked from a windows terminal, or anything else \"native\" windows,\n> the problem won't exist, no?\n>\n> As we're building a \"native\" postgres in this case, none of our tools should\n> internally have such translations happening. So I don't think it'll be a huge\n> issue for users themselves?\n\n\nAll true. This is purely an issue for our testing regime, and not for\nend users.\n\n\n>\n> Not that I think that there are all that many users of mingw built postgres on\n> windows... I think it's mostly msvc built postgres in that world?\n>\n\nThe vast majority use the installer which is built with MSVC. Very few\nin my experience build their own.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 22 Jan 2022 11:00:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On 1/21/22 22:43, Thomas Munro wrote:\n> On Sat, Jan 22, 2022 at 3:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Fri, Jan 21, 2022 at 5:35 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> # See https://www.msys2.org/wiki/Porting/#filesystem-namespaces\n>>> local $ENV{MSYS2_ARG_CONV_EXCL} = $source_ts_prefix;\n>>> Probably in this case just setting it to 'server:' would do the trick.\n>> Oh, thanks for the tip. Do you want to push a commit that does that,\n>> or ... should I do it?\n> Just a thought: Would it prevent the magic path translation and all\n> just work if the path were already in Windows form? So, if we did\n> just this change at the top:\n>\n> -my $tempdir = PostgreSQL::Test::Utils::tempdir;\n> +my $tempdir = PostgreSQL::Test::Utils::perl2host(PostgreSQL::Test::Utils::tempdir);\n\n\nIt's not as simple as that :-( But you're on the right track. My\nsuggestion above doesn't work.\n\nThe rule for paths is: when you're passing a path to an external program\nthat's not msys aware (typically, one of our build artefacts like psql\nor pg_basebackup) it needs to be a native path. But when you're passing\nit to a perl function (e.g. mkdir) or to an external program that's msys\naware it needs to be a virtual path, i.e. one not mangled by perl2host.\n\nSome recent commits to this file especially have not obeyed this rule.\nHere's a patch that does it consistently for the whole file. I have\ntested it on a system very like fairywren, and the test passes.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 23 Jan 2022 12:20:05 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 12:20 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> It's not as simple as that :-( But you're on the right track. My\n> suggestion above doesn't work.\n>\n> The rule for paths is: when you're passing a path to an external program\n> that's not msys aware (typically, one of our build artefacts like psql\n> or pg_basebackup) it needs to be a native path. But when you're passing\n> it to a perl function (e.g. mkdir) or to an external program that's msys\n> aware it needs to be a virtual path, i.e. one not mangled by perl2host.\n>\n> Some recent commits to this file especially have not obeyed this rule.\n> Here's a patch that does it consistently for the whole file. I have\n> tested it on a system very like fairywren, and the test passes.\n\nI can't understand how this would prevent server:/what/ever from\ngetting turned into server;c:\\what\\ever. But if it does, great!\n\nMaybe we need to have a README in the tree somewhere that tries to\nexplain this. Or maybe we should make our build artifacts msys-aware,\nif that's possible, so that this just works. Or maybe supporting msys\nis not worth the trouble.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 23 Jan 2022 14:48:55 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Maybe we need to have a README in the tree somewhere that tries to\n> explain this. Or maybe we should make our build artifacts msys-aware,\n> if that's possible, so that this just works. Or maybe supporting msys\n> is not worth the trouble.\n\nI've been wondering that last myself. Supporting Windows-native is\nalready a huge amount of work, which we put up with because there\nare a lot of users. If msys is going to add another large chunk of\nwork, has it got enough users to justify that?\n\nThe recent argument that this behavior isn't user-visible doesn't do\nanything to mollify me on that point; it appears to me to be tantamount\nto a concession that no real users actually care about msys.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Jan 2022 15:07:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/23/22 15:07, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Maybe we need to have a README in the tree somewhere that tries to\n>> explain this. Or maybe we should make our build artifacts msys-aware,\n>> if that's possible, so that this just works. Or maybe supporting msys\n>> is not worth the trouble.\n> I've been wondering that last myself. Supporting Windows-native is\n> already a huge amount of work, which we put up with because there\n> are a lot of users. If msys is going to add another large chunk of\n> work, has it got enough users to justify that?\n>\n> The recent argument that this behavior isn't user-visible doesn't do\n> anything to mollify me on that point; it appears to me to be tantamount\n> to a concession that no real users actually care about msys.\n\n\nMsys is a unix-like environment that is useful to build Postgres. It's\nnot intended as a general runtime environment. We therefore don't build\nmsys-aware Postgres. We use msys to build standalone Postgres binaries\nthat don't need or use any msys runtime. There is nothing in the least\nbit new about this - that's the way it's been since day one of the\nWindows port nearly 20 years ago.\n\nSpeaking as someone who (for my sins) regularly deals with problems on\nWindows, I find msys much easier to deal with than VisualStudio,\nprobably because it's so much like what I use elsewhere. So I think\ndropping msys support would be a serious mistake.\n\nThe most common issues we get are around this issue of virtualized paths\nin the TAP tests. If people followed the rule I suggested upthread, 99%\nof those problems would go away. I realize it's annoying - I've been\ncaught by it myself on more than one occasion. Maybe there's a way to\navoid it, but if there is I'm unaware of it. But I don't think it's in\nany way a good reason to drop msys support.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 23 Jan 2022 16:09:01 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The most common issues we get are around this issue of virtualized paths\n> in the TAP tests. If people followed the rule I suggested upthread, 99%\n> of those problems would go away. I realize it's annoying - I've been\n> caught by it myself on more than one occasion. Maybe there's a way to\n> avoid it, but if there is I'm unaware of it. But I don't think it's in\n> any way a good reason to drop msys support.\n\nWell, let's go back to Robert's other suggestion: some actual\ndocumentation of these rules, in the source tree, might help\npeople to follow them. src/test/perl/README seems like an\nappropriate place.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Jan 2022 16:15:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-23 16:09:01 -0500, Andrew Dunstan wrote:\n> Msys is a unix-like environment that is useful to build Postgres. It's\n> not intended as a general runtime environment. We therefore don't build\n> msys-aware Postgres. We use msys to build standalone Postgres binaries\n> that don't need or use any msys runtime. There is nothing in the least\n> bit new about this - that's the way it's been since day one of the\n> Windows port nearly 20 years ago.\n>\n> Speaking as someone who (for my sins) regularly deals with problems on\n> Windows, I find msys much easier to deal with than VisualStudio,\n> probably because it's so much like what I use elsewhere. So I think\n> dropping msys support would be a serious mistake.\n\nI agree that msys support is useful (although configure is *so* slow that I\nfind its usefulness reduced substantially).\n\n\n> The most common issues we get are around this issue of virtualized paths\n> in the TAP tests. If people followed the rule I suggested upthread, 99%\n> of those problems would go away. I realize it's annoying - I've been\n> caught by it myself on more than one occasion. Maybe there's a way to\n> avoid it, but if there is I'm unaware of it. But I don't think it's in\n> any way a good reason to drop msys support.\n\nNeeding to sprinkle perl2host and MSYS2_ARG_CONV_EXCL over a good number of\ntests, getting weird errors when failing, etc IMO isn't a scalable approach,\nfor a platform that most of us never use.\n\nCan't we solve this in a generic way? E.g. by insisting that the test run with\na native perl and normalizing the few virtual paths we get invoked with\ncentrally? Making the msys initial setup a bit more cumbersome would IMO be\nan OK price to pay for making it maintainable / predictable from other\nplatforms, as long as we have some decent docs and decent error messages.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 23 Jan 2022 13:31:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/23/22 16:31, Andres Freund wrote:\n>> The most common issues we get are around this issue of virtualized paths\n>> in the TAP tests. If people followed the rule I suggested upthread, 99%\n>> of those problems would go away. I realize it's annoying - I've been\n>> caught by it myself on more than one occasion. Maybe there's a way to\n>> avoid it, but if there is I'm unaware of it. But I don't think it's in\n>> any way a good reason to drop msys support.\n> Needing to sprinkle perl2host and MSYS2_ARG_CONV_EXCL over a good number of\n> tests, getting weird errors when failing, etc IMO isn't a scalable approach,\n> for a platform that most of use never use.\n>\n> Can't we solve this in a generic way? E.g. by insisting that the test run with\n> a native perl and normalizing the few virtual paths we get invoked with\n> centrally? Making the msys initial setup a bit more cumbersome would IMO be\n> an OK price to pay for making it maintainable / predictable from other\n> platforms, as long as we have some decent docs and decent error messages.\n>\n\nNice idea. I have a suspicion that it's going to be harder than you\nthink, but I'll be very happy to be proved wrong.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 23 Jan 2022 17:38:26 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-23 17:38:26 -0500, Andrew Dunstan wrote:\n> Nice idea. I have a suspicion that it's going to be harder than you\n> think, but I'll be very happy to be proved wrong.\n\nFWIW, a manual invocation of the pg_basebackup tests works via a ucrt perl\n(ucrt64/mingw-w64-ucrt-x86_64-perl package).\n\nIt did require installing IPC::Run via cpan though, or at least I think it\ndid. That's a bit annoying. But not all that onerous if we document it?\n\nI did make sure that the ucrt perl deals with windows paths. For good measure\nI forced an early return in perl2host.\n\n\nI think we would have to modify the values for a few environment\nvariables. Stuff like PG_REGRESS, TAR etc. Either where we specify them, or\nUtils.pm or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 23 Jan 2022 16:13:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 4:09 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> The most common issues we get are around this issue of virtualized paths\n> in the TAP tests. If people followed the rule I suggested upthread, 99%\n> of those problems would go away.\n\nWell, that makes it sound like it's the fault of people for not\nfollowing the rules, but I don't think that's really a fair\nassessment. Even your first guess as to how to solve this particular\nproblem wasn't correct, and if you can't guess right on the first try,\nI don't know how anyone else is supposed to do it. I still don't even\nunderstand why your first guess wasn't right. I feel like every time I\ntry to add a TAP test Msys blows up, and I can't figure out how to fix\nit myself. Which is not a great feeling.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 23 Jan 2022 22:52:14 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/23/22 22:52, Robert Haas wrote:\n> On Sun, Jan 23, 2022 at 4:09 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> The most common issues we get are around this issue of virtualized paths\n>> in the TAP tests. If people followed the rule I suggested upthread, 99%\n>> of those problems would go away.\n> Well, that makes it sound like it's the fault of people for not\n> following the rules, but I don't think that's really a fair\n> assessment. Even your first guess as to how to solve this particular\n> problem wasn't correct, and if you can't guess right on the first try,\n> I don't know how anyone else is supposed to do it. I still don't even\n> understand why your first guess wasn't right. I feel like every time I\n> try to add a TAP test Msys blows up, and I can't figure out how to fix\n> it myself. Which is not a great feeling.\n>\n\n\nWell if we can get Andres' suggestion to work all this might go away,\nwhich would keep everyone happy, especially me. You're right that I was\na little careless upthread. Mea culpa. Meanwhile I am committing a\nminimal one line fix.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 24 Jan 2022 14:01:37 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 2:01 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Well if we can get Andres' suggestion to work all this might go away,\n> which would keep everyone happy, especially me. You're right that I was\n> a little careless upthread. Mea culpa. Meanwhile I am committing a\n> minimal one line fix.\n\nI in no way intended to accuse you of being careless. I was just\npointing out that this stuff seems to be hard to get right, even for\nsmart people.\n\nI really hate committing stuff that turns out to be broken. It's such\na fire drill when the build farm turns red.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 14:27:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 2:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I really hate committing stuff that turns out to be broken. It's such\n> a fire drill when the build farm turns red.\n\nAnd there's a good chance it's about to break again, because I just\ncommitted the next patch in the series which, shockingly, also\nincludes tests.\n\nI'd like to tell you I believe I got it right this time ... but I don't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 15:17:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/24/22 15:17, Robert Haas wrote:\n> On Mon, Jan 24, 2022 at 2:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I really hate committing stuff that turns out to be broken. It's such\n>> a fire drill when the build farm turns red.\n> And there's a good chance it's about to break again, because I just\n> committed the next patch in the series which, shockingly, also\n> includes tests.\n>\n> I'd like to tell you I believe I got it right this time ... but I don't.\n\n\n\nI'll just keep playing whackamole :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 24 Jan 2022 16:13:05 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-24 14:01:37 -0500, Andrew Dunstan wrote:\n> Well if we can get Andres' suggestion to work all this might go away,\n> which would keep everyone happy, especially me.\n\nI successfully tried it for a few tests. But I see tests hanging a lot\nindependent of the way I run the tests, presumably due to the issues discussed\nin [1]. So we need to do something about that.\n\nI don't have the cycles to finish changing over to that way of running tests -\ndo you have some time to work on it, if I clean up the bit I have?\n\n- Andres\n\n[1] https://postgr.es/m/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 13:39:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/24/22 16:39, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-24 14:01:37 -0500, Andrew Dunstan wrote:\n>> Well if we can get Andres' suggestion to work all this might go away,\n>> which would keep everyone happy, especially me.\n> I successfully tried it for a few tests. But I see tests hanging a lot\n> independent of the way I run the tests, presumably due to the issues discussed\n> in [1]. So we need to do something about that.\n>\n> I don't have the cycles to finish changing over to that way of running tests -\n> do you have some time to work on it, if I clean up the bit I have?\n>\n> - Andres\n>\n> [1] https://postgr.es/m/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n\n\n\nGive me what you can and I'll see what I can do. I have a couple of\nmoderately high priority items on my plate, but I will probably be able\nto fit in some testing when those make my eyes completely glaze over.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 24 Jan 2022 16:47:28 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-24 16:47:28 -0500, Andrew Dunstan wrote:\n> Give me what you can and I'll see what I can do. I have a couple of\n> moderately high priority items on my plate, but I will probably be able\n> to fit in some testing when those make my eyes completely glaze over.\n\nSteps:\n\n# install msys from https://www.msys2.org/\n# install dependencies in msys shell\npacman -S git bison flex make ucrt64/mingw-w64-ucrt-x86_64-perl ucrt64/mingw-w64-ucrt-x86_64-gcc ucrt64/mingw-w64-ucrt-x86_64-zlib ucrt64/mingw-w64-ucrt-x86_64-ccache diffutils\n\n\n# start mingw ucrt64 x64 shell\ncpan install -T IPC::Run\nperl -MIPC::Run -e 1 # verify ipc run is installed\n\ncd /c/dev\n# I added --reference postgres to accelerate the cloning\ngit clone https://git.postgresql.org/git/postgresql.git postgres-mingw\ncd /c/dev/postgres-mingw\n\ngit revert ed52c3707bcf8858defb0d9de4b55f5c7f18fed7\ngit revert 6051857fc953a62db318329c4ceec5f9668fd42a\n\n# apply attached patch\n\n# see below why out-of-tree is easier for now\nmkdir build\ncd build\n# path parameters probably not necessary, I thought I needed them earlier, not sure why\n../configure --without-readline --cache cache --enable-tap-tests PROVE=/ucrt64/bin/core_perl/prove PERL=/ucrt64/bin/perl.exe CC=\"ccache gcc\"\nmake -j8 -s world-bin && make -j8 -s -C src/interfaces/ecpg/test\nmake -j8 -s temp-install\n\n# pg_regress' make_temp_socketdir() otherwise picks up the wrong TMPDIR\nmkdir /c/dev/postgres-mingw/build/tmp\n\n# the TAR= ensures that tests pick up a tar accessible with a windows path\n# PG_TEST_USE_UNIX_SOCKETS=1 is required for test concurrency, otherwise there are port conflicts\n\n(make -Otarget -j12 check-world NO_TEMP_INSTALL=1 PG_TEST_USE_UNIX_SOCKETS=1 TMPDIR=C:/dev/postgres-mingw/tmp TAR=\"C:\\Windows\\System32\\tar.exe\" 2>&1 && echo test-world-success || echo test-world-fail) 2>&1 |tee test-world.log\n\n\nTo make tests in \"in-tree\" builds work, a bit more hackery would be\nneeded. The problem is that windows chooses binaries from the current working\ndirectory *before* PATH. That's a problem for things like initdb.exe or\npg_ctl.exe that want to find postgres.exe, as that only works with the program\nin their proper location, rather than CWD.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 24 Jan 2022 18:36:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 1/24/22 21:36, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-24 16:47:28 -0500, Andrew Dunstan wrote:\n>> Give me what you can and I'll see what I can do. I have a couple of\n>> moderately high priority items on my plate, but I will probably be able\n>> to fit in some testing when those make my eyes completely glaze over.\n> Steps:\n>\n> # install msys from https://www.msys2.org/\n> # install dependencies in msys shell\n> pacman -S git bison flex make ucrt64/mingw-w64-ucrt-x86_64-perl ucrt64/mingw-w64-ucrt-x86_64-gcc ucrt64/mingw-w64-ucrt-x86_64-zlib ucrt64/mingw-w64-ucrt-x86_64-ccache diffutils\n>\n>\n> # start mingw ucrt64 x64 shell\n> cpan install -T IPC::Run\n> perl -MIPC::Run -e 1 # verify ipc run is installed\n>\n> cd /c/dev\n> # I added --reference postgres to accelerate the cloning\n> git clone https://git.postgresql.org/git/postgresql.git postgres-mingw\n> cd /c/dev/postgres-mingw\n>\n> git revert ed52c3707bcf8858defb0d9de4b55f5c7f18fed7\n> git revert 6051857fc953a62db318329c4ceec5f9668fd42a\n>\n> # apply attached patch\n>\n> # see below why out-of-tree is easier for now\n> mkdir build\n> cd build\n> # path parameters probably not necessary, I thought I needed them earlier, not sure why\n> ../configure --without-readline --cache cache --enable-tap-tests PROVE=/ucrt64/bin/core_perl/prove PERL=/ucrt64/bin/perl.exe CC=\"ccache gcc\"\n> make -j8 -s world-bin && make -j8 -s -C src/interfaces/ecpg/test\n> make -j8 -s temp-install\n>\n> # pg_regress' make_temp_socketdir() otherwise picks up the wrong TMPDIR\n> mkdir /c/dev/postgres-mingw/build/tmp\n>\n> # the TAR= ensures that tests pick up a tar accessible with a windows path\n> # PG_TEST_USE_UNIX_SOCKETS=1 is required for test concurrency, otherwise there are port conflicts\n>\n> (make -Otarget -j12 check-world NO_TEMP_INSTALL=1 PG_TEST_USE_UNIX_SOCKETS=1 TMPDIR=C:/dev/postgres-mingw/tmp TAR=\"C:\\Windows\\System32\\tar.exe\" 2>&1 && echo test-world-success || echo test-world-fail) 2>&1 |tee test-world.log\n\n\n\nOK, I have all the pieces working and I know what I need to do to adapt\nfairywren. The patch you provided is not necessary any more.\n\n(I think your TMPDIR spec is missing a /build/)\n\nThe recipe worked (mutatis mutandis) for the mingw64 toolchain as well\nas for the ucrt64 toolchain. Is there a reason to prefer ucrt64?\n\nI think the next steps are:\n\n * do those two reverts\n * adjust fairywren\n * get rid of perl2host\n\nAt that stage jacana will no longer be able to run TAP tests. I can do\none of these:\n\n * disable the TAP tests on jacana\n * migrate jacana to msys2\n * kiss jacana goodbye.\n\nThoughts?\n\n\n> To make tests in \"in-tree\" builds work, a bit more hackery would be\n> needed. The problem is that windows chooses binaries from the current working\n> directory *before* PATH. That's a problem for things like initdb.exe or\n> pg_ctl.exe that want to find postgres.exe, as that only works with the program\n> in their proper location, rather than CWD.\n>\n\nYeah, we should do something about that. For example, we could possibly\nuse the new install_path option of PostgreSQL::Test::Cluster::new() so\nit would find these in the right location.\n\n\nHowever, I don't need it as my animals all use vpath builds.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Feb 2022 17:25:51 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-03 17:25:51 -0500, Andrew Dunstan wrote:\n> OK, I have all the pieces working and I know what I need to do to adapt\n> fairywren. The patch you provided is not necessary any more.\n\nCool. Are you going to post that?\n\n\n> (I think your TMPDIR spec is missing a /build/)\n\nI think I went back/forth between in-tree/out-of-tree build...\n\n\n> The recipe worked (mutatis mutandis) for the mingw64 toolchain as well\n> as for the ucrt64 toolchain. Is there a reason to prefer ucrt64?\n\nThere's a lot of oddities in the mingw64 target, due to targeting the much\nolder C runtime library (lots of bugs, missing functionality). MSVC targets\nUCRT by default for quite a few years by now. Targeting msvcrt is basically\non its way out from what I understand.\n\n\n> I think the next steps are:\n> \n> * do those two reverts\n> * adjust fairywren\n> * get rid of perl2host\n> \n> At that stage jacana will no longer be able to run TAP tests. I can do\n> one of these:\n\nI guess because its install is too old?\n\n\n> * disable the TAP tests on jacana\n> * migrate jacana to msys2\n> * kiss jacana goodbye.\n\nHaving a non-server mingw animal seems like it could be useful (I think that's\njust Jacana), even if server / client versions of windows have grown\ncloser. So I think an update to msys2 makes the most sense?\n\n\n> > To make tests in \"in-tree\" builds work, a bit more hackery would be\n> > needed. The problem is that windows chooses binaries from the current working\n> > directory *before* PATH. That's a problem for things like initdb.exe or\n> > pg_ctl.exe that want to find postgres.exe, as that only works with the program\n> > in their proper location, rather than CWD.\n\n> Yeah, we should do something about that. For example, we could possibly\n> use the new install_path option of PostgreSQL::Test::Cluster::new() so\n> it would find these in the right location.\n\nIt'd be easy enough to adjust the central invocations of initdb. I think the\nbigger problem is that there are plenty of calls to initdb, pg_ctl \"directly\" in\nthe respective test scripts.\n\n\n> However, I don't need it as my animals all use vpath builds.\n\nI think it'd be fine to just error out in non-vpath builds on msvc. The\nsearch-for-binaries behaviour is just too weird.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 3 Feb 2022 17:51:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 2/3/22 20:51, Andres Freund wrote:\n> Hi,\n>\n> On 2022-02-03 17:25:51 -0500, Andrew Dunstan wrote:\n>> OK, I have all the pieces working and I know what I need to do to adapt\n>> fairywren. The patch you provided is not necessary any more.\n> Cool. Are you going to post that?\n\n\n\nAbout the only thing missing in your recipe is this:\n\n\n# force ucrt64 prove to use the ucrt64 perl rather than whatever is in\nthe path\nsed -i 's,^#!perl,#!/ucrt64/bin/perl,' /ucrt64/bin/core_perl/prove\n\n\nGiven that, you don't need to set PERL, and configure can find the perl\nto build against from the PATH.\n\n\n\n>\n>\n> Is there a reason to prefer ucrt64?\n> There's a lot of oddities in the mingw64 target, due to targetting the much\n> older C runtime library (lots of bugs, missing functionality). MSVC targets\n> UCRT by default for quite a few years by now. Targetting msvcrt is basically\n> on its way out from what I understand.\n\n\nOK.\n\n\n>> I think the next steps are:\n>>\n>> * do those two reverts\n>> * adjust fairywren\n>> * get rid of perl2host\n>>\n>> At that stage jacana will no longer be able to run TAP tests. I can do\n>> one of these:\n> I guess because its install is too old?\n\n\nYeah. fairywren is now running with ucrt64-perl for TAP tests. \n\n\n>> * disable the TAP tests on jacana\n>> * migrate jacana to msys2\n>> * kiss jacana goodbye.\n> Having a non-server mingw animal seems like it could be useful (I think that's\n> just Jacana), even if server / client versions of windows have grown\n> closer. So I think an update to msys2 makes the most sense?\n\n\nWorking on that. There appear to be some issues with third party\nlibraries. I might need to rebuild libxml2 and zlib for example.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 14 Feb 2022 17:32:11 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "On 2022-02-14 17:32:11 -0500, Andrew Dunstan wrote:\n> Working on that. There appear to be some issues with third party\n> libraries. I might need to rebuild libxml2 and zlib for example.\n\nAny reason not to use the ones from msys2?\n\n\n",
"msg_date": "Mon, 14 Feb 2022 14:37:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-14 17:32:11 -0500, Andrew Dunstan wrote:\n> About the only thing missing in your recipe is this:\n\nRe requiring out-of-tree builds: Thomas on IM noted that there's the\nNoDefaultCurrentDirectoryInExePath environment variable. That should avoid the\nproblem leading to requiring out-of-tree builds. But I've not tested it.\n\n\n> # force ucrt64 prove to use the ucrt64 perl rather than whatever is in\n> the path\n> sed -i 's,^#!perl,#!/ucrt64/bin/perl,' /ucrt64/bin/core_perl/prove\n> \n> \n> Given that, you don't need to set PERL, and configure can find the perl\n> to build against from the PATH.\n\nThat shouldn't even be needed from what I understand now. If correctly started\nthe msys shell should have the right perl in path?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Feb 2022 15:02:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 2/14/22 18:02, Andres Freund wrote:\n> Hi,\n>\n> On 2022-02-14 17:32:11 -0500, Andrew Dunstan wrote:\n>> About the only thing missing in your recipe is this:\n> Re requiring out-of-tree builds: Thomas on IM noted that there's the\n> NoDefaultCurrentDirectoryInExePath environment variable. That should avoid the\n> problem leading to requiring out-of-tree builds. But I've not tested it.\n\n\nGood to know.\n\n\n>\n>\n>> # force ucrt64 prove to use the ucrt64 perl rather than whatever is in\n>> the path\n>> sed -i 's,^#!perl,#!/ucrt64/bin/perl,' /ucrt64/bin/core_perl/prove\n>>\n>>\n>> Given that, you don't need to set PERL, and configure can find the perl\n>> to build against from the PATH.\n> That shouldn't even be needed from what I understand now. If correctly started\n> the msys shell shoul dhave the right perl in path?\n\n\n\nFSVO \"the right perl\". However, jacana is building against a separate\ninstallation of AS perl, and I was trying to preserve that.\n\n\nFor a buildfarm animal, there is one extra package that is needed:\n\n\nperl-LWP-Protocol-https\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 15 Feb 2022 10:12:35 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
},
{
"msg_contents": "\nOn 2/14/22 17:37, Andres Freund wrote:\n> On 2022-02-14 17:32:11 -0500, Andrew Dunstan wrote:\n>> Working on that. There appear to be some issues with third party\n>> libraries. I might need to rebuild libxml2 and zlib for example.\n> Any reason not to use the ones from msys2?\n\n\n\nThat seems to work. Needed to add these 2 packages to the recipe:\n\n\nucrt64/mingw-w64-ucrt-x86_64-libxml2\n\nucrt64/mingw-w64-ucrt-x86_64-libxslt\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 15 Feb 2022 10:13:18 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren is generating bogus BASE_BACKUP commands"
}
] |
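The thread above repeatedly runs into MSYS's virtualized paths (`/c/dev/...` in the shell versus `C:/dev/...` for native binaries), and Andres proposes normalizing the few virtual paths centrally rather than sprinkling `perl2host` and `MSYS2_ARG_CONV_EXCL` through the tests. As a rough illustration only, the sketch below models that drive-rooted normalization in Python; the function name is invented, and PostgreSQL's real `perl2host` also handles msys mount points and symlinks, which this deliberately ignores.

```python
import re

def msys_to_windows_path(path: str) -> str:
    """Map an MSYS drive-rooted virtual path such as /c/dev/postgres to the
    native Windows form C:/dev/postgres. Paths that are not drive-rooted
    (already-native paths, relative paths, msys-internal paths like
    /ucrt64/bin) are returned unchanged."""
    m = re.match(r"^/([A-Za-z])(/.*)?$", path)
    if m:
        drive = m.group(1).upper()
        rest = m.group(2) or ""
        return f"{drive}:{rest}"
    return path

if __name__ == "__main__":
    for p in ("/c/dev/postgres-mingw", "C:/dev/postgres-mingw", "/ucrt64/bin/perl"):
        print(p, "->", msys_to_windows_path(p))
```

Doing this once, centrally, for the handful of virtualized paths the tests are invoked with is the maintainability win being argued for: individual test scripts would then never see a virtual path at all.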
[
{
"msg_contents": "In fa66b6dee, Michael fixed test.sh to work back to v11, so I suppose nobody is\ntrying to run it with older versions, as I endeavored to do.\n\nWith the attached patch, I'm able to test upgrades back to v9.6.\n\nIn 9.5, there are regression diffs from CONTEXT lines from non-error messages,\nwhich is a v9.5 change (0426f349e). The first \"make check\" fails before even\ngetting to the upgrade part:\n\n| NOTICE: trigger_func(before_ins_stmt) called: action = INSERT, when = BEFORE, level = STATEMENT\n|- CONTEXT: SQL statement \"INSERT INTO main_table VALUES (NEW.a, NEW.b)\"\n|- PL/pgSQL function view_trigger() line 17 at SQL statement\n\nI tried a lot of things but couldn't find one that worked. I just tried this,\nwhich allows the \"make check\" to pass, but then fails due to missing symbols in\nlibpq during the upgrade phase. Maybe I'm missing something - Tom must have\ntested psql against old versions somehow before de-supporting old versions (but\nmaybe not like this).\n\n| time make check -C src/bin/pg_upgrade oldsrc=`pwd`/new/95 oldbindir=`pwd`/new/95/tmp_install/usr/local/pgsql/bin with_temp_install=\"LD_LIBRARY_PATH=`pwd`/new/95/tmp_install/usr/local/pgsql/lib\"\n\nI tried installcheck, but then that fails because psql doesn't accept multiple\n-c options (it runs the final -c command only).\n\n| EXTRA_REGRESS_OPTS=\"--bindir `pwd`/new/95/tmp_install/usr/local/pgsql/bin\" LD_LIBRARY_PATH=`pwd`/new/95/tmp_install/usr/local/pgsql/lib PGHOST=/tmp time make installcheck\n| ...\n| ============== creating database \"regression\" ==============\n| ERROR: database \"regression\" does not exist\n| STATEMENT: ALTER DATABASE \"regression\" SET lc_messages TO 'C';ALTER DATABASE \"regression\" SET lc_monetary TO 'C';ALTER DATABASE \"regression\" SET lc_numeric TO 'C';ALTER DATABASE \"regression\" SET lc_time TO 'C';ALTER DATABASE \"regression\" SET bytea_output TO 'hex';ALTER DATABASE \"regression\" SET timezone_abbreviations TO 'Default';\n| ERROR: database \"regression\" does not exist\n| command failed: \"/home/pryzbyj/src/postgres/new/95/tmp_install/usr/local/pgsql/bin/psql\" -X -c \"CREATE DATABASE \\\"regression\\\" TEMPLATE=template0\" -c \"ALTER DATABASE \\\"regression\\\" SET lc_messages TO 'C';ALTER DATABASE \\\"regression\\\" SET lc_monetary TO 'C';ALTER DATABASE \\\"regression\\\" SET lc_numeric TO 'C';ALTER DATABASE \\\"regression\\\" SET lc_time TO 'C';ALTER DATABASE \\\"regression\\\" SET bytea_output TO 'hex';ALTER DATABASE \\\"regression\\\" SET timezone_abbreviations TO 'Default';\" \"postgres\"\n\npg_regress was changed to do that recently:\n\ncommit f45dc59a38cab1d2af6baaedb79559fe2e9b3781\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Oct 20 18:44:37 2021 -0400\n\n Improve pg_regress.c's infrastructure for issuing psql commands.\n\n-- \nJustin",
"msg_date": "Sat, 22 Jan 2022 12:37:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade/test.sh and v9.5"
}
] |
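The installcheck failure above comes from pg_regress (after commit f45dc59a3) passing several `-c` options to psql, while a 9.5-era psql silently executes only the last `-c` it is given. As a hypothetical sketch (the helper name is invented, not pg_regress code), a driver that wanted to keep supporting old clients could fall back to one invocation per command — note that simply joining the commands with `;` into a single `-c` is not a safe fallback, since a multi-statement simple query runs in one implicit transaction and `CREATE DATABASE` cannot run inside a transaction block:

```python
def build_psql_invocations(sql_commands, supports_multiple_c):
    """Return a list of psql argv lists that run sql_commands in order.

    Modern psql executes every -c option in sequence, so one invocation
    carrying all the commands suffices. Older psql runs only the final
    -c, so each command must become its own invocation.
    """
    if supports_multiple_c:
        return [["psql", "-X"] + [arg for sql in sql_commands
                                  for arg in ("-c", sql)]]
    return [["psql", "-X", "-c", sql] for sql in sql_commands]
```

Under this sketch, the failing `CREATE DATABASE` + `ALTER DATABASE` sequence would become two separate psql runs against the old client instead of one command being dropped on the floor.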
[
{
"msg_contents": "Hi,\n\nThere's a bug in ProcArrayApplyRecoveryInfo, introduced by 8431e296ea, \nwhich may cause failures when starting a replica, making it unusable. \nThe commit message for 8431e296ea is not very clear about what exactly \nis being done and why, but the root cause is that while processing \nRUNNING_XACTS, the XIDs are sorted like this:\n\n /*\n * Sort the array so that we can add them safely into\n * KnownAssignedXids.\n */\n qsort(xids, nxids, sizeof(TransactionId), xidComparator);\n\nwhere \"safely\" likely means \"not violating the ordering expected by \nKnownAssignedXidsAdd\". Unfortunately, xidComparator compares the values \nas plain uint32 values, while KnownAssignedXidsAdd actually calls \nTransactionIdFollowsOrEquals() and compares the logical XIDs :-(\n\nTriggering this is pretty simple - all you need is two transactions with \nXIDs before/after the 4B limit, and then (re)start a replica. The \nreplica refuses to start with a message like this:\n\n LOG: 9 KnownAssignedXids (num=4 tail=0 head=4) [0]=32705 [1]=32706\n [2]=32707 [3]=32708\n CONTEXT: WAL redo at 0/6000120 for Standby/RUNNING_XACTS: nextXid\n 32715 latestCompletedXid 32714 oldestRunningXid\n 4294967001; 8 xacts: 32708 32707 32706 32705 4294967009\n 4294967008 4294967007 4294967006\n FATAL: out-of-order XID insertion in KnownAssignedXids\n\nClearly, we add the 4 \"younger\" XIDs first (because that's what the XID \ncomparator does), but then KnownAssignedXidsAdd thinks there's some sort \nof corruption because logically 4294967006 is older.\n\nThis does not affect replicas in STANDBY_SNAPSHOT_READY state, because \nin that case ProcArrayApplyRecoveryInfo ignores RUNNING_XACTS messages.\n\n\nThe probability of hitting this in practice is proportional to how long \nyou leave transactions running. The system where we observed this leaves \ntransactions with XIDs open for days, and the age may be ~40M. \nIntuitively, that's ~40M/4B (=1%) probability that at any given time \nthere are transactions with contradicting ordering. So most restarts \nworked fine, until one that happened at just the \"right\" time.\n\nThis likely explains why we never got any reports about this - most \nsystems probably don't leave transactions running for this long, so the \nprobability is much lower. And replica restarts are generally not that \ncommon events either.\n\nAttached patch is fixing this by just sorting the XIDs logically. The \nxidComparator is meant for places that can't do logical ordering. But \nthese XIDs come from RUNNING_XACTS, so they actually come from the same \nwraparound epoch (so sorting logically seems perfectly fine).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 23 Jan 2022 01:42:47 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Bug in ProcArrayApplyRecoveryInfo for snapshots crossing 4B, breaking\n replicas"
},
{
"msg_contents": "On 1/22/22, 4:43 PM, \"Tomas Vondra\" <tomas.vondra@enterprisedb.com> wrote:\r\n> There's a bug in ProcArrayApplyRecoveryInfo, introduced by 8431e296ea,\r\n> which may cause failures when starting a replica, making it unusable.\r\n> The commit message for 8431e296ea is not very clear about what exactly\r\n> is being done and why, but the root cause is that at while processing\r\n> RUNNING_XACTS, the XIDs are sorted like this:\r\n>\r\n> /*\r\n> * Sort the array so that we can add them safely into\r\n> * KnownAssignedXids.\r\n> */\r\n> qsort(xids, nxids, sizeof(TransactionId), xidComparator);\r\n>\r\n> where \"safely\" likely means \"not violating the ordering expected by\r\n> KnownAssignedXidsAdd\". Unfortunately, xidComparator compares the values\r\n> as plain uint32 values, while KnownAssignedXidsAdd actually calls\r\n> TransactionIdFollowsOrEquals() and compares the logical XIDs :-(\r\n\r\nWow, nice find.\r\n\r\n> This likely explains why we never got any reports about this - most\r\n> systems probably don't leave transactions running for this long, so the\r\n> probability is much lower. And replica restarts are generally not that\r\n> common events either.\r\n\r\nI'm aware of one report with the same message [0], but I haven't read\r\nclosely enough to determine whether it is the same issue. It looks\r\nlike that particular report was attributed to backup_label being\r\nremoved.\r\n\r\n> Attached patch is fixing this by just sorting the XIDs logically. The\r\n> xidComparator is meant for places that can't do logical ordering. But\r\n> these XIDs come from RUNNING_XACTS, so they actually come from the same\r\n> wraparound epoch (so sorting logically seems perfectly fine).\r\n\r\nThe patch looks reasonable to me.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/1476795473014.15979.2188%40webmail4\r\n\r\n",
"msg_date": "Mon, 24 Jan 2022 21:28:43 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in ProcArrayApplyRecoveryInfo for snapshots crossing 4B,\n breaking replicas"
},
{
"msg_contents": "On 1/24/22 22:28, Bossart, Nathan wrote:\n> On 1/22/22, 4:43 PM, \"Tomas Vondra\" <tomas.vondra@enterprisedb.com> wrote:\n>> There's a bug in ProcArrayApplyRecoveryInfo, introduced by 8431e296ea,\n>> which may cause failures when starting a replica, making it unusable.\n>> The commit message for 8431e296ea is not very clear about what exactly\n>> is being done and why, but the root cause is that at while processing\n>> RUNNING_XACTS, the XIDs are sorted like this:\n>>\n>> /*\n>> * Sort the array so that we can add them safely into\n>> * KnownAssignedXids.\n>> */\n>> qsort(xids, nxids, sizeof(TransactionId), xidComparator);\n>>\n>> where \"safely\" likely means \"not violating the ordering expected by\n>> KnownAssignedXidsAdd\". Unfortunately, xidComparator compares the values\n>> as plain uint32 values, while KnownAssignedXidsAdd actually calls\n>> TransactionIdFollowsOrEquals() and compares the logical XIDs :-(\n> \n> Wow, nice find.\n> \n>> This likely explains why we never got any reports about this - most\n>> systems probably don't leave transactions running for this long, so the\n>> probability is much lower. And replica restarts are generally not that\n>> common events either.\n> \n> I'm aware of one report with the same message [0], but I haven't read\n> closely enough to determine whether it is the same issue. It looks\n> like that particular report was attributed to backup_label being\n> removed.\n> \n\nYeah, I saw that thread too, and I don't think it's the same issue. As \nyou say, it seems to be caused by the backup_label shenanigans, and \nthere's also the RUNNING_XACTS message:\n\nSep 20 15:00:27 ... 
CONTEXT: xlog redo Standby/RUNNING_XACTS: nextXid \n38585 latestCompletedXid 38571 oldestRunningXid 38572; 14 xacts: 38573 \n38575 38579 38578 38574 38581 38580 38576 38577 38572 38582 38584 38583 \n38583\n\nThe XIDs don't cross the 4B boundary at all, so this seems unrelated.\n\n\n>> Attached patch is fixing this by just sorting the XIDs logically. The\n>> xidComparator is meant for places that can't do logical ordering. But\n>> these XIDs come from RUNNING_XACTS, so they actually come from the same\n>> wraparound epoch (so sorting logically seems perfectly fine).\n> \n> The patch looks reasonable to me.\n> \n\nThanks!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jan 2022 22:45:48 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in ProcArrayApplyRecoveryInfo for snapshots crossing 4B,\n breaking replicas"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 10:45:48PM +0100, Tomas Vondra wrote:\n> On 1/24/22 22:28, Bossart, Nathan wrote:\n>>> Attached patch is fixing this by just sorting the XIDs logically. The\n>>> xidComparator is meant for places that can't do logical ordering. But\n>>> these XIDs come from RUNNING_XACTS, so they actually come from the same\n>>> wraparound epoch (so sorting logically seems perfectly fine).\n>> \n>> The patch looks reasonable to me.\n> \n> Thanks!\n\nCould it be possible to add a TAP test? One idea would be to rely on\npg_resetwal -x and -e close to the 4B limit to set up a node before \nstressing the scenario of this bug, so that would be rather cheap.\n--\nMichael",
"msg_date": "Tue, 25 Jan 2022 12:25:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug in ProcArrayApplyRecoveryInfo for snapshots crossing 4B,\n breaking replicas"
},
{
"msg_contents": "On 1/25/22 04:25, Michael Paquier wrote:\n> On Mon, Jan 24, 2022 at 10:45:48PM +0100, Tomas Vondra wrote:\n>> On 1/24/22 22:28, Bossart, Nathan wrote:\n>>>> Attached patch is fixing this by just sorting the XIDs logically. The\n>>>> xidComparator is meant for places that can't do logical ordering. But\n>>>> these XIDs come from RUNNING_XACTS, so they actually come from the same\n>>>> wraparound epoch (so sorting logically seems perfectly fine).\n>>>\n>>> The patch looks reasonable to me.\n>>\n>> Thanks!\n> \n> Could it be possible to add a TAP test? One idea would be to rely on\n> pg_resetwal -x and -e close to the 4B limit to set up a node before\n> stressing the scenario of this bug, so that would be rather cheap.\n\nI actually tried doing that, but I was not very happy with the result. \nThe test has to call pg_resetwal, but then it also has to fake pg_xact \ndata and so on, which seemed a bit ugly so did not include the test in \nthe patch.\n\nBut maybe there's a better way to do this, so here it is. I've kept it \nseparately, so that it's possible to apply it without the fix, to verify \nit actually triggers the issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 26 Jan 2022 19:31:00 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in ProcArrayApplyRecoveryInfo for snapshots crossing 4B,\n breaking replicas"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 07:31:00PM +0100, Tomas Vondra wrote:\n> I actually tried doing that, but I was not very happy with the result. The\n> test has to call pg_resetwal, but then it also has to fake pg_xact data and\n> so on, which seemed a bit ugly so did not include the test in the patch.\n\nIndeed, the dependency to /dev/zero is not good either. The patch\nlogic looks good to me.\n--\nMichael",
"msg_date": "Thu, 27 Jan 2022 07:54:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug in ProcArrayApplyRecoveryInfo for snapshots crossing 4B,\n breaking replicas"
},
{
"msg_contents": "On 1/26/22 23:54, Michael Paquier wrote:\n> On Wed, Jan 26, 2022 at 07:31:00PM +0100, Tomas Vondra wrote:\n>> I actually tried doing that, but I was not very happy with the result. The\n>> test has to call pg_resetwal, but then it also has to fake pg_xact data and\n>> so on, which seemed a bit ugly so did not include the test in the patch.\n> \n> Indeed, the dependency to /dev/zero is not good either. The patch\n> logic looks good to me.\n\nOK, I've pushed the patch. We may consider adding a TAP test later, if \nwe find a reasonably clean approach.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 27 Jan 2022 20:33:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in ProcArrayApplyRecoveryInfo for snapshots crossing 4B,\n breaking replicas"
}
] |
[
{
"msg_contents": "On a multi tenant server, with hundreds of schemas with same structure, I\nhave an audit table shared with all of them. When any record is deleted I\nadd on this table tenant, table and PK values, just that. Something like\nthis:\n\ndrop table if exists audit;\ncreate table audit(id serial primary key,\ncustomer_schema text, --here is the problem, a text column.\ntable_name text,\nins_datetime timestamp default current_timestamp,\npk integer);\n\n--An index for searching\ndrop index if exists public.audit_customer_table_datetime;\ncreate index audit_customer_table_datetime on\naudit(customer_schema,table_name,ins_datetime);\n\n--A trigger to insert when a customer deletes a record\ncreate function table_delete() returns trigger language plpgsql as $$ begin\ninsert into audit(customer_schema, table_name, pk)\nselect tg_table_schema, tg_table_name,\n(row_to_json(OLD.*)->>(tg_argv[0]))::bigint; return old; end;\n\n--And now I insert some records for testing. My table has some millions,\nbut for now I´m inserting 100.000 only.\ninsert into audit(customer_schema,table_name,ins_datetime,pk)\nselect customer_schema, table_name, current_timestamp +\n(rn||'seconds')::interval, random()*50000 from generate_series(1,5) as g(g)\ninner join (select row_number() over () * random() rn, relname,\nrelnamespace::regnamespace::text\nfrom pg_class where relkind = 'r' and relnamespace::regnamespace::text !~\n'pg_|information_schema') x(rn, customer_schema, table_name) on true;\n\nUntil version 11 my select was using that index correctly. 
Then I´ve\nupgraded to 14.1, then ...\n\n--Application sets search_path to a schema.\nset search_path to cust_0298, public;\n\nexplain analyze select customer_schema, pk from audit where customer_schema\n= current_schema and table_name =\nany('{this_table,that_table,other_table,just_table,more_one_table,last_table}'::text[])\nand ins_datetime > '2022/01/22 10:00';\nQUERY PLAN\nGather (cost=1000.00..4167.30 rows=14 width=4) (actual time=24.178..27.117\nrows=0 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n -> Parallel Seq Scan on audit (cost=0.00..3165.90 rows=8 width=4)\n(actual time=21.909..21.909 rows=0 loops=2)\n Filter: ((ins_datetime > '2022-01-22 10:00:00'::timestamp without\ntime zone) AND (customer_schema = CURRENT_SCHEMA) AND (table_name = ANY\n('{this_table,that_table,other_table,just_table,more_one_table,last_table}'::text[])))\n Rows Removed by Filter: 66262\nPlanning Time: 0.105 ms\nExecution Time: 27.135 ms\n\nhmm, did not use that index. Tried casting current_schema or trying any\nfunction which returns text but has no effect.\nwhere customer_schema = Current_Schema::text\nwhere customer_schema = substring(current_schema from 1 for 50)\nwhere customer_schema = Left(current_schema,50)\n\nThe only way I have success to use that index was when I tried\nwhere customer_schema = split_part(current_setting('search_path'),',',1)\nQUERY PLAN\nBitmap Heap Scan on audit (cost=26.68..78.56 rows=14 width=4) (actual\ntime=0.043..0.043 rows=0 loops=1)\n Recheck Cond: ((customer_schema =\nsplit_part(current_setting('search_path'::text), ','::text, 1)) AND\n(table_name = ANY\n('{this_table,that_table,other_table,just_table,more_one_table,last_table}'::text[]))\nAND (ins_datetime > '2022-01-22 10:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on audit_customer_table_datetime (cost=0.00..26.67\nrows=14 width=0) (actual time=0.041..0.041 rows=0 loops=1)\n Index Cond: ((customer_schema =\nsplit_part(current_setting('search_path'::text), ','::text, 1)) 
AND\n(table_name = ANY\n('{this_table,that_table,other_table,just_table,more_one_table,last_table}'::text[]))\nAND (ins_datetime > '2022-01-22 10:00:00'::timestamp without time zone))\nPlanning Time: 0.111 ms\nExecution Time: 0.065 ms\n\nSo, not using Current_Schema but getting it with current_setting function.\n\nAnd as last test, yes, if I change type of that column, then index is used\nwith my initial query\nalter table audit alter customer_schema type name;\n\nSo, what was changed with current_schema ?",
"msg_date": "Sun, 23 Jan 2022 11:00:12 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "current_schema will not use an text index ?"
},
{
"msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> customer_schema text, --here is the problem, a text column.\n\n> Until version 11 my select was using that index correctly. Then I´ve\n> upgraded to 14.1, then ...\n\n> explain analyze select customer_schema, pk from audit where customer_schema\n> = current_schema and table_name =\n\n\"current_schema\" is nowadays considered to have C collation, which is\nappropriate for comparisons to columns in the system catalogs. But that\ncauses your \"customer_schema = current_schema\" comparison to resolve as\nhaving C input collation, which doesn't match the collation of your index\non customer_schema. You could either change the query to look like\n\nwhere customer_schema = current_schema collate \"default\" and ...\n\nor else change the table so that customer_schema has \"C\" collation.\n\nThe reason the behavior changed is that we're less cavalier about\nthe collation of type \"name\" than we used to be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Jan 2022 10:16:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: current_schema will not use an text index ?"
}
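The two fixes suggested in the reply above can be sketched in SQL (a hypothetical sketch against the audit table from the first message; only the COLLATE "default" clause is verbatim from the thread):

```sql
-- Option 1: override the comparison's collation so it matches the
-- index on customer_schema (which was built with collation "default"):
SELECT customer_schema, pk
FROM audit
WHERE customer_schema = current_schema COLLATE "default"
  AND ins_datetime > '2022-01-22 10:00';

-- Option 2: give the column "C" collation, matching what current_schema
-- (type name) now resolves to:
ALTER TABLE audit
  ALTER COLUMN customer_schema TYPE text COLLATE "C";
```

Option 2 rewrites the table, and PostgreSQL rebuilds the index on customer_schema as part of the column change, so the index collation stays in sync with the column.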
] |
[
{
"msg_contents": "While chasing something else, I was surprised to learn that the\nAutoconf project has started to make releases again. There are\n2.70 (2020-12-08) and 2.71 (2021-01-28) versions available at\nhttps://ftp.gnu.org/gnu/autoconf/\n\nRight now, I'm not sure we care; there seems to be more\nenthusiasm for switching to meson. But if that idea falls\nthrough, we should update to a newer autoconf release.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Jan 2022 11:29:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On Sun, Jan 23, 2022, at 17:29, Tom Lane wrote:\n>While chasing something else, I was surprised to learn that the\n>Autoconf project has started to make releases again. There are\n>2.70 (2020-12-08) and 2.71 (2021-01-28) versions available at\n>https://ftp.gnu.org/gnu/autoconf/\n>\n>Right now, I'm not sure we care; there seems to be more\n>enthusiasm for switching to meson. But if that idea falls\n>through, we should update to a newer autoconf release.\n\nSpeaking of autoconf,\n\nI don't have much experience in this area, but I noted there is\nan AC_CACHE_SAVE feature to speed up rerunning ./configure,\nnecessary when it stops with an error due to some missing dependency.\n\nIs there a good reason why AC_CACHE_SAVE is not used?\n\n/Joel",
"msg_date": "Sun, 23 Jan 2022 19:13:41 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> I don't have much experience in this area, but I noted there is\n> an AC_CACHE_SAVE feature to speed up rerunning ./configure,\n> necessary when it stops with an error due to some missing dependency.\n> Is there a good reason why AC_CACHE_SAVE is not used?\n\nDunno ... it looks like that adds cycles to non-error cases,\nwhich seems like optimizing for the wrong thing.\n\nIn any case, right at the moment is probably a bad time to be\nworking on improvements for configure per se. We can come\nback to this if the meson idea crashes and burns.\n\n\t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Sun, 23 Jan 2022 13:35:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On 23.01.22 17:29, Tom Lane wrote:\n> While chasing something else, I was surprised to learn that the\n> Autoconf project has started to make releases again. There are\n> 2.70 (2020-12-08) and 2.71 (2021-01-28) versions available at\n> https://ftp.gnu.org/gnu/autoconf/\n\nI have patches ready for this at \nhttps://github.com/petere/postgresql/tree/autoconf-updates.\n\nMy thinking was to wait until Autoconf 2.71 has trickled down into the \nOS versions that developers are likely to use. To survey that, I'm tracking\n\nhttps://packages.debian.org/sid/autoconf [in testing]\nhttps://packages.ubuntu.com/search?keywords=autoconf [in jammy, will be \n22.04 LTS]\nhttps://src.fedoraproject.org/rpms/autoconf [in Fedora 36, planned \n2022-04-19]\nhttps://formulae.brew.sh/formula/autoconf [done]\n\nCurrently, I think early PG16 might be good time to do this update.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 09:11:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-23 11:29:17 -0500, Tom Lane wrote:\n> Right now, I'm not sure we care; there seems to be more\n> enthusiasm for switching to meson. But if that idea falls\n> through, we should update to a newer autoconf release.\n\nDepending on the number of portability fixes in those releases the\nbackbranches could be reason enough to move to a newer autoconf, even if we\nget to meson in HEAD? Of course only if there's more things fixed than\nbroken...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Jan 2022 00:17:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I have patches ready for this at \n> https://github.com/petere/postgresql/tree/autoconf-updates.\n> My thinking was to wait until Autoconf 2.71 has trickled down into the \n> OS versions that developers are likely to use.\n\nI find that kind of irrelevant, because we expect people to install\nautoconf from source anyway to avoid distro-specific behavior.\nI suppose that waiting for it to get out into the wild might be good\nfrom the standpoint of being sure it's bug-free, though.\n\nDo these versions fix any bugs that affect us (i.e., that we've\nnot already created workarounds for)?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jan 2022 09:14:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On 24.01.22 15:14, Tom Lane wrote:\n> Do these versions fix any bugs that affect us (i.e., that we've\n> not already created workarounds for)?\n\nThe only thing that could be of interest is that the workaround we are \ncarrying in config/check_decls.m4 was originally upstreamed by Noah, but \nwas then later partially reverted and replaced by a different solution. \nFurther explanation is here:\n\nhttps://git.savannah.gnu.org/cgit/autoconf.git/commit/?id=ec90049dfcf4538750e61d675d885157fa5ca7f8\n\nI don't think it has affected us in practice, though.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 16:58:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On 24.01.22 09:11, Peter Eisentraut wrote:\n> On 23.01.22 17:29, Tom Lane wrote:\n>> While chasing something else, I was surprised to learn that the\n>> Autoconf project has started to make releases again. There are\n>> 2.70 (2020-12-08) and 2.71 (2021-01-28) versions available at\n>> https://ftp.gnu.org/gnu/autoconf/\n> \n> I have patches ready for this at \n> https://github.com/petere/postgresql/tree/autoconf-updates.\n\nI have updated this for 16devel and registered it in the commit fest.\n\nTo summarize:\n\n- Autoconf 2.71 has been out for 1.5 years.\n- It is available in many recently updated OSs.\n- It allows us to throw away several workarounds.\n\nAlso:\n\n- The created configure appears to be a bit faster, especially in the \ncached case.\n- It supports checks for C11 features, which is something we might want \nto consider in the fullness of time.\n\nHence:\n\n> Currently, I think early PG16 might be good time to do this update.\n\n\n",
"msg_date": "Thu, 30 Jun 2022 18:52:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> To summarize:\n> - Autoconf 2.71 has been out for 1.5 years.\n> - It is available in many recently updated OSs.\n> - It allows us to throw away several workarounds.\n> Hence:\n>> Currently, I think early PG16 might be good time to do this update.\n\nIn preparation for reviewing this, I tried to install autoconf 2.71\nfrom source locally. All went well on my RHEL8 workstation, but\nautoconf's testsuite falls over rather badly on my macOS laptop [1].\nIt fails differently on another Mac where I have a MacPorts\ninstallation at the head of the search path [2].\n\nAfter sending the requested reports, I tried scanning the bug-autoconf\narchives, and found a similar report that was answered thus [3]:\n\n> I *think* this is the same problem as https://savannah.gnu.org/support/?110492 \n> : current Autoconf doesn't work correctly with the (rather old) version of GNU \n> M4 that ships with MacOS. Please try installing a current version of GNU M4 in \n> your PATH and then retry the build and testsuite.\n\nSo that explains part of it: most of the failures are down to using\nApple's hoary m4 instead of the one from MacPorts. We could usefully\nwarn about that in our own docs, perhaps. But there's still these\nscary failures:\n\n509: AC_CHECK_HEADER_STDBOOL FAILED (acheaders.at:9)\n514: AC_HEADER_STDBOOL FAILED (acheaders.at:14)\n\nThe generated autoconf program builds the same output files as you have\nin your patch, and running the configure script gives the correct answer\nfrom AC_HEADER_STDBOOL, so I'm not sure what these test failures are\nunhappy about. Still, this is not a good look for a mainstream\ndevelopment platform. 
I wonder if we ought to wait for a fix.\n\n\t\t\tregards, tom lane\n\n[1] https://lists.gnu.org/archive/html/bug-autoconf/2022-07/msg00000.html\n[2] https://lists.gnu.org/archive/html/bug-autoconf/2022-07/msg00001.html\n[3] https://lists.gnu.org/archive/html/bug-autoconf/2022-04/msg00002.html\n\n\n",
"msg_date": "Sat, 02 Jul 2022 12:11:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "I wrote:\n> So that explains part of it: most of the failures are down to using\n> Apple's hoary m4 instead of the one from MacPorts. We could usefully\n> warn about that in our own docs, perhaps.\n\nHmm. I have just spent a very frustrating hour trying, and failing,\nto build any version of GNU m4 from source on either RHEL8 or current\nmacOS. I don't quite understand why: neither the RPM specfile nor\nthe MacPorts recipe for their respective m4 packages seem to contain\nany special hacks, so that it looks like the usual \"configure; make;\nmake check; make install\" procedure ought to work fine. But it doesn't.\nI hit build failures (apparently because the source code is far too much\nin bed with nonstandard aspects of libc), or get an executable that\nSIGABRT's instantly, or if it doesn't do that it still fails some\nself-tests. With the latest 1.4.19 on macOS, the configure script\nhangs up, for crissakes.\n\nI am now feeling *very* hesitant about doing anything where we might\nbe effectively asking people to build m4 for themselves.\n\nOn the whole, I'm questioning the value of messing with our autoconf\ninfrastructure at this stage. We did agree at PGCon that we'd keep\nit going for a couple years more, but it's not real clear to me why\nwe can't limp along with 2.69 until we decide to drop it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Jul 2022 13:42:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> On the whole, I'm questioning the value of messing with our autoconf\n> infrastructure at this stage. We did agree at PGCon that we'd keep\n> it going for a couple years more, but it's not real clear to me why\n> we can't limp along with 2.69 until we decide to drop it.\n\nIf building it on macOS is going to be annoying, then -1 from me for\nupgrading to a new version until that's resolved.\n\nHmm, I also don't know how annoying it's going to be to get the new\nninja/meson stuff working on macOS ... I really hope someone puts a\ngood set of directions on the wiki or in the documentation or\nsomeplace.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 3 Jul 2022 10:41:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Hmm, I also don't know how annoying it's going to be to get the new\n> ninja/meson stuff working on macOS ... I really hope someone puts a\n> good set of directions on the wiki or in the documentation or\n> someplace.\n\nIf you use MacPorts it's just \"install those packages\", and I imagine\nthe same for Homebrew. I've not tried build-from-source on modern\nplatforms.\n\nOne thing I think we lack data on is whether we're going to need a\npolicy similar to everyone-must-use-exactly-this-autoconf-version.\nIf we do, that will greatly raise the importance of building from\nsource.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Jul 2022 10:50:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-03 10:50:49 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Hmm, I also don't know how annoying it's going to be to get the new\n> > ninja/meson stuff working on macOS ... I really hope someone puts a\n> > good set of directions on the wiki or in the documentation or\n> > someplace.\n\nYea, I guess I should start a documentation section...\n\nI've only used homebrew on mac, but with that it should be something along the\nlines of\n\nbrew install meson\nmeson setup --buildtype debug -Dcassert=true build-directory\ncd build-directory\nninja\n\nof course if you want to build against some dependencies and / or run tap\ntests, you need to do something similar to what you have to do for\nconfigure. I.e.\n- install perl modules [1]\n- tell the build about location of homebrew [2]\n\n\n> If you use MacPorts it's just \"install those packages\", and I imagine\n> the same for Homebrew. I've not tried build-from-source on modern\n> platforms.\n\nI've done some semi automated testing (to be turned fully automatic) across\nmeson versions that didn't so far show any need for that. We do require a\ncertain minimum version of meson (indicated in the top-level meson.build,\nraises an error if not met), which in turn requires a minimum version of ninja\n(also errors).\n\nThe windows build with msbuild is slower on older versions of meson that are\nunproblematic on other platforms. But given you're not going to install an\noutdated meson from $package-manager there, I don't think it's worth worrying\nabout.\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/postgres/blob/meson/.cirrus.yml#L638\n[2] https://github.com/anarazel/postgres/blob/meson/.cirrus.yml#L742\n\n\n",
"msg_date": "Sun, 3 Jul 2022 10:17:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On Sun, Jul 3, 2022 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea, I guess I should start a documentation section...\n>\n> I've only used homebrew on mac, but with that it should be something along the\n> lines of\n>\n> brew install meson\n> meson setup --buildtype debug -Dcassert=true build-directory\n> cd build-directory\n> ninja\n>\n> of course if you want to build against some dependencies and / or run tap\n> tests, you need to do something similar to what you have to do for\n> configure. I.e.\n> - install perl modules [1]\n> - tell the build about location of homebrew [2]\n\nSince I'm a macports user I hope at some point we'll have directions\nfor that as well as for homebrew.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:42:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-05 14:42:03 -0400, Robert Haas wrote:\n> On Sun, Jul 3, 2022 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yea, I guess I should start a documentation section...\n> >\n> > I've only used homebrew on mac, but with that it should be something along the\n> > lines of\n> >\n> > brew install meson\n> > meson setup --buildtype debug -Dcassert=true build-directory\n> > cd build-directory\n> > ninja\n> >\n> > of course if you want to build against some dependencies and / or run tap\n> > tests, you need to do something similar to what you have to do for\n> > configure. I.e.\n> > - install perl modules [1]\n> > - tell the build about location of homebrew [2]\n> \n> Since I'm a macports user I hope at some point we'll have directions\n> for that as well as for homebrew.\n\nI am not a normal mac user, it looks hard to run macos in a VM, and I'm not\nsure it's wise to mix macports and homebrew on my test box. So I don't want to\ntest it myself.\n\nBut it looks like it's just\n sudo port install meson\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Jul 2022 11:47:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-05 14:42:03 -0400, Robert Haas wrote:\n>> Since I'm a macports user I hope at some point we'll have directions\n>> for that as well as for homebrew.\n\n> But it looks like it's just\n> sudo port install meson\n\nYeah, that's what I did to install it locally. The ninja package\nhas some weird name (ninja-build or some such), but you don't have\nto remember that because installing meson is enough to pull it in.\n\nI dunno anything about the other steps Andres mentioned, but\npresumably they're independent of where you got meson from.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 14:52:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 2:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-07-05 14:42:03 -0400, Robert Haas wrote:\n> >> Since I'm a macports user I hope at some point we'll have directions\n> >> for that as well as for homebrew.\n>\n> > But it looks like it's just\n> > sudo port install meson\n>\n> Yeah, that's what I did to install it locally. The ninja package\n> has some weird name (ninja-build or some such), but you don't have\n> to remember that because installing meson is enough to pull it in.\n>\n> I dunno anything about the other steps Andres mentioned, but\n> presumably they're independent of where you got meson from.\n\nThat seems simple enough that even I can handle it!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:52:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-05 14:52:05 -0400, Tom Lane wrote:\n> I dunno anything about the other steps Andres mentioned, but\n> presumably they're independent of where you got meson from.\n\nYea. They might not be independent of where you get other dependencies from\nthough. Does macports install headers / libraries into a path that's found by\ndefault? Or does one have to pass --with-includes / --with-libs to configure\nand set PKG_CONFIG_PATH, like with homebrew?\n\nExcept that with meson doing PKG_CONFIG_PATH should suffice for most (all?)\ndependencies on macos, and that the syntax for with-includes/libs is a bit\ndifferent (-Dextra_include_dirs=... and -Dextra_lib_dirs=...) and that\noptionally one can use a parameter (--pkg-config-path) instead of\nPKG_CONFIG_PATH, that part shouldn't really differ from what's neccesary\nfor configure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Jul 2022 12:02:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Yea. They might not be independent of where you get other dependencies from\n> though. Does macports install headers / libraries into a path that's found by\n> default? Or does one have to pass --with-includes / --with-libs to configure\n> and set PKG_CONFIG_PATH, like with homebrew?\n\nWhat are you expecting to need PKG_CONFIG_PATH for? Or more precisely,\nwhy would meson/ninja create any new need for that that doesn't exist\nin the autoconf case?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 15:06:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-05 15:06:31 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Yea. They might not be independent of where you get other dependencies from\n> > though. Does macports install headers / libraries into a path that's found by\n> > default? Or does one have to pass --with-includes / --with-libs to configure\n> > and set PKG_CONFIG_PATH, like with homebrew?\n> \n> What are you expecting to need PKG_CONFIG_PATH for? Or more precisely,\n> why would meson/ninja create any new need for that that doesn't exist\n> in the autoconf case?\n\nIt's just used in more cases than before, with fallback to non-pkg-config in\nmost cases. I think all dependencies besides perl can use pkg-config. So all\nthat changes compared to AC is that you might not need to pass extra\ninclude/lib paths for some dependencies that needed it before, if you set/pass\nPKG_CONFIG_PATH.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Jul 2022 12:14:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 3:02 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea. They might not be independent of where you get other dependencies from\n> though. Does macports install headers / libraries into a path that's found by\n> default? Or does one have to pass --with-includes / --with-libs to configure\n> and set PKG_CONFIG_PATH, like with homebrew?\n\nMy configure switches include: --with-libraries=/opt/local/lib\n--with-includes=/opt/local/include\n\nI don't do anything with PKG_CONFIG_PATH.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 17:04:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "... anyway, to get back to the main point of this thread:\n\nThe Autoconf developers were pretty responsive to my bug reports,\nand after some back-and-forth we determined that:\n\n1. The minimum GNU m4 version for modern autoconf is 1.4.8; this\nis directly traceable to intentional behavioral changes in that\nversion, so it's a pretty hard requirement. They've updated their\nown configure script to enforce that minimum.\n\n2. The macOS-specific problems I saw with the STDBOOL tests are\nresolved by the attached patch, which should also appear in 2.72.\nSince AC_HEADER_STDBOOL appears to work correctly in our usage\nanyway, this is only important if you're the kind of person who\nlikes to see 100% pass from a tool's own self-tests before you\ninstall it.\n\nSo as far as autoconf itself is concerned, we could probably move\nforward, perhaps after waiting for 2.72. The difficulty here is the\nprospect that some people might find themselves having to install a\nnewer GNU m4, because GNU m4 is a hot mess. Many post-1.4.8 versions\nflat out don't compile on $your-favorite-platform [1], and many\nothers contain a showstopper bug (that's rejected by a runtime test in\nautoconf's configure, independently of the min-version test) [2].\nIf you don't have a pretty recent m4 available from a package manager,\nyou might be in for a lot of hair-pulling.\n\nThe flip side of that is that probably nobody really needs to\nupdate the configure script on non-mainstream platforms, so\nmaybe this wouldn't matter to us too much in practice.\n\nOn the whole though, my feeling is that autoconf 2.71 doesn't\noffer enough to us to justify possibly causing substantial pain\nfor a few developers. I recommend setting this project aside\nfor now. We can always reconsider if the situation changes.\n\n\t\t\tregards, tom lane\n\n[1] https://lists.gnu.org/archive/html/bug-autoconf/2022-07/msg00004.html\n[2] https://lists.gnu.org/archive/html/bug-autoconf/2022-07/msg00006.html",
"msg_date": "Sat, 16 Jul 2022 11:26:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
},
{
"msg_contents": "On 16.07.22 17:26, Tom Lane wrote:\n> On the whole though, my feeling is that autoconf 2.71 doesn't\n> offer enough to us to justify possibly causing substantial pain\n> for a few developers. I recommend setting this project aside\n> for now. We can always reconsider if the situation changes.\n\nOk, let's do that.\n\n\n",
"msg_date": "Mon, 18 Jul 2022 11:11:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: Autoconf has risen from the dead"
}
]
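Pulling the thread's scattered commands together, the macOS meson configuration can be sketched as one shell snippet. The `/opt/homebrew` prefix and the `pkgconfig` subdirectory below are assumptions for illustration, not paths taken from the thread; the command is only composed and echoed, never run.

```shell
#!/bin/sh
# Sketch of a meson invocation for PostgreSQL on macOS, combining the
# debug/cassert flags Andres shows with the extra include/lib dirs and
# pkg-config path he mentions.  Prefix is a guess for Apple Silicon
# homebrew; macports users would use /opt/local instead.
PKG_PREFIX=/opt/homebrew          # assumption, adjust per machine
BUILD_DIR=build-directory

meson_cmd="meson setup --buildtype debug -Dcassert=true \
  -Dextra_include_dirs=$PKG_PREFIX/include \
  -Dextra_lib_dirs=$PKG_PREFIX/lib \
  --pkg-config-path=$PKG_PREFIX/lib/pkgconfig \
  $BUILD_DIR"

# Inspect the composed command rather than executing it here.
echo "$meson_cmd"
```

After running the real command, `cd build-directory && ninja` builds the tree, as described in the thread.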
[
{
"msg_contents": "While investigating for 353708e1f, I happened to notice that pg_dump's\nbinary_upgrade_set_type_oids_by_type_oid() contains\n\n PQExpBuffer upgrade_query = createPQExpBuffer();\n ...\n appendPQExpBuffer(upgrade_query,\n \"SELECT typarray \"\n ...\n res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data);\n ...\n appendPQExpBuffer(upgrade_query,\n \"SELECT t.oid, t.typarray \"\n ...\n res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data);\n\nHow's that work? It turns out that the second ExecuteSqlQueryForSingleRow\nis sending a string like \"SELECT typarray ...;SELECT t.oid, t.typarray ...\"\nwhich the backend happily interprets as multiple commands. It sends\nback multiple query results, and then PQexec discards all but the last\none. So, very accidentally, there's no observable bug, just some wasted\ncycles.\n\nI will go fix this, but I wondered if there are any other similar\nerrors, or what we might do to prevent the same type of mistake\nin future. I experimented with replacing most of pg_dump's PQexec\ncalls with PQexecParams, as in the attached quick hack (NOT meant\nfor commit). That did not turn up any additional cases, but of\ncourse I have limited faith in the code coverage of check-world.\n\nWe could consider a more global change to get rid of using\nappendPQExpBuffer where it's not absolutely necessary, so that\nthere are fewer bad examples to copy. Another idea is to deem\nit an anti-pattern to end a query with a semicolon. But I don't\nhave a lot of faith in people following those coding rules in\nfuture, either. It'd also be a lot of code churn for what is\nin the end a relatively harmless bug.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 23 Jan 2022 13:31:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bogus duplicate command issued in pg_dump"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 11:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data);\n> ...\n> appendPQExpBuffer(upgrade_query,\n> \"SELECT t.oid, t.typarray \"\n> ...\n> res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data);\n>\n> How's that work?\n\n\nI just spent 10 minutes thinking you were wrong because I confused the\nupgrade_query and upgrade_buffer variables in that function.\n\nYou might just as well have fixed the first upgrade_query command to be\nprint instead of append. And, better yet, renamed its variable to\n\"array_oid_query\" then added a new PQExpBuffer variable \"range_oid_query\".\nBecause, is double-purposing a variable here, with a badly chosen generic\nname, really worth saving a create buffer call? If it is, naming is\nsomething like \"oid_query\" would be better than leading with \"upgrade\".\nThough I am looking at this function in isolation...\n\nWe could consider a more global change to get rid of using\n> appendPQExpBuffer where it's not absolutely necessary, so that\n> there are fewer bad examples to copy. Another idea is to deem\n> it an anti-pattern to end a query with a semicolon. But I don't\n> have a lot of faith in people following those coding rules in\n> future, either. It'd also be a lot of code churn for what is\n> in the end a relatively harmless bug.\n>\n> Thoughts?\n>\n>\nI would avoid overreacting. The biggest issue would be when the previous\nquery used to execute in some cases but using append incorrectly prevents\nthat prior execution. I don't think that is likely to get past review and\ncommitted in practice. Here it is all new code and while as I noted above\nit has some quality concerns it did work correctly when committed and that\nisn't surprising. I don't see enough benefit to warrant refactoring here.\n\nI think a contributing factor here is the fact that the upgrade_buffer is\ndesigned around using appendPQExpBuffer. 
The kind of typo seems obvious\ngiven that in most cases it will actually provide valid results. But it\nalso seems to restrict our ability to do something globally.\n\nDavid J.\n\nOn Sun, Jan 23, 2022 at 11:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data);\n ...\n appendPQExpBuffer(upgrade_query,\n \"SELECT t.oid, t.typarray \"\n ...\n res = ExecuteSqlQueryForSingleRow(fout, upgrade_query->data);\n\nHow's that work?I just spent 10 minutes thinking you were wrong because I confused the upgrade_query and upgrade_buffer variables in that function.You might just as well have fixed the first upgrade_query command to be print instead of append. And, better yet, renamed its variable to \"array_oid_query\" then added a new PQExpBuffer variable \"range_oid_query\". Because, is double-purposing a variable here, with a badly chosen generic name, really worth saving a create buffer call? If it is, naming is something like \"oid_query\" would be better than leading with \"upgrade\". Though I am looking at this function in isolation...\nWe could consider a more global change to get rid of using\nappendPQExpBuffer where it's not absolutely necessary, so that\nthere are fewer bad examples to copy. Another idea is to deem\nit an anti-pattern to end a query with a semicolon. But I don't\nhave a lot of faith in people following those coding rules in\nfuture, either. It'd also be a lot of code churn for what is\nin the end a relatively harmless bug.\n\nThoughts?I would avoid overreacting. The biggest issue would be when the previous query used to execute in some cases but using append incorrectly prevents that prior execution. I don't think that is likely to get past review and committed in practice. Here it is all new code and while as I noted above it has some quality concerns it did work correctly when committed and that isn't surprising. 
I don't see enough benefit to warrant refactoring here.I think a contributing factor here is the fact that the upgrade_buffer is designed around using appendPQExpBuffer. The kind of typo seems obvious given that in most cases it will actually provide valid results. But it also seems to restrict our ability to do something globally.David J.",
"msg_date": "Sun, 23 Jan 2022 12:39:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus duplicate command issued in pg_dump"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I just spent 10 minutes thinking you were wrong because I confused the\n> upgrade_query and upgrade_buffer variables in that function.\n\n> You might just as well have fixed the first upgrade_query command to be\n> print instead of append. And, better yet, renamed its variable to\n> \"array_oid_query\" then added a new PQExpBuffer variable \"range_oid_query\".\n> Because, is double-purposing a variable here, with a badly chosen generic\n> name, really worth saving a create buffer call? If it is, naming is\n> something like \"oid_query\" would be better than leading with \"upgrade\".\n\nYeah, I was not terribly impressed with the naming choices in that\ncode either, but I didn't feel like getting into cosmetic changes.\nIf I were renaming things in that area, I'd start with the function\nname --- binary_upgrade_set_type_oids_by_type_oid is long enough\nto aggravate carpal-tunnel problems, and yet it's as clear as mud;\nwhat does \"by\" mean in this context? Don't expect the function's\ncomment to tell you, because there is none, another way in which\nthis code is subpar.\n\nThe bigger issue to me is that the behavior of PQexec masks what\nseems like a pretty easy mistake to make. I don't like that,\nbut I'm not seeing any non-invasive way to improve things.\nAnd, as you say, an invasive change seems like overreaction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Jan 2022 14:59:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus duplicate command issued in pg_dump"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 01:31:03PM -0500, Tom Lane wrote:\n> We could consider a more global change to get rid of using\n> appendPQExpBuffer where it's not absolutely necessary, so that\n> there are fewer bad examples to copy. Another idea is to deem\n> it an anti-pattern to end a query with a semicolon. But I don't\n> have a lot of faith in people following those coding rules in\n> future, either. It'd also be a lot of code churn for what is\n> in the end a relatively harmless bug.\n\nCould a backend-side, run-time configurable developper GUC,\npotentially help here? This could look after multiple queries in code\npaths where we don't want any, once you combine it with a specific\ncompilation flag à-la-ENFORCE_REGRESSION_TEST.\n--\nMichael",
"msg_date": "Mon, 24 Jan 2022 11:25:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bogus duplicate command issued in pg_dump"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 7:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Jan 23, 2022 at 01:31:03PM -0500, Tom Lane wrote:\n> > We could consider a more global change to get rid of using\n> > appendPQExpBuffer where it's not absolutely necessary, so that\n> > there are fewer bad examples to copy. Another idea is to deem\n> > it an anti-pattern to end a query with a semicolon. But I don't\n> > have a lot of faith in people following those coding rules in\n> > future, either. It'd also be a lot of code churn for what is\n> > in the end a relatively harmless bug.\n>\n> Could a backend-side, run-time configurable developper GUC,\n> potentially help here? This could look after multiple queries in code\n> paths where we don't want any, once you combine it with a specific\n> compilation flag à-la-ENFORCE_REGRESSION_TEST.\n>\n>\nI don't see how this helps unless you change the system (under the\ncompilation flag) into \"Don't allow multiple commands under simple query\nprotocol unless the user has indicated that they will be doing that by\nenabling the allow_multiple_commands_in_simple_query_protocol GUC at the\nstart of their, possibly multi-transaction, query.\" (I don't even want to\nconsider them toggling it). Forcing an author to specify when they don't\nwant multiple commands is just going to be ignored and since no errors will\nbe raised (even under the compiler flag) we will effectively remain status\nquo.\n\nI do not see that alternative mode being practical, let alone a net\npositive.\n\nIs there some trick in C where you can avoid the buffer? That is what\nbasically caused the issue. 
If one could write:\nres = ExecuteSqlQueryForSingleRow(fout, \"SELECT \"\ntypname FROM \" ...);\ndirectly the decision to print or append the buffer would not be necessary\nand I would expect one-shot queries to then be done using this, thus\navoiding the observed issue.\n\nI could see having the executor operate in a mode where if a query result\nis discarded it logs a warning. But that cannot be unconditional. \"SELECT\nperform_function(); SELECT * FROM table_the_function_just_populated;\"\ndiscards a result but because functions must be executed in SELECT this\nsituation is one that should be ignored. In short, the setup seems like it\nshould be easy enough (I'd hope we can figure out when we've discarded a\nquery result because a new one came after) but defining the exceptions to\nthe rule seems much trickier (we'd then probably want the GUC in order to\nget rid of false positives that cannot be added to the exceptions). And in\norder to catch existing bugs you still have to have confidence in the\ncheck-world. But if you have that then a behavioral bug introduced by this\nkind of error is sufficiently likely to be caught anyway that the need for\nthis decreases substantially.\n\nSo, it seems the time would probably be better spent doing organized code\nexploring and improving test coverage if there is a real concern that there\nare more bugs of this ilk out there causing behavioral or meaningful\nperformance issues. 
At least for pg_dump we ostensibly can test that the\nmost important outcome isn't violated - the what was dumped gets restored.\nI presume we do that and it feels like if we missed capturing the outcome\nof a select command that would be unlikely.\n\nDavid J.\n\nOn Sun, Jan 23, 2022 at 7:25 PM Michael Paquier <michael@paquier.xyz> wrote:On Sun, Jan 23, 2022 at 01:31:03PM -0500, Tom Lane wrote:\n> We could consider a more global change to get rid of using\n> appendPQExpBuffer where it's not absolutely necessary, so that\n> there are fewer bad examples to copy. Another idea is to deem\n> it an anti-pattern to end a query with a semicolon. But I don't\n> have a lot of faith in people following those coding rules in\n> future, either. It'd also be a lot of code churn for what is\n> in the end a relatively harmless bug.\n\nCould a backend-side, run-time configurable developper GUC,\npotentially help here? This could look after multiple queries in code\npaths where we don't want any, once you combine it with a specific\ncompilation flag à-la-ENFORCE_REGRESSION_TEST.I don't see how this helps unless you change the system (under the compilation flag) into \"Don't allow multiple commands under simple query protocol unless the user has indicated that they will be doing that by enabling the allow_multiple_commands_in_simple_query_protocol GUC at the start of their, possibly multi-transaction, query.\" (I don't even want to consider them toggling it). Forcing an author to specify when they don't want multiple commands is just going to be ignored and since no errors will be raised (even under the compiler flag) we will effectively remain status quo.I do not see that alternative mode being practical, let alone a net positive.Is there some trick in C where you can avoid the buffer? That is what basically caused the issue. 
If one could write:res = ExecuteSqlQueryForSingleRow(fout, \"SELECT \"typname FROM \" ...);directly the decision to print or append the buffer would not be necessary and I would expect one-shot queries to then be done using this, thus avoiding the observed issue.I could see having the executor operate in a mode where if a query result is discarded it logs a warning. But that cannot be unconditional. \"SELECT perform_function(); SELECT * FROM table_the_function_just_populated;\" discards a result but because functions must be executed in SELECT this situation is one that should be ignored. In short, the setup seems like it should be easy enough (I'd hope we can figure out when we've discarded a query result because a new one came after) but defining the exceptions to the rule seems much trickier (we'd then probably want the GUC in order to get rid of false positives that cannot be added to the exceptions). And in order to catch existing bugs you still have to have confidence in the check-world. But if you have that then a behavioral bug introduced by this kind of error is sufficiently likely to be caught anyway that the need for this decreases substantially.So, it seems the time would probably be better spent doing organized code exploring and improving test coverage if there is a real concern that there are more bugs of this ilk out there causing behavioral or meaningful performance issues. At least for pg_dump we ostensibly can test that the most important outcome isn't violated - the what was dumped gets restored. I presume we do that and it feels like if we missed capturing the outcome of a select command that would be unlikely.David J.",
"msg_date": "Sun, 23 Jan 2022 21:20:37 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus duplicate command issued in pg_dump"
}
]
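The failure mode Tom describes in this thread, appending a second query to a still-populated PQExpBuffer so that PQexec receives two semicolon-joined statements and silently discards the first result, can be re-enacted without libpq at all. This sketch just models the buffer as a shell variable; the function and query names are illustrative, not pg_dump's real code.

```shell
# Toy re-enactment of the pg_dump bug: "appendPQExpBuffer" modelled as
# string concatenation, buffer reset standing in for printfPQExpBuffer.
buf=""
append()    { buf="$buf$1"; }     # appendPQExpBuffer analogue
reset_buf() { buf=""; }           # start-from-scratch analogue

append "SELECT typarray FROM pg_type WHERE oid = 42;"
first_query=$buf

# BUG: the second query is appended without resetting.  The server would
# see both statements; PQexec keeps only the last result set.
append "SELECT t.oid, t.typarray FROM pg_type t WHERE t.oid = 43;"
buggy_query=$buf

# FIX: reset (or overwrite) the buffer between queries.
reset_buf
append "SELECT t.oid, t.typarray FROM pg_type t WHERE t.oid = 43;"
fixed_query=$buf

echo "buggy: $buggy_query"
echo "fixed: $fixed_query"
```

The buggy string contains the telltale `;SELECT` join; exactly the shape Tom found being sent to the backend.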
[
{
"msg_contents": "Most of this is new in v15 or doesn't affect user-facing docs so doesn't need\nto be backpatched.\n\nFeel free to ignore this for now and revisit in April...\n\n@Michael: I'm not sure what this is trying to say.\n1e9475694b0ae2cf1204d01d2ef6ad86f3c7cac8\n+ First, scan the directory where the WAL segment files are written and\n+ find the newest completed segment file, using as starting point the\n+ beginning of the next WAL segment file. This is calculated independently\n+ on the compression method used to compress each segment.\n\nI suppose it should say independently *of* the compression method, but then I\nstill don't know what it means. I checked FindStreamingStart().\nIt that doesn't look like it's \"calculated independently\" - actually, it takes\nthe compression method into account and explicitly handles each compression\nmethod.\n\nIs there any reason the user-facing docs need to say anything about this at\nall?",
"msg_date": "Sun, 23 Jan 2022 21:00:01 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "typos"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 09:00:01PM -0600, Justin Pryzby wrote:\n> Feel free to ignore this for now and revisit in April...\n\nI don't mind fixing that now. That means less to do later.\n\n> @Michael: I'm not sure what this is trying to say.\n> 1e9475694b0ae2cf1204d01d2ef6ad86f3c7cac8\n> + First, scan the directory where the WAL segment files are written and\n> + find the newest completed segment file, using as starting point the\n> + beginning of the next WAL segment file. This is calculated independently\n> + on the compression method used to compress each segment.\n> \n> I suppose it should say independently *of* the compression method, but then I\n> still don't know what it means. I checked FindStreamingStart().\n> It that doesn't look like it's \"calculated independently\" - actually, it takes\n> the compression method into account and explicitly handles each compression\n> method.\n\nThis means that we are able to calculate the starting LSN even if the\nsegments stored use different compression methods or are\nuncompressed. Would you reword that differently? Or perhaps removing\nthe last sentence of this paragraph would be simpler in the long run?\n--\nMichael",
"msg_date": "Mon, 24 Jan 2022 16:01:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 04:01:47PM +0900, Michael Paquier wrote:\n> On Sun, Jan 23, 2022 at 09:00:01PM -0600, Justin Pryzby wrote:\n> > Feel free to ignore this for now and revisit in April...\n> \n> I don't mind fixing that now. That means less to do later.\n\nThanks.\n\n> > @Michael: I'm not sure what this is trying to say.\n> > 1e9475694b0ae2cf1204d01d2ef6ad86f3c7cac8\n> > + First, scan the directory where the WAL segment files are written and\n> > + find the newest completed segment file, using as starting point the\n> > + beginning of the next WAL segment file. This is calculated independently\n> > + on the compression method used to compress each segment.\n> > \n> > I suppose it should say independently *of* the compression method, but then I\n> > still don't know what it means. I checked FindStreamingStart().\n> > It that doesn't look like it's \"calculated independently\" - actually, it takes\n> > the compression method into account and explicitly handles each compression\n> > method.\n> \n> This means that we are able to calculate the starting LSN even if the\n> segments stored use different compression methods or are\n> uncompressed. Would you reword that differently? Or perhaps removing\n> the last sentence of this paragraph would be simpler in the long run?\n\ndifferent from what? From each other ?\n\nMaybe I would have written:\n| This is calculated separately for each segment, which may each use\n| different compression methods.\n\nBut probably I would just remove it.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 24 Jan 2022 01:07:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 01:07:54AM -0600, Justin Pryzby wrote:\n> different from what? From each other ?\n\nEach segment could be either uncompressed, compressed with LZ4, or\ncompressed with GZIP, so could be different from each other.\n\n> Maybe I would have written:\n> | This is calculated separately for each segment, which may each use\n> | different compression methods.\n> \n> But probably I would just remove it.\n\nI'm thinking about just removing that at the end.\n--\nMichael",
"msg_date": "Mon, 24 Jan 2022 16:55:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 04:55:31PM +0900, Michael Paquier wrote:\n> I'm thinking about just removing that at the end.\n\nAnd done this way, keeping the whole simpler. I have applied most of\nthe things you suggested, with a backpatch down to 10 for the relevant\nuser-visible parts in the docs. Thanks!\n--\nMichael",
"msg_date": "Tue, 25 Jan 2022 10:53:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
}
]
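The behavior the thread is trying to document — finding the newest completed segment in a directory that may mix uncompressed, gzip, and LZ4 files — can be sketched in shell. The segment names and the "strip the extension, then compare" logic are simplifications; the real FindStreamingStart also handles partial files and verifies compressed segment sizes.

```shell
# Toy version of picking the newest completed WAL segment regardless of
# how each individual file is compressed.  File names are shortened fakes.
dir=$(mktemp -d)
touch "$dir/000000010000000000000001" \
      "$dir/000000010000000000000002.gz" \
      "$dir/000000010000000000000003.lz4"

# Normalize away the compression suffix so all segments compare alike,
# then take the lexicographically greatest (segment names sort by LSN).
newest=$(ls "$dir" | sed -e 's/\.gz$//' -e 's/\.lz4$//' | sort | tail -n 1)
echo "resume streaming from the segment after $newest"
```

This is why the starting point does not depend on which compression method produced each file: the suffix is irrelevant once names are normalized.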
[
{
"msg_contents": "There are many Makefile rules like\n\nfoo: bar\n\t./tool $< > $@\n\nIf the rule is interrupted (due to ^C or ENOSPC), foo can be 0 bytes or\npartially written, but won't be rebuilt until someone runs distclean or debugs\nit and removes the individual file, as I did for errcodes.h.\n\nIt'd be better if these did\n\n./tool $< > $@.new\nmv $@.new $@\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 23 Jan 2022 21:23:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "makefiles writing to $@ should first write to $@.new"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 09:23:05PM -0600, Justin Pryzby wrote:\n> If the rule is interrupted (due to ^C or ENOSPC), foo can be 0 bytes or\n> partially written, but won't be rebuilt until someone runs distclean or debugs\n> it and removes the individual file, as I did for errcodes.h.\n\nHonestly, I am not sure that this is worth bothering about. This comes\ndown to a balance between the code complexity and the likelihood of a\nfailure, and the odds are not in favor of the latter IMO. Now, it\ncould be perhaps possible to make such a change simple enough while it\navoids a lot of technical debt, but we have a lot of custom rules\nparticularly in src/bin/, so changing all that or even requiring that in\nfuture changes is not really appealing.\n--\nMichael",
"msg_date": "Mon, 24 Jan 2022 12:41:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: makefiles writing to $@ should first write to $@.new"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 24, 2022 at 12:41:49PM +0900, Michael Paquier wrote:\n> On Sun, Jan 23, 2022 at 09:23:05PM -0600, Justin Pryzby wrote:\n> > If the rule is interrupted (due to ^C or ENOSPC), foo can be 0 bytes or\n> > partially written, but won't be rebuilt until someone runs distclean or debugs\n> > it and removes the individual file, as I did for errcodes.h.\n> \n> Honestly, I am not sure that this is worth bothering about. This comes\n> down to a balance between the code complexity and the likelihood of a\n> failure, and the odds are not in favor of the latter IMO. Now, it\n> could be perhaps possible to make such a change simple enough while it\n> avoids a lot of technical debt, but we have a lot of custom rules\n> particularly in src/bin/, so changing all that or even requiring that in\n> future changes is not really appealing.\n\nI agree, it doesn't seem worth it.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 12:07:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: makefiles writing to $@ should first write to $@.new"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Jan 24, 2022 at 12:41:49PM +0900, Michael Paquier wrote:\n>> Honestly, I am not sure that this is worth bothering about. This comes\n>> down to a balance between the code complexity and the likelihood of a\n>> failure, and the odds are not in favor of the latter IMO. Now, it\n>> could be perhaps possible to make such a change simple enough while it\n>> avoids a lot of technical debt, but we have a lot of custom rules\n>> particularly in src/bin/, so changing all that or even requiring that in\n>> future changes is not really appealing.\n\n> I agree, it doesn't seem worth it.\n\nAgreed. Another reason to not bother about this is the likelihood\nthat it'd all be wasted effort as soon as we switch to meson.\nIf that project fails, I'd be open to revisiting this issue;\nbut I don't think we should spend time improving the Makefiles\nright now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jan 2022 09:32:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: makefiles writing to $@ should first write to $@.new"
},
{
"msg_contents": "On 24.01.22 04:23, Justin Pryzby wrote:\n> There are many Makefile rules like\n> \n> foo: bar\n> \t./tool $< > $@\n> \n> If the rule is interrupted (due to ^C or ENOSPC), foo can be 0 bytes or\n> partially written, but won't be rebuilt until someone runs distclean or debugs\n> it and removes the individual file, as I did for errcodes.h.\n\nIf a rule fails, make removes the target file. So I don't see how this \ncan happen unless you hard kill -9 make or something like that.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 17:32:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: makefiles writing to $@ should first write to $@.new"
}
] |
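The atomic-update idiom Justin proposes in the thread above (`./tool $< > $@.new` followed by `mv $@.new $@`) can be sketched outside of make. This is a hypothetical illustration, not code from the thread; `write_atomically` and the file names are invented for the example, and `os.replace` plays the role of `mv`:

```python
import os

# Hypothetical stand-in for the proposed Makefile recipe
#     ./tool $< > $@.new
#     mv $@.new $@
# Write everything to a temporary name first, then rename into place,
# so the real target is never left 0 bytes or partially written.
def write_atomically(target, data):
    tmp = target + ".new"          # like "$@.new" in the proposal
    with open(tmp, "w") as f:
        f.write(data)              # an interrupt here only harms tmp
    os.replace(tmp, target)        # atomic rename, like "mv $@.new $@"

write_atomically("foo.out", "generated content\n")
print(open("foo.out").read(), end="")
```

An interrupted run leaves the target either absent or intact from the previous run, so a re-run always rebuilds it, which is exactly the failure mode described for `errcodes.h`.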
[
{
"msg_contents": "Hi,\n\nI was trying to build Postgres from source on my Mac (MacOS Monterey 12.1)\nand ran into an error when running configure.\n\n./configure\n\n\n...\n\nchecking for gcc option to accept ISO C99... unsupported\n\nconfigure: error: C compiler \"gcc\" does not support C99\n\n\nWhen I do gcc --version I see:\n\nConfigured with: --prefix=/Library/Developer/CommandLineTools/usr\n--with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/c++/4.2.1\n\nApple clang version 13.0.0 (clang-1300.0.27.3)\n\nTarget: x86_64-apple-darwin21.2.0\n\nThread model: posix\n\nInstalledDir: /Library/Developer/CommandLineTools/usr/bin\n\n\nSo, it looks like it is using clang to compile and not gcc.\n\n\nconfigure runs successfully if I do: ./configure CC=gcc-11 or soft link gcc\nto gcc-11 and then run configure. However, I didn't find these tips in\nPlatform specific notes in docs:\nhttps://www.postgresql.org/docs/9.6/installation-platform-notes.html#INSTALLATION-NOTES-MACOS\n.\n\n\nSo, I wanted to ask if this behavior is expected and if so, should we\nupdate docs to make a note of this?\n\n\nRegards,\n\nSamay",
"msg_date": "Sun, 23 Jan 2022 23:07:55 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Error running configure on Mac"
},
{
"msg_contents": "samay sharma <smilingsamay@gmail.com> writes:\n> I was trying to build Postgres from source on my Mac (MacOS Monterey 12.1)\n> and ran into an error when running configure.\n\nWorks for me, and for other developers, and for assorted buildfarm\nanimals.\n\n> checking for gcc option to accept ISO C99... unsupported\n> configure: error: C compiler \"gcc\" does not support C99\n\nThat is bizarre. Can you show the segment of config.log\nthat corresponds to this? The exact error message that\nthe compiler is reporting would be useful.\n\nAlso, I wonder if you are using Apple's gcc (yeah, that's\nreally clang), or a gcc from MacPorts or Brew or the like.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jan 2022 02:14:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error running configure on Mac"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 11:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> samay sharma <smilingsamay@gmail.com> writes:\n> > I was trying to build Postgres from source on my Mac (MacOS Monterey\n> 12.1)\n> > and ran into an error when running configure.\n>\n> Works for me, and for other developers, and for assorted buildfarm\n> animals.\n>\n> > checking for gcc option to accept ISO C99... unsupported\n> > configure: error: C compiler \"gcc\" does not support C99\n>\n> That is bizarre. Can you show the segment of config.log\n> that corresponds to this? The exact error message that\n> the compiler is reporting would be useful.\n>\n\nThe line above the error message in config.log is:\n\nconfigure:4607: result: unsupported\n\nconfigure:4623: error: C compiler \"gcc\" does not support C99\n\n\nSlightly above that, I see this error message too:\n\n\nconfigure:4591: gcc -qlanglvl=extc99 -c -g -O2 conftest.c >&5\n\nclang: error: unknown argument: '-qlanglvl=extc99'\n\nconfigure:4591: $? = 1\n\nconfigure: failed program was:\n\n....\n\n\nI also see many more error messages in config.log when I grep for error.\nSo, I've attached the entire file in case any other output is useful.\n\n\n\n> Also, I wonder if you are using Apple's gcc (yeah, that's\n> really clang), or a gcc from MacPorts or Brew or the like.\n>\n\nI've pasted the output of gcc --version in the previous email which reports\nit to be Apple clang version 13.0.0 (clang-1300.0.27.3). Is there any other\ncommand which I can run to give more info about this?\n\nRegards,\nSamay\n\n>\n> regards, tom lane\n>",
"msg_date": "Sun, 23 Jan 2022 23:44:11 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Error running configure on Mac"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-23 23:44:11 -0800, samay sharma wrote:\n> I also see many more error messages in config.log when I grep for error.\n> So, I've attached the entire file in case any other output is useful.\n\nThe important lines seem to be:\n\nconfigure:4591: gcc -c -g -O2 conftest.c >&5\nIn file included from conftest.c:21:\nIn file included from /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/stdlib.h:66:\nIn file included from /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/sys/wait.h:110:\n/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/sys/resource.h:202:2: error: unknown type name 'uint8_t'\n uint8_t ri_uuid[16];\n\nI.e. that for some reason on your system stdlib.h is not standalone. Which is\nquite bizarre.\n\n\nGrepping through headers on an m1 mini running monterey, I see\n\nsys/resource.h:\n#include <sys/cdefs.h>\n...\n#if __DARWIN_C_LEVEL >= __DARWIN_C_FULL\n#include <stdint.h>\n#endif /* __DARWIN_C_LEVEL >= __DARWIN_C_FULL */\n\nsys/cdefs.h:\n#if defined(_ANSI_SOURCE)\n#define __DARWIN_C_LEVEL __DARWIN_C_ANSI\n#elif defined(_POSIX_C_SOURCE) && !defined(_DARWIN_C_SOURCE) && !defined(_NONSTD_SOURCE)\n#define __DARWIN_C_LEVEL _POSIX_C_SOURCE\n#else\n#define __DARWIN_C_LEVEL __DARWIN_C_FULL\n#endif\n\nSo it seems that the problem is that for some reason your environment ends up\nwith __DARWIN_C_LEVEL being set to something too low?\n\nIt's interesting that all the _POSIX_C_SOURCE values are smaller than\n__DARWIN_C_LEVEL, which seems to indicate that the above include of stdint.h\nwon't be made if _POSIX_C_SOURCE is set?\n\n\nCould you create a test.c file like:\n#include <stdarg.h>\n#include <stdbool.h>\n#include <stdlib.h>\n#include <wchar.h>\n#include <stdio.h>\n\nand then run\n\ngcc -v -E -dD test.c -o test.i\n\nand then share both the output of that and test.i?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Jan 2022 00:09:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Error running configure on Mac"
},
{
"msg_contents": "Hey,\n\nOn Mon, Jan 24, 2022 at 12:09 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-01-23 23:44:11 -0800, samay sharma wrote:\n> > I also see many more error messages in config.log when I grep for error.\n> > So, I've attached the entire file in case any other output is useful.\n>\n> The important lines seem to be:\n>\n> configure:4591: gcc -c -g -O2 conftest.c >&5\n> In file included from conftest.c:21:\n> In file included from\n> /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/stdlib.h:66:\n> In file included from\n> /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/sys/wait.h:110:\n> /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/sys/resource.h:202:2:\n> error: unknown type name 'uint8_t'\n> uint8_t ri_uuid[16];\n>\n> I.e. that for some reason on your system stdlib.h is not standalone. Which\n> is\n> quite bizarre.\n>\n>\n> Grepping through headers on an m1 mini running monterey, I see\n>\n> sys/resource.h:\n> #include <sys/cdefs.h>\n> ...\n> #if __DARWIN_C_LEVEL >= __DARWIN_C_FULL\n> #include <stdint.h>\n> #endif /* __DARWIN_C_LEVEL >= __DARWIN_C_FULL */\n>\n> sys/cdefs.h:\n> #if defined(_ANSI_SOURCE)\n> #define __DARWIN_C_LEVEL __DARWIN_C_ANSI\n> #elif defined(_POSIX_C_SOURCE) && !defined(_DARWIN_C_SOURCE) &&\n> !defined(_NONSTD_SOURCE)\n> #define __DARWIN_C_LEVEL _POSIX_C_SOURCE\n> #else\n> #define __DARWIN_C_LEVEL __DARWIN_C_FULL\n> #endif\n>\n> So it seems that the problem is that for some reason your environment ends\n> up\n> with __DARWIN_C_LEVEL being set to something too low?\n>\n> It's interesting that all the _POSIX_C_SOURCE values are smaller than\n> __DARWIN_C_LEVEL, which seems to indicate that the above include of\n> stdint.h\n> won't be made if _POSIX_C_SOURCE is set?\n>\n>\n> Could you create a test.c file like:\n> #include <stdarg.h>\n> #include <stdbool.h>\n> #include <stdlib.h>\n> #include <wchar.h>\n> #include <stdio.h>\n>\n> and then run\n>\n> 
gcc -v -E -dD test.c -o test.i\n>\n> and then share both the output of that and test.i?\n>\n\nHere's the output of running gcc -v -E -dD test.c -o test.i for that\nprogram.\n\nApple clang version 13.0.0 (clang-1300.0.27.3)\n\nTarget: x86_64-apple-darwin21.2.0\n\nThread model: posix\n\nInstalledDir: /Library/Developer/CommandLineTools/usr/bin\n\n \"/Library/Developer/CommandLineTools/usr/bin/clang\" -cc1 -triple\nx86_64-apple-macosx12.0.0 -Wundef-prefix=TARGET_OS_\n-Wdeprecated-objc-isa-usage -Werror=deprecated-objc-isa-usage\n-Werror=implicit-function-declaration -E -disable-free\n-disable-llvm-verifier -discard-value-names -main-file-name test.c\n-mrelocation-model pic -pic-level 2 -mframe-pointer=all -fno-strict-return\n-fno-rounding-math -munwind-tables -target-sdk-version=12.1\n-fvisibility-inlines-hidden-static-local-var -target-cpu penryn -tune-cpu\ngeneric -debugger-tuning=lldb -target-linker-version 710.1 -v -resource-dir\n/Library/Developer/CommandLineTools/usr/lib/clang/13.0.0 -isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk\n-I/usr/local/include -internal-isystem\n/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/local/include\n-internal-isystem\n/Library/Developer/CommandLineTools/usr/lib/clang/13.0.0/include\n-internal-externc-isystem\n/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include\n-internal-externc-isystem /Library/Developer/CommandLineTools/usr/include\n-Wno-reorder-init-list -Wno-implicit-int-float-conversion\n-Wno-c99-designator -Wno-final-dtor-non-final-class -Wno-extra-semi-stmt\n-Wno-misleading-indentation -Wno-quoted-include-in-framework-header\n-Wno-implicit-fallthrough -Wno-enum-enum-conversion\n-Wno-enum-float-conversion -Wno-elaborated-enum-base\n-fdebug-compilation-dir /Users/sash/PostgreSQL_Dev -ferror-limit 19\n-stack-protector 1 -fstack-check -mdarwin-stkchk-strong-link -fblocks\n-fencode-extended-block-signature -fregister-global-dtors-with-atexit\n-fgnuc-version=4.2.1 
-fmax-type-align=16 -fcommon -fcolor-diagnostics -dD\n-clang-vendor-feature=+nullptrToBoolConversion\n-clang-vendor-feature=+messageToSelfInClassMethodIdReturnType\n-clang-vendor-feature=+disableInferNewAvailabilityFromInit\n-clang-vendor-feature=+disableNeonImmediateRangeCheck\n-clang-vendor-feature=+disableNonDependentMemberExprInCurrentInstantiation\n-fno-odr-hash-protocols -clang-vendor-feature=+revert09abecef7bbf -mllvm\n-disable-aligned-alloc-awareness=1 -mllvm -enable-dse-memoryssa=0 -o test.i\n-x c test.c\n\nclang -cc1 version 13.0.0 (clang-1300.0.27.3) default target\nx86_64-apple-darwin21.2.0\n\nignoring nonexistent directory\n\"/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/local/include\"\n\nignoring nonexistent directory\n\"/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/Library/Frameworks\"\n\n#include \"...\" search starts here:\n\n#include <...> search starts here:\n\n /usr/local/include\n\n /Library/Developer/CommandLineTools/usr/lib/clang/13.0.0/include\n\n /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include\n\n /Library/Developer/CommandLineTools/usr/include\n\n /Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/System/Library/Frameworks\n(framework directory)\n\nEnd of search list.\n\nI've also attached the test.i file.\n\nRegards,\nSamay\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Mon, 24 Jan 2022 08:41:39 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Error running configure on Mac"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-24 08:41:39 -0800, samay sharma wrote:\n> I've also attached the test.i file.\n\nThe problem is that you got a stdint.h in /usr/local/include/. And that\nstdint.h doesn't match the system one. Which explains why there's a\ncompilation failure and also explains why others don't see this problem.\n\nfrom test.i\n> # 1 \"/usr/local/include/stdint.h\" 1 3 4\n> \n> #define _ISL_INCLUDE_ISL_STDINT_H 1\n> \n> #define _GENERATED_STDINT_H \"isl 0.14.1\"\n> \n> #define _STDINT_HAVE_STDINT_H 1\n> \n> # 1 \"/usr/local/include/stdint.h\" 1 3 4\n> # 8 \"/usr/local/include/stdint.h\" 2 3 4\n> # 73 \"/Library/Developer/CommandLineTools/SDKs/MacOSX12.1.sdk/usr/include/sys/resource.h\" 2 3 4\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Jan 2022 11:47:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Error running configure on Mac"
}
] |
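Andres's diagnosis in the thread above (a stray `/usr/local/include/stdint.h` shadowing the SDK header) can be checked mechanically: preprocess a file that includes `<stdint.h>` and read the line markers in the `-E` output to see which file the compiler actually picked. This is a sketch written for this archive, not code from the thread; the compiler name and the resulting path are system-specific assumptions:

```python
import os
import shutil
import subprocess
import tempfile

# Sketch of the diagnosis flow above: ask the preprocessor which stdint.h
# it really includes. Line markers in -E output look like
#     # 1 "/usr/include/stdint.h" 1 3 4
# so the first quoted path mentioning stdint.h is the header that won.
def first_stdint_source(cc="cc"):
    if shutil.which(cc) is None:
        return None                    # no such compiler on this system
    fd, path = tempfile.mkstemp(suffix=".c")
    try:
        with os.fdopen(fd, "w") as f:
            f.write("#include <stdint.h>\nint main(void){return 0;}\n")
        out = subprocess.run([cc, "-E", path],
                             capture_output=True, text=True).stdout
    finally:
        os.unlink(path)
    for line in out.splitlines():
        if line.startswith("#") and '"' in line and "stdint.h" in line:
            return line.split('"')[1]
    return None

print(first_stdint_source())
```

On the machine in the thread this would report `/usr/local/include/stdint.h` (an isl leftover) rather than the SDK path, which is exactly the shadowing that broke the C99 probe in configure.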
[
{
"msg_contents": "Hi,\n\nI am using Dynamic shared memory areas(DSA) to manage some variable\nlength shared memory, I've found that in some cases allocation fails even\nthough there are enough contiguous pages.\n\nThe steps to reproduce are as follows:\n1. create a dsa area with a 1MB DSM segment\n2. set its size limit to 1MB\n3. allocate 4KB memory until fails\n4. free all allocated memory in step 3\n5. repeat step 3 and step 4\n\nWhen I first run the step 3, there is 240 4KB memory allocated successfully.\nBut when I free all and allocate again, no memory can be allocated even\nthough there are 252 contiguous pages. IMO, this should not be expected\nto happen, right?\n\nThe call stack is as follows:\n#0 get_best_segment (area=0x200cc70, npages=16) at dsa.c:1972\n#1 0x0000000000b46b36 in ensure_active_superblock (area=0x200cc70,\npool=0x7fa7b51f46f0,\n size_class=33) at dsa.c:1666\n#2 0x0000000000b46555 in alloc_object (area=0x200cc70, size_class=33)\nat dsa.c:1460\n#3 0x0000000000b44f05 in dsa_allocate_extended (area=0x200cc70,\nsize=4096, flags=2) at dsa.c:795\n\nI read the relevant code and found that get_best_segment re-bin the segment\nto segment index 4 when first run the step 3. But when free all and run the\nstep 3 again, get_best_segment search from the first bin that *might*\nhave enough\ncontiguous pages, it is calculated by contiguous_pages_to_segment_bin(),\nfor a superblock with 16 pages, contiguous_pages_to_segment_bin is 5.\nSo the second time, get_best_segment search bin from segment index 5 to 16,\nbut the suitable segment has been re-bin to 4 that we do not check.\nFinally, the get_best_segment return NULL and dsa_allocate_extended return\na invalid dsa pointer.\n\nMaybe we can use one of the following methods to fix it:\n1. re-bin segment to suitable segment index when called dsa_free\n2. 
get_best_segment search all bins\n\nI wrote a simple test program that is attached to reproduce it.\n\nAny thoughts?\n\n--\nBest Regards\nDongming(https://www.aliyun.com/)",
"msg_date": "Mon, 24 Jan 2022 17:58:44 +0800",
"msg_from": "Dongming Liu <ldming101@gmail.com>",
"msg_from_op": true,
"msg_subject": "DSA failed to allocate memory"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 4:59 AM Dongming Liu <ldming101@gmail.com> wrote:\n> Maybe we can use one of the following methods to fix it:\n> 1. re-bin segment to suitable segment index when called dsa_free\n> 2. get_best_segment search all bins\n\n(2) is definitely the wrong idea. The comments say:\n\n/*\n * What is the lowest bin that holds segments that *might* have n contiguous\n * free pages? There is no point in looking in segments in lower bins; they\n * definitely can't service a request for n free pages.\n */\n#define contiguous_pages_to_segment_bin(n) Min(fls(n), DSA_NUM_SEGMENT_BINS - 1)\n\nSo it's OK for a segment to be in a bin that suggests that it has more\nconsecutive free pages than it really does. But it's NOT ok for a\nsegment to be in a bin that suggests it has fewer consecutive pages\nthan it really does. If dsa_free() is putting things back into the\nwrong place, that's what we need to fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 12:20:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": ">\n> So it's OK for a segment to be in a bin that suggests that it has more\n> consecutive free pages than it really does. But it's NOT ok for a\n> segment to be in a bin that suggests it has fewer consecutive pages\n> than it really does. If dsa_free() is putting things back into the\n> wrong place, that's what we need to fix.\n\n\nI'm trying to move segments into appropriate bins in dsa_free().\nIn 0001-Re-bin-segment-when-dsa-memory-is-freed.patch, I extract\nthe re-bin segment logic into a separate function called rebin_segment\nand call it to move the segment to the appropriate bin when dsa memory is\nfreed. Otherwise, when allocating memory, a suitable segment may not be\nfound because the segment with enough contiguous pages is in a smaller bin.\n\nFor testing, I ported the test_dsa patch from [1] and added an OOM case\nthat allocates memory until OOM, frees it all, then allocates again,\ncomparing the amount of memory allocated before and after.\n\nAny thoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/CAEepm%3D3U7%2BRo7%3DECeQuAZoeFXs8iDVX56NXGCV7z3%3D%2BH%2BWd0Sw%40mail.gmail.com",
"msg_date": "Fri, 18 Mar 2022 15:30:49 +0800",
"msg_from": "Dongming Liu <ldming101@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 3:30 PM Dongming Liu <ldming101@gmail.com> wrote:\n\n> So it's OK for a segment to be in a bin that suggests that it has more\n>> consecutive free pages than it really does. But it's NOT ok for a\n>> segment to be in a bin that suggests it has fewer consecutive pages\n>> than it really does. If dsa_free() is putting things back into the\n>> wrong place, that's what we need to fix.\n>\n>\n> I'm trying to move segments into appropriate bins in dsa_free().\n> In 0001-Re-bin-segment-when-dsa-memory-is-freed.patch, I extract\n> the re-bin segment logic into a separate function called rebin_segment\n> and call it to move the segment to the appropriate bin when dsa memory is\n> freed. Otherwise, when allocating memory, a suitable segment may not be\n> found because the segment with enough contiguous pages is in a smaller bin.\n>\n> For testing, I ported the test_dsa patch from [1] and added an OOM case\n> that allocates memory until OOM, frees it all, then allocates again,\n> comparing the amount of memory allocated before and after.\n>\n> Any thoughts?\n>\n> [1]\n> https://www.postgresql.org/message-id/CAEepm%3D3U7%2BRo7%3DECeQuAZoeFXs8iDVX56NXGCV7z3%3D%2BH%2BWd0Sw%40mail.gmail.com\n>\n>\nFix rebin_segment not working on in-place dsa.\n\n-- \nBest Regards,\nDongming",
"msg_date": "Mon, 28 Mar 2022 15:13:46 +0800",
"msg_from": "Dongming Liu <ldming101@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 8:14 PM Dongming Liu <ldming101@gmail.com> wrote:\n> On Fri, Mar 18, 2022 at 3:30 PM Dongming Liu <ldming101@gmail.com> wrote:\n>> I'm trying to move segments into appropriate bins in dsa_free().\n>> In 0001-Re-bin-segment-when-dsa-memory-is-freed.patch, I extract\n>> the re-bin segment logic into a separate function called rebin_segment\n>> and call it to move the segment to the appropriate bin when dsa memory is\n>> freed. Otherwise, when allocating memory, a suitable segment may not be\n>> found because the segment with enough contiguous pages is in a smaller bin.\n>>\n>> For testing, I ported the test_dsa patch from [1] and added an OOM case\n>> that allocates memory until OOM, frees it all, then allocates again,\n>> comparing the amount of memory allocated before and after.\n\nHi Dongming,\n\nThanks for the report, and for working on the fix. Can you please\ncreate a commitfest entry (if you haven't already)? I plan to look at\nthis soon, after the code freeze.\n\nAre you proposing that the test_dsa module should be added to the\ntree? If so, some trivial observations: \"#ifndef\nHAVE_INT64_TIMESTAMP\" isn't needed anymore (see commit b6aa17e0, which\nis in all supported branches), the year should be updated, and we use\nsize_t instead of Size in new code.\n\n\n",
"msg_date": "Mon, 28 Mar 2022 20:53:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 3:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Hi Dongming,\n>\n> Thanks for the report, and for working on the fix. Can you please\n> create a commitfest entry (if you haven't already)? I plan to look at\n> this soon, after the code freeze.\n\nI created a commitfest entry https://commitfest.postgresql.org/38/3607/.\nThanks for your review.\n\nAre you proposing that the test_dsa module should be added to the\n> tree? If so, some trivial observations: \"#ifndef\n> HAVE_INT64_TIMESTAMP\" isn't needed anymore (see commit b6aa17e0, which\n> is in all supported branches), the year should be updated, and we use\n> size_t instead of Size in new code.\n>\nYes, I think test_dsa is very helpful and necessary to develop dsa related\nfeatures. I have removed the HAVE_INT64_TIMESTAMP related code.\nMost of the code for test_dsa comes from your patch[1] and I add some\ntest cases.\n\nIn addition, I add a few OOM test cases that allocate a fixed size of\nmemory\nuntil the memory overflows, run it twice and compare the amount of memory\nthey allocate. These cases will fail on the current master branch.\n\n[1]\nhttps://www.postgresql.org/message-id/CAEepm%3D3U7%2BRo7%3DECeQuAZoeFXs8iDVX56NXGCV7z3%3D%2BH%2BWd0Sw%40mail.gmail.com\n-- \nBest Regards,\nDongming",
"msg_date": "Wed, 6 Apr 2022 15:10:39 +0800",
"msg_from": "Dongming Liu <ldming101@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Thanks for the report, and for working on the fix. Can you please\n> create a commitfest entry (if you haven't already)? I plan to look at\n> this soon, after the code freeze.\n\nHi Thomas, are you still intending to look at this DSA bug fix?\nIt's been sitting idle for months.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Jan 2023 17:44:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Thanks for the report, and for working on the fix. Can you please\n> > create a commitfest entry (if you haven't already)? I plan to look at\n> > this soon, after the code freeze.\n>\n> Hi Thomas, are you still intending to look at this DSA bug fix?\n> It's been sitting idle for months.\n\nYeah. I think the analysis looks good, but I'll do some testing next\nweek with the aim of getting it committed. Looks like it now needs\nMeson changes, but I'll look after that as my penance.\n\n\n",
"msg_date": "Fri, 20 Jan 2023 23:02:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 11:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Yeah. I think the analysis looks good, but I'll do some testing next\n> week with the aim of getting it committed. Looks like it now needs\n> Meson changes, but I'll look after that as my penance.\n\nHere's an updated version that I'm testing...\n\nChanges to the main patch:\n\n* Adjust a few comments\n* pgindent\n* Explained a bit more in the commit message\n\nI'm wondering about this bit in rebin_segment():\n\n+ if (segment_map->header == NULL)\n+ return;\n\nWhy would we be rebinning an uninitialised/unused segment? Does\nsomething in your DSA-client code (I guess you have an extension?) hit\nthis case? The tests certainly don't; I'm not sure how the case could\nbe reached.\n\nChanges to the test:\n\n* Update copyright year\n* Size -> size_t\n* pgindent\n* Add Meson glue\n* Re-alphabetise the makefile\n* Make sure we get BGWH_STOPPED while waiting for bgworkers to exit\n* Background worker main function return type is fixed (void)\n* results[1] -> results[FLEXIBLE_ARRAY_MEMBER]\n* getpid() -> MyProcPid\n\nI wonder if this code would be easier to understand, but not\nmaterially less efficient, if we re-binned eagerly when allocating\ntoo, so the bin is always correct/optimal. Checking fpm_largest()\nagain after allocating should be cheap, I guess (it just reads a\nmember variable that we already paid the cost of maintaining). We\ndon't really seem to amortise much, we just transfer the rebinning\nwork to the next caller to consider the segment. I haven't tried out\nthat theory though.",
"msg_date": "Mon, 20 Feb 2023 17:52:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 5:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'm wondering about this bit in rebin_segment():\n>\n> + if (segment_map->header == NULL)\n> + return;\n>\n> Why would we be rebinning an uninitialised/unused segment?\n\nAnswering my own question: because destroy_superblock() can do that.\nSo I think destroy_superblock() should test for that case, not\nrebin_segment(). See attached.",
"msg_date": "Wed, 14 Jun 2023 12:29:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DSA failed to allocate memory"
},
{
"msg_contents": "Pushed.\n\nI wasn't sure it was worth keeping the test in the tree. It's here in\nthe mailing list archives for future reference.\n\n\n",
"msg_date": "Tue, 4 Jul 2023 16:22:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DSA failed to allocate memory"
}
] |
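The bin arithmetic at the heart of the bug in the thread above can be replayed in a few lines. This sketch assumes the constants the thread quotes from dsa.c (16 segment bins, `fls()` as the 1-based index of the highest set bit); it shows why a request needing 16 contiguous pages starts scanning at bin 5 and therefore never sees the reporter's segment, which had 252 contiguous free pages but had been left in bin 4:

```python
DSA_NUM_SEGMENT_BINS = 16   # assumed to mirror the constant in dsa.c

def fls(n):
    # "find last set": 1-based index of the highest set bit
    return n.bit_length()

def contiguous_pages_to_segment_bin(n):
    # lowest bin that *might* hold n contiguous free pages (quoted macro)
    return min(fls(n), DSA_NUM_SEGMENT_BINS - 1)

# An allocation needing a 16-page superblock scans bins 5..15 ...
start_bin = contiguous_pages_to_segment_bin(16)      # fls(16) == 5

# ... but after the frees, the segment with 252 contiguous free pages
# was still sitting in bin 4, below the search range, so
# get_best_segment returned NULL despite plenty of free space.
stale_bin = 4
proper_bin = contiguous_pages_to_segment_bin(252)    # fls(252) == 8

print(start_bin, proper_bin, stale_bin < start_bin)  # 5 8 True
```

The committed fix (calling `rebin_segment()` from the free path) keeps each segment's bin in step with its largest free run, so the scan's lower bound stays a valid optimization rather than a way to miss usable segments.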
[
{
"msg_contents": "Server-side gzip compression.\n\npg_basebackup's --compression option now lets you write either\n\"client-gzip\" or \"server-gzip\" instead of just \"gzip\" to specify\nwhere the compression should be performed. If you write simply\n\"gzip\" it's taken to mean \"client-gzip\" unless you also use\n--target, in which case it is interpreted to mean \"server-gzip\",\nbecause that's the only thing that makes any sense in that case.\n\nTo make this work, the BASE_BACKUP command now takes new\nCOMPRESSION and COMPRESSION_LEVEL options.\n\nAt present, pg_basebackup cannot decompress .gz files, so\nserver-side compression will cause a failure if (1) -Ft is not\nused or (2) -R is used or (3) -D- is used without --no-manifest.\n\nAlong the way, I removed the information message added by commit\n5c649fe153367cdab278738ee4aebbfd158e0546 which occurred if you\nspecified no compression level and told you that the default level\nhad been used instead. That seemed like more output than most\npeople would want.\n\nAlso along the way, this adds a check to the server for\nunrecognized base backup options. This repairs a bug introduced\nby commit 0ba281cb4bf9f5f65529dfa4c8282abb734dd454.\n\nThis commit also adds some new test cases for pg_verifybackup.\nThey take a server-side backup with and without compression, and\nthen extract the backup if we have the OS facilities available\nto do so, and then run pg_verifybackup on the extracted\ndirectory. That is a good test of the functionality added by\nthis commit and also improves test coverage for the backup target\npatch (commit 3500ccc39b0dadd1068a03938e4b8ff562587ccc) and for\npg_verifybackup itself.\n\nPatch by me, with a bug fix by Jeevan Ladhe. 
The patch set of which\nthis is a part has also had review and/or testing from Tushar Ahuja,\nSuraj Kharage, Dipesh Pandit, and Mark Dilger.\n\nDiscussion: http://postgr.es/m/CA+Tgmoa-ST7fMLsVJduOB7Eub=2WjfpHS+QxHVEpUoinf4bOSg@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/0ad8032910d5eb8efd32867c45b6a25c85e60f50\n\nModified Files\n--------------\ndoc/src/sgml/protocol.sgml | 22 +++\ndoc/src/sgml/ref/pg_basebackup.sgml | 29 ++-\nsrc/backend/Makefile | 2 +-\nsrc/backend/replication/Makefile | 1 +\nsrc/backend/replication/basebackup.c | 54 ++++++\nsrc/backend/replication/basebackup_gzip.c | 309 ++++++++++++++++++++++++++++++\nsrc/bin/pg_basebackup/pg_basebackup.c | 136 +++++++++++--\nsrc/bin/pg_verifybackup/Makefile | 7 +\nsrc/bin/pg_verifybackup/t/008_untar.pl | 104 ++++++++++\nsrc/include/replication/basebackup_sink.h | 1 +\n10 files changed, 641 insertions(+), 24 deletions(-)",
"msg_date": "Mon, 24 Jan 2022 20:14:33 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Server-side gzip compression."
},
{
"msg_contents": "On Tue, 25 Jan 2022 at 09:14, Robert Haas <rhaas@postgresql.org> wrote:\n> src/backend/replication/basebackup_gzip.c | 309 ++++++++++++++++++++++++++++++\n\nThis could do with the attached. MSVC compilers need a bit more\nreassurance that ereport/elog ERRORs don't return.\n\nDavid",
"msg_date": "Tue, 25 Jan 2022 22:19:58 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Server-side gzip compression."
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 4:20 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Tue, 25 Jan 2022 at 09:14, Robert Haas <rhaas@postgresql.org> wrote:\n> > src/backend/replication/basebackup_gzip.c | 309 ++++++++++++++++++++++++++++++\n>\n> This could do with the attached. MSVC compilers need a bit more\n> reassurance that ereport/elog ERRORs don't return.\n\nErr, well, if we need it, we need it. It surprises me, though:\nwouldn't this same consideration apply to a very large number of other\nplaces in the code base?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jan 2022 13:12:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Server-side gzip compression."
},
{
"msg_contents": "On Wed, 26 Jan 2022 at 07:12, Robert Haas <robertmhaas@gmail.com> wrote:\n> wouldn't this same consideration apply to a very large number of other\n> places in the code base?\n\nAll of the other places are handled. See locations with \"keep compiler quiet\".\n\nThis one is the only one that generates a warning:\n\nbasebackup_gzip.c(90): warning C4715: 'bbsink_gzip_new': not all\ncontrol paths return a value\n\nDavid\n\n\n",
"msg_date": "Wed, 26 Jan 2022 09:56:10 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Server-side gzip compression."
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 3:56 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 26 Jan 2022 at 07:12, Robert Haas <robertmhaas@gmail.com> wrote:\n> > wouldn't this same consideration apply to a very large number of other\n> > places in the code base?\n>\n> All of the other places are handled. See locations with \"keep compiler quiet\".\n>\n> This one is the only one that generates a warning:\n>\n> basebackup_gzip.c(90): warning C4715: 'bbsink_gzip_new': not all\n> control paths return a value\n\nOK. I'm still surprised, but it is what it is. I've committed this now.\n\nFWIW, I would have been fine with you just committing this change. I\ncan't see the warning locally, so I'm not in a position to\nsecond-guess your statement that it's needed.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 Jan 2022 14:43:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Server-side gzip compression."
},
{
"msg_contents": "On Fri, 28 Jan 2022 at 08:44, Robert Haas <robertmhaas@gmail.com> wrote:\n> OK. I'm still surprised, but it is what it is. I've committed this now.\n\nThanks\n\n> FWIW, I would have been fine with you just committing this change.\n\nThat's good to know, thanks for mentioning it. FWIW, I just held back\nas I wasn't 100% sure on the etiquette.\n\nDavid\n\n\n",
"msg_date": "Mon, 31 Jan 2022 17:07:47 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Server-side gzip compression."
}
] |
[
{
"msg_contents": "Hi,\n\nRight now we run tap tests separately in each directory. Which is one of the\nreasons the make output is so unreadable - instead of having one 'prove'\noutput listing all tests, we get a lot of different prove outputs, all\ninterspersed. And it severely limits parallelism on windows right now.\n\nIt's currently not possible to just run all tap tests in one prove run,\nbecause a number of tests assume that they are run from specific directories\nand/or with per-directory parameters.\n\nFor meson I \"solved\" this by running each individual test in a wrapper that\nchanges directory etc. But that's not really a great approach.\n\n\nTo me it seems we should change our tests and test invocations to not depend\non being run from a specific directory and to unify the environment variables\npassed to tap tests to one set somewhere central.\n\nI think this would require:\n\n1) Moving handling of PG_TEST_EXTRA into the tap tests themselves. That's a\n good idea imo, because there's then output explaining that some tests\n aren't run.\n\n2) teach tap test infrastructure to add the directory containing the test to\n the perl search path, so that modules like RewindTest.pm can be found.\n\n3) teach tap test infrastructure to compute the output directory for a\n specific test from the file location of the test itself and a parameter\n like top_builddir.\n\n4) Pass env variables like LZ4, TAR, GZIP_PROGRAM by a mechanism other than\n exports in individual makefiles. Maybe just generating a perl file at\n configure / mkvcbuild.pl / meson setup time?\n\nWhile far from a small amount of work, it does seem doable? A good number of\ntap tests already pass this way, btw, just not all.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Jan 2022 12:35:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "TAP tests, directories and parameters"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's currently not possible to just run all tap tests in one prove run,\n> because a number of tests assume that they are run from specific directories\n> and/or with per-directory parameters.\n> For meson I \"solved\" this by running each individual test in a wrapper that\n> changes directory etc. But that's not really a great approach.\n> To me it seems we should change our tests and test invocations to not depend\n> on being run from a specific directory and to unify the environment variables\n> passed to tap tests to one set somewhere central.\n\nI'd be sad if this implied that running \"make [install]check\" in a\nparticular subdirectory no longer runs just that directory's tests.\nOtherwise, sounds fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jan 2022 15:43:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests, directories and parameters"
}
] |
[
{
"msg_contents": "Hi Juan José,\n\nI a bit tested this feature and have small doubts about block:\n\n+/*\n+ * Windows will use hyphens between language and territory, where POSIX\n+ * uses an underscore. Simply make it POSIX looking.\n+ */\n+ hyphen = strchr(localebuf, '-');\n+ if (hyphen)\n+ *hyphen = '_';\n\nAfter this block modified collation name is used in function\n\nGetNLSVersionEx(COMPARE_STRING, wide_collcollate, &version)\n\n(see win32_read_locale() -> CollationFromLocale() -> CollationCreate()\ncall). Is it correct to use (wide_collcollate = \"en_NZ\") instead of\n(wide_collcollate = \"en-NZ\") in GetNLSVersionEx() function?\n\n1) Documentation [1], [2], quote:\nIf it is a neutral locale for which the script is significant,\nthe pattern is <language>-<Script>.\n\n2) Conversation [3], David Rowley, quote:\nThen, since GetNLSVersionEx()\nwants yet another variant with a - rather than an _, I've just added a\ncouple of lines to swap the _ for a -.\n\n\nOn my computer (Windows 10 Pro 21H2 19044.1466, MSVC2019 version\n16.11.9) work correctly both variants (\"en_NZ\", \"en-NZ\").\n\nBut David Rowley (MSVC2010 and MSVC2017) replaced \"_\" to \"-\"\nfor the same function. Maybe he had a problem with \"_\" on MSVC2010 or \nMSVC2017?\n\n[1] \nhttps://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getnlsversionex\n[2] https://docs.microsoft.com/en-us/windows/win32/intl/locale-names\n[3] \nhttps://www.postgresql.org/message-id/flat/CAApHDvq3FXpH268rt-6sD_Uhe7Ekv9RKXHFvpv%3D%3Duh4c9OeHHQ%40mail.gmail.com\n\nWith best regards,\nDmitry Koval.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 00:23:38 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 24.01.22 22:23, Dmitry Koval wrote:\n> +/*\n> + * Windows will use hyphens between language and territory, where POSIX\n> + * uses an underscore. Simply make it POSIX looking.\n> + */\n> + hyphen = strchr(localebuf, '-');\n> + if (hyphen)\n> + *hyphen = '_';\n> \n> After this block modified collation name is used in function\n> \n> GetNLSVersionEx(COMPARE_STRING, wide_collcollate, &version)\n> \n> (see win32_read_locale() -> CollationFromLocale() -> CollationCreate()\n> call). Is it correct to use (wide_collcollate = \"en_NZ\") instead of\n> (wide_collcollate = \"en-NZ\") in GetNLSVersionEx() function?\n\nI don't really know if this is necessary anyway. Just create the \ncollations with the names that the operating system presents. There is \nno requirement to make the names match POSIX.\n\nIf you want to make them match POSIX for some reason, you can also just \nchange the object name but leave the collcollate/collctype fields the \nway they came from the OS.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 11:40:45 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 11:40 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 24.01.22 22:23, Dmitry Koval wrote:\n>\n\nThanks for looking into this.\n\n\n> > +/*\n> > + * Windows will use hyphens between language and territory, where POSIX\n> > + * uses an underscore. Simply make it POSIX looking.\n> > + */\n> > + hyphen = strchr(localebuf, '-');\n> > + if (hyphen)\n> > + *hyphen = '_';\n> >\n> > After this block modified collation name is used in function\n> >\n> > GetNLSVersionEx(COMPARE_STRING, wide_collcollate, &version)\n> >\n> > (see win32_read_locale() -> CollationFromLocale() -> CollationCreate()\n> > call). Is it correct to use (wide_collcollate = \"en_NZ\") instead of\n> > (wide_collcollate = \"en-NZ\") in GetNLSVersionEx() function?\n>\n\nThe problem that David Rowley addressed was coming from Windows collations\nin the shape of \"English_New Zealand\", GetNLSVersionEx() will work with\nboth \"en_NZ\" and \"en-NZ\". You can check collversion in pg_collation in the\npatched version.\n\n>\n> I don't really know if this is necessary anyway. Just create the\n> collations with the names that the operating system presents. There is\n> no requirement to make the names match POSIX.\n>\n> If you want to make them match POSIX for some reason, you can also just\n> change the object name but leave the collcollate/collctype fields the\n> way they came from the OS.\n>\n\nI think there is some value in making collation names consistent across\ndifferent platforms, e.g. making user scripts more portable. So, I'm doing\nthat in the attached version, just changing the object name.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 25 Jan 2022 15:49:01 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-25 15:49:01 +0100, Juan Jos� Santamar�a Flecha wrote:\n> So, I'm doing that in the attached version, just changing the object name.\n\nCurrently fails to apply, please rebase: http://cfbot.cputube.org/patch_37_3450.log\n\nMarked as waiting-on-author.\n\n- Andres\n\n\n",
"msg_date": "Mon, 21 Mar 2022 18:00:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 2:00 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Currently fails to apply, please rebase:\n> http://cfbot.cputube.org/patch_37_3450.log\n>\n> Marked as waiting-on-author.\n>\n> Please, find attached a rebased version, no other significant change.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 11 Apr 2022 14:20:30 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "Please find attached a rebased version. I have split the patch into two\nparts trying to make it easier to review, one with the code changes and the\nother with the test.\n\nOther than that, there are minimal changes from the previous version to the\ncode due to the update of _WIN32_WINNT and enabling the test on cirrus.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>",
"msg_date": "Tue, 12 Jul 2022 21:32:33 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 12.07.22 21:32, Juan José Santamaría Flecha wrote:\n> Please find attached a rebased version. I have split the patch into two \n> parts trying to make it easier to review, one with the code changes and \n> the other with the test.\n> \n> Other than that, there are minimal changes from the previous version to \n> the code due to the update of _WIN32_WINNT and enabling the test on cirrus.\n\nI'm not familiar with Windows, so I'm just looking at the overall \nstructure of this patch. I think it pretty much makes sense. But we \nneed to consider that this operates on the confluence of various \ndifferent operating system interfaces that not all people will be \nfamiliar with, so we need to really get the documentation done well.\n\nConsider this function you are introducing:\n\n+/*\n+ * Create a collation if the input locale is valid for so.\n+ * Also keeps track of the number of valid locales and collations created.\n+ */\n+static int\n+CollationFromLocale(char *isolocale, char *localebuf, int *nvalid,\n+ int *ncreated, int nspid)\n\nThis declaration is incomprehensible without studying all the callers \nand the surrounding code.\n\nStart with the name: What does \"collation from locale\" mean? Does it \nmake a collation? Does it convert one? Does it find one? There should \nbe a verb in there.\n\n(I think in the context of this file, a lower case name would be more \nappropriate for a static function.)\n\nThen the arguments. The input arguments should be \"const\". All the \narguments should be documented. What is \"isolocale\", what is \n\"localebuf\", how are they different? What is being counted by \"valid\" \n(collatons?, locales?), and what makes a thing valid and invalid? What \nis being \"created\"? What is nspid? 
What is the return value?\n\nPlease make another pass over this.\n\nAlso consider describing in the commit message what you are doing in \nmore detail, including some of the things that have been discussed in \nthis thread.\n\n\n\n",
"msg_date": "Mon, 31 Oct 2022 15:09:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 3:09 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\nThanks for taking a look into this patch.\n\n>\n> Consider this function you are introducing:\n>\n> +/*\n> + * Create a collation if the input locale is valid for so.\n> + * Also keeps track of the number of valid locales and collations created.\n> + */\n> +static int\n> +CollationFromLocale(char *isolocale, char *localebuf, int *nvalid,\n> + int *ncreated, int nspid)\n>\n> This declaration is incomprehensible without studying all the callers\n> and the surrounding code.\n>\n> Start with the name: What does \"collation from locale\" mean? Does it\n> make a collation? Does it convert one? Does it find one? There should\n> be a verb in there.\n>\n> (I think in the context of this file, a lower case name would be more\n> appropriate for a static function.)\n>\n> Then the arguments. The input arguments should be \"const\". All the\n> arguments should be documented. What is \"isolocale\", what is\n> \"localebuf\", how are they different? What is being counted by \"valid\"\n> (collatons?, locales?), and what makes a thing valid and invalid? What\n> is being \"created\"? What is nspid? What is the return value?\n>\n> Please make another pass over this.\n>\n> Ok, I can definitely improve the comments for that function.\n\n\n> Also consider describing in the commit message what you are doing in\n> more detail, including some of the things that have been discussed in\n> this thread.\n>\n> Going through the thread for the commit message, I think that maybe the\ncollation naming remarks were not properly addressed. In the current\nversion the collations retain their native name, but an alias is created\nfor those with a shape that we can assume a POSIX equivalent exists.\n\nPlease find attached a new version.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 4 Nov 2022 23:08:24 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 04.11.22 23:08, Juan José Santamaría Flecha wrote:\n> Ok, I can definitely improve the comments for that function.\n> \n> Also consider describing in the commit message what you are doing in\n> more detail, including some of the things that have been discussed in\n> this thread.\n> \n> Going through the thread for the commit message, I think that maybe the \n> collation naming remarks were not properly addressed. In the current \n> version the collations retain their native name, but an alias is created \n> for those with a shape that we can assume a POSIX equivalent exists.\n\nThis looks pretty good to me. The refactoring of the non-Windows parts \nmakes sense. The Windows parts look reasonable on manual inspection, \nbut again, I don't have access to Windows here, so someone else should \nalso look it over.\n\nA small style issue: Change return (TRUE) to return TRUE.\n\nThe code\n\n+ if (strlen(localebuf) == 5 && localebuf[2] == '-')\n\nmight be too specific. At least on some POSIX systems, I have seen \nlocales with a three-letter language name. Maybe you should look with \nstrchr() and not be too strict about the exact position.\n\nFor the test patch, why is a separate test for non-UTF8 needed on \nWindows. Does the UTF8 one not work?\n\n+ version() !~ 'Visual C\\+\\+'\n\nThis probably won't work for MinGW.\n\n\n\n",
"msg_date": "Mon, 7 Nov 2022 16:08:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 4:08 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n>\n> This looks pretty good to me. The refactoring of the non-Windows parts\n> makes sense. The Windows parts look reasonable on manual inspection,\n> but again, I don't have access to Windows here, so someone else should\n> also look it over.\n>\n> I was going to say that at least it is getting tested on the CI, but I\nhave found out that meson changes version(). That is fixed in this version.\n\n\n> A small style issue: Change return (TRUE) to return TRUE.\n>\n> Fixed.\n\n\n> The code\n>\n> + if (strlen(localebuf) == 5 && localebuf[2] == '-')\n>\n> might be too specific. At least on some POSIX systems, I have seen\n> locales with a three-letter language name. Maybe you should look with\n> strchr() and not be too strict about the exact position.\n>\n> Ok, in this version the POSIX alias is created unconditionally.\n\n\n> For the test patch, why is a separate test for non-UTF8 needed on\n> Windows. Does the UTF8 one not work?\n>\n> Windows locales will retain their CP_ACP encoding unless you change the OS\ncode page to UFT8, which is still experimental [1].\n\n\n> + version() !~ 'Visual C\\+\\+'\n>\n> This probably won't work for MinGW.\n>\n> When I proposed this patch it wouldn't have worked because of the\nproject's Windows minimum version requirement, now it should work in MinGW.\nIt actually doesn't because most locales are failing with \"skipping locale\nwith unrecognized encoding\", but checking what's wrong\nwith pg_get_encoding_from_locale() in MiNGW is subject for another thread.\n\n[1]\nhttps://stackoverflow.com/questions/56419639/what-does-beta-use-unicode-utf-8-for-worldwide-language-support-actually-do\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 9 Nov 2022 00:02:39 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 12:02 AM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n> On Mon, Nov 7, 2022 at 4:08 PM Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n>\n>>\n>> This looks pretty good to me. The refactoring of the non-Windows parts\n>> makes sense. The Windows parts look reasonable on manual inspection,\n>> but again, I don't have access to Windows here, so someone else should\n>> also look it over.\n>>\n>> I was going to say that at least it is getting tested on the CI, but I\n> have found out that meson changes version(). That is fixed in this version.\n>\n\nNow is currently failing due to [1], so maybe we can leave this patch on\nhold until that's addressed.\n\n[1]\nhttps://www.postgresql.org/message-id/CAC%2BAXB1wJEqfKCuVcNpoH%3Dgxd61N%3D7c2fR3Ew6YRPpSfEUA%3DyQ%40mail.gmail.com\n\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Nov 9, 2022 at 12:02 AM Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:On Mon, Nov 7, 2022 at 4:08 PM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\nThis looks pretty good to me. The refactoring of the non-Windows parts \nmakes sense. The Windows parts look reasonable on manual inspection, \nbut again, I don't have access to Windows here, so someone else should \nalso look it over.\nI was going to say that at least it is getting tested on the CI, but I have found out that meson changes version(). That is fixed in this version.Now is currently failing due to [1], so maybe we can leave this patch on hold until that's addressed. [1] https://www.postgresql.org/message-id/CAC%2BAXB1wJEqfKCuVcNpoH%3Dgxd61N%3D7c2fR3Ew6YRPpSfEUA%3DyQ%40mail.gmail.com Regards,Juan José Santamaría Flecha",
"msg_date": "Thu, 10 Nov 2022 11:08:32 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 10.11.22 11:08, Juan José Santamaría Flecha wrote:\n> This looks pretty good to me. The refactoring of the\n> non-Windows parts\n> makes sense. The Windows parts look reasonable on manual\n> inspection,\n> but again, I don't have access to Windows here, so someone else\n> should\n> also look it over.\n> \n> I was going to say that at least it is getting tested on the CI, but\n> I have found out that meson changes version(). That is fixed in this\n> version.\n> \n> \n> Now is currently failing due to [1], so maybe we can leave this patch on \n> hold until that's addressed.\n> \n> [1] \n> https://www.postgresql.org/message-id/CAC%2BAXB1wJEqfKCuVcNpoH%3Dgxd61N%3D7c2fR3Ew6YRPpSfEUA%3DyQ%40mail.gmail.com <https://www.postgresql.org/message-id/CAC%2BAXB1wJEqfKCuVcNpoH%3Dgxd61N%3D7c2fR3Ew6YRPpSfEUA%3DyQ%40mail.gmail.com>\n\nWhat is the status of this now? I think the other issue has been addressed?\n\n\n\n",
"msg_date": "Thu, 1 Dec 2022 08:46:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 8:46 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n>\n> What is the status of this now? I think the other issue has been\n> addressed?\n>\n\nYes, that's addressed for MSVC builds. I think there are a couple of\npending issues for MinGW, but those should have their own threads.\n\nThe patch had rotten, so PFA a rebased version.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 9 Dec 2022 13:48:53 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 09.12.22 13:48, Juan José Santamaría Flecha wrote:\n> On Thu, Dec 1, 2022 at 8:46 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> \n> \n> What is the status of this now? I think the other issue has been\n> addressed?\n> \n> \n> Yes, that's addressed for MSVC builds. I think there are a couple of \n> pending issues for MinGW, but those should have their own threads.\n> \n> The patch had rotten, so PFA a rebased version.\n\ncommitted\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 14:48:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 2023-01-03 Tu 08:48, Peter Eisentraut wrote:\n> On 09.12.22 13:48, Juan José Santamaría Flecha wrote:\n>> On Thu, Dec 1, 2022 at 8:46 AM Peter Eisentraut \n>> <peter.eisentraut@enterprisedb.com \n>> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n>>\n>>\n>> What is the status of this now? I think the other issue has been\n>> addressed?\n>>\n>>\n>> Yes, that's addressed for MSVC builds. I think there are a couple of \n>> pending issues for MinGW, but those should have their own threads.\n>>\n>> The patch had rotten, so PFA a rebased version.\n>\n> committed\n>\n>\n\nNow that I have removed the barrier to testing this in the buildfarm, \nand added an appropriate locale setting to drongo, we can see that this \ntest fails like this:\n\n\ndiff -w -U3 c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\n--- c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out\t2023-01-23 04:39:06.755149600 +0000\n+++ c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\t2023-02-26 17:32:54.115515200 +0000\n@@ -363,16 +363,17 @@\n \n -- to_char\n SET lc_time TO 'de_DE';\n+ERROR: invalid value for parameter \"lc_time\": \"de_DE\"\n SELECT to_char(date '2010-03-01', 'DD TMMON YYYY');\n to_char\n -------------\n- 01 MRZ 2010\n+ 01 MAR 2010\n (1 row)\n \n SELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n to_char\n -------------\n- 01 MRZ 2010\n+ 01 MAR 2010\n (1 row)\n \n -- to_date\n\n\nThe last of these is especially an issue, as it doesn't even throw an error.\n\nSee \n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-02-26%2016%3A56%3A30>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-01-03 Tu 08:48, Peter\n Eisentraut wrote:\n\nOn\n 09.12.22 13:48, Juan José Santamaría Flecha 
wrote:\n \nOn Thu, Dec 1, 2022 at 8:46 AM Peter\n Eisentraut <peter.eisentraut@enterprisedb.com\n<mailto:peter.eisentraut@enterprisedb.com>> wrote:\n \n\n\n What is the status of this now? I think the other issue has\n been\n \n addressed?\n \n\n\n Yes, that's addressed for MSVC builds. I think there are a\n couple of pending issues for MinGW, but those should have\n their own threads.\n \n\n The patch had rotten, so PFA a rebased version.\n \n\n\n committed\n \n\n\n\n\n\nNow that I have removed the barrier to testing this in the\n buildfarm, and added an appropriate locale setting to drongo, we\n can see that this test fails like this:\n\n\ndiff -w -U3 c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\n--- c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out\t2023-01-23 04:39:06.755149600 +0000\n+++ c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\t2023-02-26 17:32:54.115515200 +0000\n@@ -363,16 +363,17 @@\n \n -- to_char\n SET lc_time TO 'de_DE';\n+ERROR: invalid value for parameter \"lc_time\": \"de_DE\"\n SELECT to_char(date '2010-03-01', 'DD TMMON YYYY');\n to_char\n -------------\n- 01 MRZ 2010\n+ 01 MAR 2010\n (1 row)\n \n SELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n to_char\n -------------\n- 01 MRZ 2010\n+ 01 MAR 2010\n (1 row)\n \n -- to_date\n\n\nThe last of these is especially an issue, as it doesn't even\n throw an error.\nSee\n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-02-26%2016%3A56%3A30>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 26 Feb 2023 16:02:38 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 2023-02-26 Su 16:02, Andrew Dunstan wrote:\n>\n>\n> On 2023-01-03 Tu 08:48, Peter Eisentraut wrote:\n>> On 09.12.22 13:48, Juan José Santamaría Flecha wrote:\n>>> On Thu, Dec 1, 2022 at 8:46 AM Peter Eisentraut \n>>> <peter.eisentraut@enterprisedb.com \n>>> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n>>>\n>>>\n>>> What is the status of this now? I think the other issue has been\n>>> addressed?\n>>>\n>>>\n>>> Yes, that's addressed for MSVC builds. I think there are a couple of \n>>> pending issues for MinGW, but those should have their own threads.\n>>>\n>>> The patch had rotten, so PFA a rebased version.\n>>\n>> committed\n>>\n>>\n>\n> Now that I have removed the barrier to testing this in the buildfarm, \n> and added an appropriate locale setting to drongo, we can see that \n> this test fails like this:\n>\n>\n> diff -w -U3 c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\n> --- c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out\t2023-01-23 04:39:06.755149600 +0000\n> +++ c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\t2023-02-26 17:32:54.115515200 +0000\n> @@ -363,16 +363,17 @@\n> \n> -- to_char\n> SET lc_time TO 'de_DE';\n> +ERROR: invalid value for parameter \"lc_time\": \"de_DE\"\n> SELECT to_char(date '2010-03-01', 'DD TMMON YYYY');\n> to_char\n> -------------\n> - 01 MRZ 2010\n> + 01 MAR 2010\n> (1 row)\n> \n> SELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n> to_char\n> -------------\n> - 01 MRZ 2010\n> + 01 MAR 2010\n> (1 row)\n> \n> -- to_date\n>\n>\n> The last of these is especially an issue, as it doesn't even throw an \n> error.\n>\n> See \n> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-02-26%2016%3A56%3A30>\n>\n>\n>\n\n\nFurther investigation shows that if we change the two 
instances of \n\"de_DE\" to \"de-DE\" the tests behave as expected, so it appears that \nwhile POSIX style aliases have been created for the BCP 47 style \nlocales, using the POSIX aliases doesn't in fact work. I cant see \nanything that turns the POSIX locale name back into BCP 47 at the point \nof use, which seems to be what's needed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 27 Feb 2023 07:09:57 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 1:10 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 2023-02-26 Su 16:02, Andrew Dunstan wrote:\n>\n> Now that I have removed the barrier to testing this in the buildfarm, and\n> added an appropriate locale setting to drongo, we can see that this test\n> fails like this:\n>\n>\n> diff -w -U3 c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\n> --- c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/expected/collate.windows.win1252.out\t2023-01-23 04:39:06.755149600 +0000\n> +++ c:/prog/bf/root/HEAD/pgsql.build/src/test/regress/results/collate.windows.win1252.out\t2023-02-26 17:32:54.115515200 +0000\n> @@ -363,16 +363,17 @@\n>\n> -- to_char\n> SET lc_time TO 'de_DE';\n> +ERROR: invalid value for parameter \"lc_time\": \"de_DE\"\n> SELECT to_char(date '2010-03-01', 'DD TMMON YYYY');\n> to_char\n> -------------\n> - 01 MRZ 2010\n> + 01 MAR 2010\n> (1 row)\n>\n> SELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n> to_char\n> -------------\n> - 01 MRZ 2010\n> + 01 MAR 2010\n> (1 row)\n>\n> -- to_date\n>\n>\n> The last of these is especially an issue, as it doesn't even throw an\n> error.\n>\n> See\n> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-02-26%2016%3A56%3A30>\n> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-02-26%2016%3A56%3A30>\n>\n>\n> Further investigation shows that if we change the two instances of \"de_DE\"\n> to \"de-DE\" the tests behave as expected, so it appears that while POSIX\n> style aliases have been created for the BCP 47 style locales, using the\n> POSIX aliases doesn't in fact work. 
I cant see anything that turns the\n> POSIX locale name back into BCP 47 at the point of use, which seems to be\n> what's needed.\n>\n\nThe command that's failing is \"SET lc_time TO 'de_DE';\", and that area of\ncode is untouched by this patch. As mentioned in [1], the problem seems to\ncome from a Windows bug that the CI images and my development machines have\npatched out.\n\nI think we should change the locale name to make the test more robust, as\nthe attached. But I don't see a problem with making an alias for the\ncollations.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 27 Feb 2023 23:05:23 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "El lun, 27 feb 2023, 23:05, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> escribió:\n\n>\n> The command that's failing is \"SET lc_time TO 'de_DE';\", and that area of\n> code is untouched by this patch. As mentioned in [1], the problem seems to\n> come from a Windows bug that the CI images and my development machines have\n> patched out.\n>\n\nWhat I wanted to post as [1]:\n\nhttps://www.postgresql.org/message-id/CAC%2BAXB1agvrgpyHEfqbDr2MOpcON3d%2BWYte_SLzn1E4TamLs9g%40mail.gmail.com\n\n\n> Regards,\n>\n> Juan José Santamaría Flecha\n>\n\nEl lun, 27 feb 2023, 23:05, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> escribió: The command that's failing is \"SET lc_time TO 'de_DE';\", and that area of code is untouched by this patch. As mentioned in [1], the problem seems to come from a Windows bug that the CI images and my development machines have patched out.What I wanted to post as [1]:https://www.postgresql.org/message-id/CAC%2BAXB1agvrgpyHEfqbDr2MOpcON3d%2BWYte_SLzn1E4TamLs9g%40mail.gmail.comRegards,Juan José Santamaría Flecha",
"msg_date": "Mon, 27 Feb 2023 23:20:51 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 2023-02-27 Mo 17:20, Juan José Santamaría Flecha wrote:\n>\n>\n> El lun, 27 feb 2023, 23:05, Juan José Santamaría Flecha \n> <juanjo.santamaria@gmail.com> escribió:\n>\n>\n> The command that's failing is \"SET lc_time TO 'de_DE';\", and that\n> area of code is untouched by this patch. As mentioned in [1],\n> the problem seems to come from a Windows bug that the CI images\n> and my development machines have patched out.\n>\n>\n> What I wanted to post as [1]:\n>\n> https://www.postgresql.org/message-id/CAC%2BAXB1agvrgpyHEfqbDr2MOpcON3d%2BWYte_SLzn1E4TamLs9g%40mail.gmail.com\n\n\nHmm, yeah. I'm not sure I understand the point of this test anyway:\n\n\nSELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-27 Mo 17:20, Juan José\n Santamaría Flecha wrote:\n\n\n\n\n\n\nEl lun, 27 feb 2023, 23:05,\n Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>\n escribió: \n\n\n\n\n\nThe command that's failing is \"SET lc_time TO\n 'de_DE';\", and that area of code is untouched by this\n patch. As mentioned in [1], the problem seems to come\n from a Windows bug that the CI images and my\n development machines have patched out.\n\n\n\n\n\n\nWhat I wanted to post as [1]:\n\n\nhttps://www.postgresql.org/message-id/CAC%2BAXB1agvrgpyHEfqbDr2MOpcON3d%2BWYte_SLzn1E4TamLs9g%40mail.gmail.com\n\n\n\n\n\nHmm, yeah. I'm not sure I understand the point of this test\n anyway:\n\n\nSELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE\n \"de_DE\");\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 28 Feb 2023 06:55:17 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 12:55 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 2023-02-27 Mo 17:20, Juan José Santamaría Flecha wrote:\n>\n>\n> Hmm, yeah. I'm not sure I understand the point of this test anyway:\n>\n>\n> SELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n>\n\nUhm, they probably don't make much sense except for \"tr_TR\", so I'm fine\nwith removing them. PFA a patch for so.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 28 Feb 2023 17:40:35 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On 2023-02-28 Tu 11:40, Juan José Santamaría Flecha wrote:\n>\n> On Tue, Feb 28, 2023 at 12:55 PM Andrew Dunstan <andrew@dunslane.net> \n> wrote:\n>\n> On 2023-02-27 Mo 17:20, Juan José Santamaría Flecha wrote:\n>\n>\n> Hmm, yeah. I'm not sure I understand the point of this test anyway:\n>\n>\n> SELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n>\n>\n> Uhm, they probably don't make much sense except for \"tr_TR\", so I'm \n> fine with removing them. PFA a patch for so.\n>\n>\n\nI think you missed my point, which was that the COLLATE clause above \nseemed particularly pointless. But I agree that all these are not much \nuse, so I'll remove them as you suggest.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-28 Tu 11:40, Juan José\n Santamaría Flecha wrote:\n\n\n\n\n\n\n\nOn Tue, Feb 28, 2023 at\n 12:55 PM Andrew Dunstan <andrew@dunslane.net>\n wrote:\n\n\n\nOn 2023-02-27 Mo 17:20, Juan José Santamaría Flecha\n wrote:\n\n\nHmm, yeah. I'm not sure I understand the point of this\n test anyway:\n\n\nSELECT to_char(date '2010-03-01', 'DD TMMON YYYY'\n COLLATE \"de_DE\");\n\n\n\n\nUhm, they probably don't make much sense except for\n \"tr_TR\", so I'm fine with removing them. PFA a patch for so.\n\n\n\n\n\n\n\n\nI think you missed my point, which was that the COLLATE clause\n above seemed particularly pointless. But I agree that all these\n are not much use, so I'll remove them as you suggest.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 28 Feb 2023 15:26:07 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 9:26 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> I think you missed my point, which was that the COLLATE clause above\n> seemed particularly pointless. But I agree that all these are not much use,\n> so I'll remove them as you suggest.\n>\n\nMaybe there has been some miscommunication, please let me try to explain\nmyself a little better. The whole test is an attempt to mimic\ncollate.linux.utf8, which has that same command, only for collate 'tr_TR',\nand so does collate.icu.utf8 but commented out.\n\nI've seen that you have committed this and now drongo is green, which is\ngreat. Thank you for taking care of it.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, Feb 28, 2023 at 9:26 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\nI think you missed my point, which was that the COLLATE clause\n above seemed particularly pointless. But I agree that all these\n are not much use, so I'll remove them as you suggest.Maybe there has been some miscommunication, please let me try to explain myself a little better. The whole test is an attempt to mimic collate.linux.utf8, which has that same command, only for collate 'tr_TR', and so does collate.icu.utf8 but commented out.I've seen that you have committed this and now drongo is green, which is great. Thank you for taking care of it.Regards,Juan José Santamaría Flecha",
"msg_date": "Wed, 1 Mar 2023 09:49:52 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIN32 pg_import_system_collations"
}
] |
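The failures in the thread above come down to Windows (on unpatched builds) accepting the BCP 47 tag "de-DE" but rejecting the POSIX-style alias "de_DE", with Andrew noting that nothing turns the POSIX name back into a BCP 47 tag at the point of use. A minimal sketch of that kind of mapping, assuming only the plain language_TERRITORY[.codeset][@modifier] form — this is an illustrative, hypothetical helper, not PostgreSQL's actual code, and real locale aliasing has many more cases (script names, legacy Windows names):

```python
def posix_to_bcp47(name: str) -> str:
    """Map a POSIX-style locale name such as 'de_DE.UTF-8' to a BCP 47
    language tag such as 'de-DE'. Hypothetical helper for illustration."""
    base = name.split(".", 1)[0]    # drop any codeset suffix, e.g. '.UTF-8'
    base = base.split("@", 1)[0]    # drop any modifier, e.g. '@euro'
    return base.replace("_", "-")   # language_TERRITORY -> language-TERRITORY
```

With a mapping like this applied where the locale is used, a `SET lc_time TO 'de_DE'` could resolve to the Windows locale name `de-DE` that the to_char tests need.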
[
{
"msg_contents": "Hi,\n\n\n\n I Recently noted that expressions involved in either side of HashCondition in HashJoin is not being pushed down to foreign scan. This leads to evaluation of the same expression multiple times - (for hashvalue computation from hashkeys, for HashCondition expr evaluation, for Projection). Not sure if intended behavior is to not push down expressions in HashCond. Kindly clarify this case. Have attached sample plan for reference.\n\n contrib_regression=# explain verbose select x_vec.a*2, y_vec.a*2 as a from x_vec, y_vec where x_vec.a*2 = y_vec.a*2 and x_vec.a*2 != 10;\n\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------\n\n Hash Join (cost=2.09..4.40 rows=4 width=12)\n\n Output: (x_vec.a * 2), (y_vec.a * 2)\n\n Hash Cond: ((x_vec.a * 2) = (y_vec.a * 2))\n\n -> Foreign Scan on public.x_vec (cost=0.00..2.18 rows=12 width=4)\n\n Output: x_vec.a, x_vec.b\n\n Filter: ((x_vec.a * 2) <> 10)\n\n CStore Dir: /home/test/postgres/datasets11/cstore_fdw/452395/453195\n\n CStore Table Size: 28 kB\n\n -> Hash (cost=2.04..2.04 rows=4 width=8)\n\n Output: y_vec.a\n\n -> Foreign Scan on public.y_vec (cost=0.00..2.04 rows=4 width=8)\n\n Output: y_vec.a\n\n CStore Dir: /home/test/postgres/datasets11/cstore_fdw/452395/453068\n\n CStore Table Size: 28 kB\n\n(14 rows)\n\n\n Here the same expression is being used in HashCond, Projection. Since its not being pushed down to Scan its being evaluated multiple times for HashValue, HashCond and Projection. \nHave used a simple expression for an example. If the expression is complex, query execution slows down due to this.\n\n\nThe same is also being done even if the expression is used in multiple levels. 
\ncontrib_regression=# explain verbose select * from (select x_vec.a*2 as xa2, y_vec.a*2 as ya2 from x_vec, y_vec where x_vec.a*2 = y_vec.a*2) q1 join a on q1.xa2 = a.a;\n\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------------\n\n Hash Join (cost=4.37..8.51 rows=2 width=28)\n\n Output: (x_vec.a * 2), (y_vec.a * 2), a.a, a.b\n\n Hash Cond: (a.a = (x_vec.a * 2))\n\n -> Foreign Scan on public.a (cost=0.00..4.07 rows=7 width=16)\n\n Output: a.a, a.b\n\n CStore Dir: /home/test/postgres/datasets11/cstore_fdw/452395/453149\n\n CStore Table Size: 28 kB\n\n -> Hash (cost=4.32..4.32 rows=4 width=12)\n\n Output: x_vec.a, y_vec.a\n\n -> Hash Join (cost=2.09..4.32 rows=4 width=12)\n\n Output: x_vec.a, y_vec.a\n\n Hash Cond: ((x_vec.a * 2) = (y_vec.a * 2))\n\n -> Foreign Scan on public.x_vec (cost=0.00..2.12 rows=12 width=4)\n\n Output: x_vec.a, x_vec.b\n\n CStore Dir: /home/test/postgres/datasets11/cstore_fdw/452395/453195\n\n CStore Table Size: 28 kB\n\n -> Hash (cost=2.04..2.04 rows=4 width=8)\n\n Output: y_vec.a\n\n -> Foreign Scan on public.y_vec (cost=0.00..2.04 rows=4 width=8)\n\n Output: y_vec.a\n\n CStore Dir: /home/test/postgres/datasets11/cstore_fdw/452395/453068\n\n CStore Table Size: 28 kB\n\n(22 rows)\n\n\n\n\n\nThanks and regards,\n\nVignesh K.",
"msg_date": "Tue, 25 Jan 2022 09:44:54 +0530",
"msg_from": "Vignesh K <vignesh.kr@zohocorp.com>",
"msg_from_op": true,
"msg_subject": "Reg. evaluation of expression in HashCond"
}
] |
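Vignesh's plans above show the same expression, `(x_vec.a * 2)`, evaluated separately for the hash value, the hash condition, and the projection because it is not pushed down into the scan. A toy sketch (not PostgreSQL source, names are illustrative) of the pushed-down alternative — evaluate the join-key expression once per row at "scan" time and let hashing, the join condition, and the projection all reuse the cached value:

```python
# Counts how many times the stand-in for x_vec.a * 2 is evaluated.
eval_count = 0

def key_expr(a):
    """Stands in for the join-key expression x_vec.a * 2."""
    global eval_count
    eval_count += 1
    return a * 2

def hash_join_pushed_down(outer, inner):
    """Evaluate the key expression once per row at 'scan' time; the hash
    value, the hash condition, and the projection all reuse that result."""
    build = {}
    for v in (key_expr(r) for r in inner):   # one evaluation per inner row
        build.setdefault(v, []).append(v)
    result = []
    for v in (key_expr(r) for r in outer):   # one evaluation per outer row
        for match in build.get(v, []):       # hash probe doubles as the cond
            result.append((v, match))        # projection reuses v unchanged
    return result

rows = [1, 2, 3]
result = hash_join_pushed_down(rows, rows)   # 3 inner + 3 outer evaluations
```

Without the pushdown, each probe-side row would pay for the expression once for the hash value, again for the condition recheck, and again for the projection — which is exactly the redundancy that matters when the expression is expensive.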
[
{
"msg_contents": "Hi,\n\nI was looking the shared memory stats patch again. The rebase of which\ncollided fairly heavily with the addition of pg_stat_subscription_workers.\n\nI'm concerned about the design of pg_stat_subscription_workers. The view was\nintroduced in\n\n\ncommit 8d74fc96db5fd547e077bf9bf4c3b67f821d71cd\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2021-11-30 08:54:30 +0530\n\n Add a view to show the stats of subscription workers.\n\n This commit adds a new system view pg_stat_subscription_workers, that\n shows information about any errors which occur during the application of\n logical replication changes as well as during performing initial table\n synchronization. The subscription statistics entries are removed when the\n corresponding subscription is removed.\n\n It also adds an SQL function pg_stat_reset_subscription_worker() to reset\n single subscription errors.\n\n The contents of this view can be used by an upcoming patch that skips the\n particular transaction that conflicts with the existing data on the\n subscriber.\n\n This view can be extended in the future to track other xact related\n statistics like the number of xacts committed/aborted for subscription\n workers.\n\n Author: Masahiko Sawada\n Reviewed-by: Greg Nancarrow, Hou Zhijie, Tang Haiying, Vignesh C, Dilip Kumar, Takamichi Osumi, Amit Kapila\n Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com\n\n\nI tried to skim-read the discussion leading to its introduction, but it's\nextraordinarily long: 474 messages in [1], 131 messages in [2], as well as a\nfew other associated threads.\n\n\n From the commit message alone I am concerned that this appears to be intended\nto be used to store important state in pgstats. 
For which pgstats is\nfundamentally unsuitable (pgstat can loose state during normal operation,\nalways looses state during crash restarts, the state can be reset).\n\nI don't really understand the name \"pg_stat_subscription_workers\" - what\nworkers are stats kept about exactly? The columns don't obviously refer to a\nsingle worker or such? From the contents it should be name\npg_stat_subscription_table_stats or such. But no, that'd not quite right,\nbecause apply errors are stored per-susbcription, while initial sync stuff is\nper-(subscription, table).\n\nThe pgstat entries are quite wide (292 bytes), because of the error message\nstored. That's nearly twice the size of PgStat_StatTabEntry. And as far as I\ncan tell, once there was an error, we'll just keep the stats entry around\nuntil the subscription is dropped. And that includes stats for long dropped\ntables, as far as I can see - except that they're hidden from view, due to a\njoin to pg_subscription_rel.\n\n\nTo me this looks like it's using pgstat as an extremely poor IPC mechanism.\n\n\nWhy isn't this just storing data in pg_subscription_rel?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK%3D30xJfUVihNZDA%40mail.gmail.com\n[2] https://postgr.es/m/OSBPR01MB48887CA8F40C8D984A6DC00CED199%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 22:31:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 25, 2022 at 3:31 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I was looking the shared memory stats patch again. The rebase of which\n> collided fairly heavily with the addition of pg_stat_subscription_workers.\n>\n> I'm concerned about the design of pg_stat_subscription_workers. The view was\n> introduced in\n>\n>\n> commit 8d74fc96db5fd547e077bf9bf4c3b67f821d71cd\n> Author: Amit Kapila <akapila@postgresql.org>\n> Date: 2021-11-30 08:54:30 +0530\n>\n> Add a view to show the stats of subscription workers.\n>\n> This commit adds a new system view pg_stat_subscription_workers, that\n> shows information about any errors which occur during the application of\n> logical replication changes as well as during performing initial table\n> synchronization. The subscription statistics entries are removed when the\n> corresponding subscription is removed.\n>\n> It also adds an SQL function pg_stat_reset_subscription_worker() to reset\n> single subscription errors.\n>\n> The contents of this view can be used by an upcoming patch that skips the\n> particular transaction that conflicts with the existing data on the\n> subscriber.\n>\n> This view can be extended in the future to track other xact related\n> statistics like the number of xacts committed/aborted for subscription\n> workers.\n>\n> Author: Masahiko Sawada\n> Reviewed-by: Greg Nancarrow, Hou Zhijie, Tang Haiying, Vignesh C, Dilip Kumar, Takamichi Osumi, Amit Kapila\n> Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com\n>\n>\n> I tried to skim-read the discussion leading to its introduction, but it's\n> extraordinarily long: 474 messages in [1], 131 messages in [2], as well as a\n> few other associated threads.\n>\n>\n> From the commit message alone I am concerned that this appears to be intended\n> to be used to store important state in pgstats. 
For which pgstats is\n> fundamentally unsuitable (pgstat can loose state during normal operation,\n> always looses state during crash restarts, the state can be reset).\n\nThe information on pg_stat_subscription_workers view, especially\nlast_error_xid, can be used to specify XID to \"ALTER SUBSCRIPTION ...\nSKIP (xid = XXX)\" command which is proposed on the same thread, but it\ndoesn't mean that the new SKIP command relies on this information. The\nfailure XID is written in the server logs as well and the user\nspecifies XID manually.\n\n>\n> I don't really understand the name \"pg_stat_subscription_workers\" - what\n> workers are stats kept about exactly? The columns don't obviously refer to a\n> single worker or such? From the contents it should be name\n> pg_stat_subscription_table_stats or such. But no, that'd not quite right,\n> because apply errors are stored per-susbcription, while initial sync stuff is\n> per-(subscription, table).\n\nThis stores stats for subscription workers namely apply and tablesync\nworker, so named as pg_stat_subscription_workers.\n\nAlso, there is another proposal to add transaction statistics for\nlogical replication subscribers[1], and it's reasonable to merge these\nstatistics and this error information rather than having separate\nviews[2]. There also was an idea to add the transaction statistics to\npg_stat_subscription view, but it doesn't seem a good idea because the\npg_stat_subscription shows dynamic statistics whereas the transaction\nstatistics are accumulative statistics[3].\n\n>\n> The pgstat entries are quite wide (292 bytes), because of the error message\n> stored. That's nearly twice the size of PgStat_StatTabEntry. 
And as far as I\n> can tell, once there was an error, we'll just keep the stats entry around\n> until the subscription is dropped.\n\nWe can drop the particular statistics by\npg_stat_reset_subscription_worker() function.\n\n> And that includes stats for long dropped\n> tables, as far as I can see - except that they're hidden from view, due to a\n> join to pg_subscription_rel.\n\nWe are planning to drop this after successfully apply[4].\n\n> To me this looks like it's using pgstat as an extremely poor IPC mechanism.\n>\n>\n> Why isn't this just storing data in pg_subscription_rel?\n\nThese need to be updated on error which means for a failed xact and we\ndon't want to update the system catalog in that state. There will be\nsome challenges in a case where updating pg_subscription_rel also\nfailed too (what to report to the user, etc.). And moreover, we don't\nwant to consume space for temporary information in the system catalog.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/OSBPR01MB48887CA8F40C8D984A6DC00CED199%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/message-id/CAD21AoDF7LmSALzMfmPshRw_xFcRz3WvB-me8T2gO6Ht%3D3zL2w%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAA4eK1JqwpsvjhLxV8CMYQ3NrhimZ8AFhWHh0Qn1FrL%3DLXfY6Q%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/CAA4eK1%2B9yXkWkJSNtWYV2rG7QNAnoAt%2BeNH0PexoSP9ZQmXKPg%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 25 Jan 2022 20:27:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nI didn't quite get to responding in depth, but I wanted to at least respond to\none point today.\n\nOn 2022-01-25 20:27:07 +0900, Masahiko Sawada wrote:\n> > The pgstat entries are quite wide (292 bytes), because of the error message\n> > stored. That's nearly twice the size of PgStat_StatTabEntry. And as far as I\n> > can tell, once there was an error, we'll just keep the stats entry around\n> > until the subscription is dropped.\n> \n> We can drop the particular statistics by\n> pg_stat_reset_subscription_worker() function.\n\nOnly if either the user wants to drop all stats, or somehow knows the oids of\nalready dropped tables...\n\n\n\n> > Why isn't this just storing data in pg_subscription_rel?\n> \n> These need to be updated on error which means for a failed xact and we\n> don't want to update the system catalog in that state.\n\nRightly so! In fact, I'm concerned with sending a pgstats message in that\nstate as well. But: You don't need to. Just abort the current transaction,\nstart a new one, and update the state.\n\n\n> There will be some challenges in a case where updating pg_subscription_rel\n> also failed too (what to report to the user, etc.). And moreover, we don't\n> want to consume space for temporary information in the system catalog.\n\nYou're consuming resources in a *WAY* worse way right now. The stats file gets\nconstantly written out, and quite often read back by backends. In contrast to\nparts of pg_subscription_rel or such that data can't be removed from\nshared_buffers under pressure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 Jan 2022 21:46:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Jan 27, 2022 at 11:16 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-01-25 20:27:07 +0900, Masahiko Sawada wrote:\n>\n> > There will be some challenges in a case where updating pg_subscription_rel\n> > also failed too (what to report to the user, etc.). And moreover, we don't\n> > want to consume space for temporary information in the system catalog.\n>\n> You're consuming resources in a *WAY* worse way right now. The stats file gets\n> constantly written out, and quite often read back by backends. In contrast to\n> parts of pg_subscription_rel or such that data can't be removed from\n> shared_buffers under pressure.\n>\n\nI don't think pg_subscription_rel is the right place to store error\ninfo as the error can happen say while processing some message type\nlike BEGIN where we can't map it to pg_subscription_rel entry. There\ncould be other cases as well where we won't be able to map it to\npg_subscription_rel like some error related to some other table while\nprocessing trigger function.\n\nIn general, there doesn't appear to be much advantage in storing all\nthe error info in system catalogs as we don't want it to be persistent\n(crash-safe). Also, this information is not about any system object\nthat future operations can use, so won't map from that angle as well.\n\nBut, I see the point related to the size overhead of each message (296\nbytes) and that is because of the error message present in each entry.\nI think it would be better to store error_code instead of the message.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 27 Jan 2022 17:37:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Jan 27, 2022 at 5:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Jan 27, 2022 at 11:16 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-01-25 20:27:07 +0900, Masahiko Sawada wrote:\n> >\n> > > There will be some challenges in a case where updating\n> pg_subscription_rel\n> > > also failed too (what to report to the user, etc.). And moreover, we\n> don't\n> > > want to consume space for temporary information in the system catalog.\n> >\n> > You're consuming resources in a *WAY* worse way right now. The stats\n> file gets\n> > constantly written out, and quite often read back by backends. In\n> contrast to\n> > parts of pg_subscription_rel or such that data can't be removed from\n> > shared_buffers under pressure.\n> >\n>\n> I don't think pg_subscription_rel is the right place to store error\n> info as the error can happen say while processing some message type\n> like BEGIN where we can't map it to pg_subscription_rel entry. There\n> could be other cases as well where we won't be able to map it to\n> pg_subscription_rel like some error related to some other table while\n> processing trigger function.\n>\n> In general, there doesn't appear to be much advantage in storing all\n> the error info in system catalogs as we don't want it to be persistent\n> (crash-safe). 
Also, this information is not about any system object\n> that future operations can use, so won't map from that angle as well.\n>\n\nRepeating myself here to try and keep complaints regarding\npg_stat_subscription_worker in one place.\n\nThis is my specific email with respect to the pg_stat_scription_workers\ndesign.\n\nhttps://www.postgresql.org/message-id/CAKFQuwZbFuPSV1WLiNFuODst1sUZon2Qwbj8d9tT%3D38hMhJfvw%40mail.gmail.com\n\nSpecifically,\n\npg_stat_subscription_workers is defined as showing:\n\"will contain one row per subscription\nworker on which errors have occurred, for workers applying logical\nreplication changes and workers handling the initial data copy of the\nsubscribed tables.\"\n\nThe fact that these errors remain (last_error_*) even after they are no\nlonger relevant is my main gripe regarding this feature. The information\nitself is generally useful though last_error_count is not. These fields\nshould auto-clear and be named current_error_* as they exist for purposes\nof describing the current state of any error-encountering logical\nreplication worker so that ALTER SUBSCRIPTION SKIP, or someone other manual\nintervention, can be done with that knowledge without having to scan the\nsubscriber's server logs.\n\nThis is my email trying to understand reality better in order to figure out\nwhat exactly is causing the limitations that are negatively impacting the\ndesign of this feature.\n\nhttps://www.postgresql.org/message-id/CAKFQuwYJ7dsW%2BStsw5%2BZVoY3nwQ9j6pPt-7oYjGddH-h7uVb%2Bg%40mail.gmail.com\n\nIn short, it was convenient to use the statistics collector here even if\ndoing so resulted in a non-user friendly (IMO) design. Given all of the\nlimitations to the statistics collection infrastructure, and the fact that\nthis data is not statistical in the usual usage of the term, I find that to\nbe less than satisfying. 
To the point that I'd be inclined to revert this\nfeature and hold up the ALTER SUBSCRIPTION SET patch until a more\nuser-friendly design can be done using proper IPC techniques. (I also noted\nin the first email that pg_stat_archiver, solely by observing the column\nnames it exposes, shares this same abuse of the statistics collector for\nnon-statistical data).\n\nIn my second email I did some tracing and ended up at the PG_CATCH() block\nin src/backend/replication/logical/worker.c:L3629. When mentioning trying\nto get rid of the PG_RE_THROW() there apparently doing so completely is\nunwarranted due to fatal/panic errors. I am curious that the addition of\nthe statistic reporting logic doesn't seem to care about the same. And in\nany case, while maybe PG_RE_THROW() cannot go away it could maybe be done\nconditionally, and the worker still allowed to exit should that be more\ndesirable than making the routine safe for looping after an error.\n\nAndres, I do not know how to be more precise than your comment \"But: You\ndon't need to. Just abort the current transaction, start a new one, and\nupdate the state.\". When I suggested that idea it didn't seem to resonate\nwith anyone on the other thread. Starting at the main PG_TRY() loop in\nworker.c noted above, could you maybe please explain in a bit more detail\nwhether, and how hard, it would be to go from \"just PG_RE_THROW();\" to\n\"abort and start a new transaction\"?\n\nDavid J.",
"msg_date": "Thu, 27 Jan 2022 13:18:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-27 13:18:51 -0700, David G. Johnston wrote:\n> Repeating myself here to try and keep complaints regarding\n> pg_stat_subscription_worker in one place.\n\nThanks!\n\n\n> This is my specific email with respect to the pg_stat_scription_workers\n> design.\n>\n> https://www.postgresql.org/message-id/CAKFQuwZbFuPSV1WLiNFuODst1sUZon2Qwbj8d9tT%3D38hMhJfvw%40mail.gmail.com\n>\n> Specifically,\n>\n> pg_stat_subscription_workers is defined as showing:\n> \"will contain one row per subscription\n> worker on which errors have occurred, for workers applying logical\n> replication changes and workers handling the initial data copy of the\n> subscribed tables.\"\n>\n> The fact that these errors remain (last_error_*) even after they are no\n> longer relevant is my main gripe regarding this feature. The information\n> itself is generally useful though last_error_count is not. These fields\n> should auto-clear and be named current_error_* as they exist for purposes\n> of describing the current state of any error-encountering logical\n> replication worker so that ALTER SUBSCRIPTION SKIP, or someone other manual\n> intervention, can be done with that knowledge without having to scan the\n> subscriber's server logs.\n\nIndeed.\n\nAnother related thing is that using a 32bit xid for allowing skipping is a bad\nidea anyway - we shouldn't adding new interfaces with xid wraparound dangers -\nit's getting more and more common to have multiple wraparounds a day. 
An\neasily better alternative would be the LSN at which a transaction starts.\n\n\n> This is my email trying to understand reality better in order to figure out\n> what exactly is causing the limitations that are negatively impacting the\n> design of this feature.\n>\n> https://www.postgresql.org/message-id/CAKFQuwYJ7dsW%2BStsw5%2BZVoY3nwQ9j6pPt-7oYjGddH-h7uVb%2Bg%40mail.gmail.com\n>\n> In short, it was convenient to use the statistics collector here even if\n> doing so resulted in a non-user friendly (IMO) design.\n\nAnd importantly, the whole justification for the scheme, namely the inability\nto change actual tables in that state, just doesn't hold up. It's a few lines\nto abort the failed transaction and log the error after.\n\n\nJust retrying over and over at full pace doesn't seem like a good thing. We\nshould start to back off retries - the retries themselves can very well\ncontribute to making it harder to fix the problem, by holding locks etc. For\nthat the launcher (or workers) should check whether there's no worker because\nit's errored out. With pgstats such a check would need this full sequence:\n\n1) worker sends failure stats message\n2) pgstats receive stats message\n3) launcher sends ping to pgstats to request file to be written out\n4) pgstats writes out the whole database's stats\n5) launcher reads the whole stats file\n\nThat's a big and expensive cannon for a check whether we should delay the\nlauncher of a worker.\n\n\n> Given all of the\n> limitations to the statistics collection infrastructure, and the fact that\n> this data is not statistical in the usual usage of the term, I find that to\n> be less than satisfying. To the point that I'd be inclined to revert this\n> feature and hold up the ALTER SUBSCRIPTION SET patch until a more\n> user-friendly design can be done using proper IPC techniques.\n\nSame.\n\n\n> In my second email I did some tracing and ended up at the PG_CATCH() block\n> in src/backend/replication/logical/worker.c:L3629. 
When mentioning trying\n> to get rid of the PG_RE_THROW() there apparently doing so completely is\n> unwarranted due to fatal/panic errors. I am curious that the addition of\n> the statistic reporting logic doesn't seem to care about the same.\n\nWe shouldn't even think about doing stuff like stats updates when we've\nPANICed. You could argue its safe to do that in the FATAL case - but where\nwould such a FATAL validly come from? It'd be something like a user calling\npg_terminate_backend(), which isn't transaction specific, so it'd not make\nsense to record details like xid in pg_stat_subscription_workers.\n\nBut the argument of needing to do something in PG_CATCH in the fatal/panic\ncase is bogus, because FATAL/PANIC doesn't reach PG_CATCH. errfinish() only\nthrows when elevel == ERROR, in the FATAL case we end with proc_exit(1), with\nPANIC we abort().\n\n\n> Andres, I do not know how to be more precise than your comment \"But: You\n> don't need to. Just abort the current transaction, start a new one, and\n> update the state.\". When I suggested that idea it didn't seem to resonate\n> with anyone on the other thread. Starting at the main PG_TRY() loop in\n> worker.c noted above, could you maybe please explain in a bit more detail\n> whether, and how hard, it would be to go from \"just PG_RE_THROW();\" to\n> \"abort and start a new transaction\"?\n\nIt's pretty easy from the POV of getting into a new transaction.\n\nPG_CATCH():\n\n /* get us out of the failed transaction */\n AbortOutOfAnyTransaction();\n\n StartTransactionCommand();\n /* do something to remember the error we just got */\n CommitTransactionCommand();\n\n\nIt may be a bit harder to afterwards to to not just error out the whole\nworker, because we'd need to know what to do instead.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 27 Jan 2022 13:15:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Jan 27, 2022 at 2:15 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Another related thing is that using a 32bit xid for allowing skipping is a\n> bad\n> idea anyway - we shouldn't adding new interfaces with xid wraparound\n> dangers -\n> it's getting more and more common to have multiple wraparounds a day. An\n> easily better alternative would be the LSN at which a transaction starts.\n>\n>\nInteresting idea. I do not think a well-designed skipping feature need\nworry about wrap-around though. The XID to be skipped was just seen be a\nworker and because it failed it will continue to be the same XID\nencountered by that worker until it is resolved. There is no effective\nprogression in time while the subscriber is stuck for wrap-around to\nhappen. Since we want to skip the transaction as a whole adding a layer of\nhidden indirection to the process seems undesirable. I'm not against the\nidea though - to the user it is basically \"copy this value from the error\nmessage in order to skip the transaction that caused the error\". Then the\nsystem verifies the value and then ensures it skips one, and only one,\ntransaction.\n\n\n> It's pretty easy from the POV of getting into a new transaction.\n>\n> PG_CATCH():\n>\n> /* get us out of the failed transaction */\n> AbortOutOfAnyTransaction();\n>\n> StartTransactionCommand();\n> /* do something to remember the error we just got */\n> CommitTransactionCommand();\n>\n\nThank you.\n\n> It may be a bit harder to afterwards to to not just error out the whole\n> worker, because we'd need to know what to do instead.\n>\n>\nI imagine the launcher and worker startup code can be made to deal with the\nrestart adequately. Just wait if the last thing seen was an error.\nRequire the user to manually resume the worker - unless we really think\na try-until-you-succeed with a backoff protocol is superior. 
Upon system\nrestart all error information is cleared and we start from scratch and let\nthe errors happen (or not depending) as they will.\n\nDavid J.",
"msg_date": "Thu, 27 Jan 2022 15:35:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Fri, Jan 28, 2022 at 1:49 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Thu, Jan 27, 2022 at 5:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, Jan 27, 2022 at 11:16 AM Andres Freund <andres@anarazel.de> wrote:\n>> >\n>> > On 2022-01-25 20:27:07 +0900, Masahiko Sawada wrote:\n>> >\n>> > > There will be some challenges in a case where updating pg_subscription_rel\n>> > > also failed too (what to report to the user, etc.). And moreover, we don't\n>> > > want to consume space for temporary information in the system catalog.\n>> >\n>> > You're consuming resources in a *WAY* worse way right now. The stats file gets\n>> > constantly written out, and quite often read back by backends. In contrast to\n>> > parts of pg_subscription_rel or such that data can't be removed from\n>> > shared_buffers under pressure.\n>> >\n>>\n>> I don't think pg_subscription_rel is the right place to store error\n>> info as the error can happen say while processing some message type\n>> like BEGIN where we can't map it to pg_subscription_rel entry. There\n>> could be other cases as well where we won't be able to map it to\n>> pg_subscription_rel like some error related to some other table while\n>> processing trigger function.\n>>\n>> In general, there doesn't appear to be much advantage in storing all\n>> the error info in system catalogs as we don't want it to be persistent\n>> (crash-safe). 
Also, this information is not about any system object\n>> that future operations can use, so won't map from that angle as well.\n>\n>\n> Repeating myself here to try and keep complaints regarding pg_stat_subscription_worker in one place.\n>\n> This is my specific email with respect to the pg_stat_scription_workers design.\n>\n> https://www.postgresql.org/message-id/CAKFQuwZbFuPSV1WLiNFuODst1sUZon2Qwbj8d9tT%3D38hMhJfvw%40mail.gmail.com\n>\n> Specifically,\n>\n> pg_stat_subscription_workers is defined as showing:\n> \"will contain one row per subscription\n> worker on which errors have occurred, for workers applying logical\n> replication changes and workers handling the initial data copy of the\n> subscribed tables.\"\n>\n> The fact that these errors remain (last_error_*) even after they are no longer relevant is my main gripe regarding this feature. The information itself is generally useful though last_error_count is not. These fields should auto-clear and be named current_error_* as they exist for purposes of describing the current state of any error-encountering logical replication worker so that ALTER SUBSCRIPTION SKIP, or someone other manual intervention, can be done with that knowledge without having to scan the subscriber's server logs.\n>\n\nWe can discuss names of columns but the main reason was that tomorrow\nsay we want to account for total errors not only the current error\nthen we have to introduce the field error_count or something like that\nwhich will then conflict with names like current_*. Similar for\ntransaction abort_count. In the initial versions of the patch, we were\nnot using last_* for column names but similar arguments led us to\nchange names to last_ terminology and the same was being used in\npg_stat_archiver. But, feel free to suggest better names. 
Yes, I agree\nwith an auto-clear point as well and there seems to be an agreement\nfor doing it after the next successful apply and or after we skipped\nthe failed xact.\n\n> This is my email trying to understand reality better in order to figure out what exactly is causing the limitations that are negatively impacting the design of this feature.\n>\n> https://www.postgresql.org/message-id/CAKFQuwYJ7dsW%2BStsw5%2BZVoY3nwQ9j6pPt-7oYjGddH-h7uVb%2Bg%40mail.gmail.com\n>\n> In short, it was convenient to use the statistics collector here even if doing so resulted in a non-user friendly (IMO) design. Given all of the limitations to the statistics collection infrastructure, and the fact that this data is not statistical in the usual usage of the term, I find that to be less than satisfying.\n>\n\nI think the failures/conflicts are also important information for\nusers to know, so having a view of those doesn't appear to be a bad\nidea. All this data is less suitable for system catalogs like\npg_subscription_rel or others for the reasons quoted in my previous\nemail [1]. You have already noted one view pg_stat_archiver and we do\nhave failure information like checksum_failures, deadlocks in some\nother views. Then, we have some information like conflicts available\nvia pg_stat_database_conflicts. I think the error/conflict info about\napply failures is on similar lines.\n\n> To the point that I'd be inclined to revert this feature and hold up the ALTER SUBSCRIPTION SET patch until a more user-friendly design can be done using proper IPC techniques.\n>\n\nIf we find a different/better way and that is a conclusion then I will\ndo it. But, in my humble opinion, let's first discuss and see why this\nis incorrect? IIUC, your main argument was to allow auto skip instead\nof allowing users to fetch XID (from a view or server logs) and then\nuse it in the command. We are already trying to brainstorm that and\nSawada-San has already proposed a couple of ideas [2]. 
Also, the other\nidea Andres has shared is to use LSN (of the corresponding failed\ntransaction) instead of XID which I find better.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BMDngbOQfMcAMsrf__s2a-MMMHaCR0zwde3GVeEi-bbQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAD21AoBdEcyXKMCMws7HjcYDbbPyq_KfUbCnTX84rDeP45Hbrw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 28 Jan 2022 11:29:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Fri, Jan 28, 2022 at 2:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 28, 2022 at 1:49 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Thu, Jan 27, 2022 at 5:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Thu, Jan 27, 2022 at 11:16 AM Andres Freund <andres@anarazel.de> wrote:\n> >> >\n> >> > On 2022-01-25 20:27:07 +0900, Masahiko Sawada wrote:\n> >> >\n> >> > > There will be some challenges in a case where updating pg_subscription_rel\n> >> > > also failed too (what to report to the user, etc.). And moreover, we don't\n> >> > > want to consume space for temporary information in the system catalog.\n> >> >\n> >> > You're consuming resources in a *WAY* worse way right now. The stats file gets\n> >> > constantly written out, and quite often read back by backends. In contrast to\n> >> > parts of pg_subscription_rel or such that data can't be removed from\n> >> > shared_buffers under pressure.\n> >> >\n> >>\n> >> I don't think pg_subscription_rel is the right place to store error\n> >> info as the error can happen say while processing some message type\n> >> like BEGIN where we can't map it to pg_subscription_rel entry. There\n> >> could be other cases as well where we won't be able to map it to\n> >> pg_subscription_rel like some error related to some other table while\n> >> processing trigger function.\n> >>\n> >> In general, there doesn't appear to be much advantage in storing all\n> >> the error info in system catalogs as we don't want it to be persistent\n> >> (crash-safe). 
Also, this information is not about any system object\n> >> that future operations can use, so won't map from that angle as well.\n> >\n> >\n> > Repeating myself here to try and keep complaints regarding pg_stat_subscription_worker in one place.\n> >\n> > This is my specific email with respect to the pg_stat_scription_workers design.\n> >\n> > https://www.postgresql.org/message-id/CAKFQuwZbFuPSV1WLiNFuODst1sUZon2Qwbj8d9tT%3D38hMhJfvw%40mail.gmail.com\n> >\n> > Specifically,\n> >\n> > pg_stat_subscription_workers is defined as showing:\n> > \"will contain one row per subscription\n> > worker on which errors have occurred, for workers applying logical\n> > replication changes and workers handling the initial data copy of the\n> > subscribed tables.\"\n> >\n> > The fact that these errors remain (last_error_*) even after they are no longer relevant is my main gripe regarding this feature. The information itself is generally useful though last_error_count is not. These fields should auto-clear and be named current_error_* as they exist for purposes of describing the current state of any error-encountering logical replication worker so that ALTER SUBSCRIPTION SKIP, or someone other manual intervention, can be done with that knowledge without having to scan the subscriber's server logs.\n> >\n>\n> We can discuss names of columns but the main reason was that tomorrow\n> say we want to account for total errors not only the current error\n> then we have to introduce the field error_count or something like that\n> which will then conflict with names like current_*. Similar for\n> transaction abort_count. In the initial versions of the patch, we were\n> not using last_* for column names but similar arguments led us to\n> change names to last_ terminology and the same was being used in\n> pg_stat_archiver. But, feel free to suggest better names. 
Yes, I agree\n> with an auto-clear point as well and there seems to be an agreement\n> for doing it after the next successful apply and or after we skipped\n> the failed xact.\n>\n> > This is my email trying to understand reality better in order to figure out what exactly is causing the limitations that are negatively impacting the design of this feature.\n> >\n> > https://www.postgresql.org/message-id/CAKFQuwYJ7dsW%2BStsw5%2BZVoY3nwQ9j6pPt-7oYjGddH-h7uVb%2Bg%40mail.gmail.com\n> >\n> > In short, it was convenient to use the statistics collector here even if doing so resulted in a non-user friendly (IMO) design. Given all of the limitations to the statistics collection infrastructure, and the fact that this data is not statistical in the usual usage of the term, I find that to be less than satisfying.\n> >\n>\n> I think the failures/conflicts are also important information for\n> users to know, so having a view of those doesn't appear to be a bad\n> idea. All this data is less suitable for system catalogs like\n> pg_subscription_rel or others for the reasons quoted in my previous\n> email [1].\n\nI see that it's better to use a better IPC for ALTER SUBSCRIPTION SKIP\nfeature to pass error-XID or error-LSN information to the worker\nwhereas I'm also not sure of the advantages in storing all error\ninformation in a system catalog. Since what we need to do for this\npurpose is only error-XID/LSN, we can store only error-XID/LSN in the\ncatalog? That is, the worker stores error-XID/LSN in the catalog on an\nerror, and ALTER SUBSCRIPTION SKIP command enables the worker to skip\nthe transaction in question. The worker clears the error-XID/LSN after\nsuccessfully applying or skipping the first non-empty transaction.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 1 Feb 2022 15:17:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 1, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jan 28, 2022 at 2:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 28, 2022 at 1:49 AM David G. Johnston\n> > <david.g.johnston@gmail.com> wrote:\n> > >\n> > >\n> > > In short, it was convenient to use the statistics collector here even if doing so resulted in a non-user friendly (IMO) design. Given all of the limitations to the statistics collection infrastructure, and the fact that this data is not statistical in the usual usage of the term, I find that to be less than satisfying.\n> > >\n> >\n> > I think the failures/conflicts are also important information for\n> > users to know, so having a view of those doesn't appear to be a bad\n> > idea. All this data is less suitable for system catalogs like\n> > pg_subscription_rel or others for the reasons quoted in my previous\n> > email [1].\n>\n> I see that it's better to use a better IPC for ALTER SUBSCRIPTION SKIP\n> feature to pass error-XID or error-LSN information to the worker\n> whereas I'm also not sure of the advantages in storing all error\n> information in a system catalog. Since what we need to do for this\n> purpose is only error-XID/LSN, we can store only error-XID/LSN in the\n> catalog? That is, the worker stores error-XID/LSN in the catalog on an\n> error, and ALTER SUBSCRIPTION SKIP command enables the worker to skip\n> the transaction in question. The worker clears the error-XID/LSN after\n> successfully applying or skipping the first non-empty transaction.\n>\n\nWhere do you propose to store this information? I think we can't use\npg_subscription_rel for reasons quoted by me in email [1]. We can\nstore it in pg_subscription but that won't cover tablesync cases. I\nthink it can work if we store at both places. I think that would be\nextendable if one wants to bring parallelism on the apply-side as we\ncan think of storing the values in the array. 
The other possibility\ncould be to invent a new catalog for this info but I guess it will\nthen have to have some duplicate info from pg_subscription/_rel.\n\nThe other point is after this, do we want an interface where the user\ncan also be allowed to specify error_lsn or error_xid? I think it\nwould be better to have such flexibility as that can be extended later\nto allow users to skip some specific operations like 'update',\n'insert', etc., or other similar things.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BMDngbOQfMcAMsrf__s2a-MMMHaCR0zwde3GVeEi-bbQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Feb 2022 08:37:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 1, 2022 at 8:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Feb 1, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n>\n> >\n> > I see that it's better to use a better IPC for ALTER SUBSCRIPTION SKIP\n> > feature to pass error-XID or error-LSN information to the worker\n> > whereas I'm also not sure of the advantages in storing all error\n> > information in a system catalog. Since what we need to do for this\n> > purpose is only error-XID/LSN, we can store only error-XID/LSN in the\n> > catalog? That is, the worker stores error-XID/LSN in the catalog on an\n> > error, and ALTER SUBSCRIPTION SKIP command enables the worker to skip\n> > the transaction in question. The worker clears the error-XID/LSN after\n> > successfully applying or skipping the first non-empty transaction.\n> >\n>\n> Where do you propose to store this information?\n\n\npg_subscription_worker\n\nThe error message and context is very important. Just make sure it is only\nnon-null when the worker state is \"syncing failed\" (or whatever term we\nuse).\n\nRecords are removed upon server restart (the launcher can handle this).\nConsider recording a last activity timestamp (some protection/visibility\nagainst bugs or, say, a worker ending without reporting that fact).\nRecords stay around even when the worker goes away (the user can filter the\nstate field to omit inactive rows). 
I'd consider just removing them when\ndone and/or having a reset function that the DBA could run (it should never\nbe wrong to clear the table).\n\nRe: XID and/or LSN, I don't know enough yet to really judge this...\n\nThe other possibility\n> could be to invent a new catalog for this info but I guess it will\n> then have to have some duplicate info from pg_subscription/_rel.\n\n\n> The other point is after this, do we want an interface where the user\n> can also be allowed to specify error_lsn or error_xid?\n\n\n...but whatever is decided, tell me, the user, what my options are, the\nlimitations, and what info to copy from this catalog into the command(s)\nthat I issue to the server, that will make the errors go away. This is\ngeneric, not specific to the skipping a commit command or the skip-to-lsn\nfunctions, but also includes considering performing DML on the relevant\ntable(s) to avoid the error.\n\nI don't think the fields would be duplicated. While some of the fields\nseem similar, aside from the key fields the data we would show would be\nstate info for a given worker. None of the v14 fields do this at the\nworker scope.\n\nThat all makes the new catalog a generally useful monitoring source and a\nstandalone patch. I'd personally start a new thread, with a functioning\npatch as the first message, and a recap of what and why this rework is\nbeing done. In order for Andres to make progress on the shared memory\nstatistics patch I would suggest reverting this and building the new patch\nas if this statistics collector approach never happened.\n\nI'd still like to get some clarity regarding the observation that our\nerror-die-restart process seems problematic. Since that process needs to\ntalk to the new catalog anyway I'd rather commit the changes to the process\n(if any, but I hope we can either all agree on the status quo or get\nsomething better in for v15), and the new catalog that provides insight\ninto that process, as part of this first commit. 
That includes a probable\nuser function to restart a halted worker instead of doing so continually\n(even with the suggested back-off protocol).\n\nThen the SKIP commit can go in, leveraging the state information exposed in\nthe catalog. That discussion and work should be restarted on a new thread\nwith an intro recap message. The existing patch should be adapted to\nleverage the new pg_subscription_worker catalog before starting the new\nthread.\n\nDavid J.\n",
"msg_date": "Tue, 1 Feb 2022 21:11:40 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 2, 2022 at 9:41 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Feb 1, 2022 at 8:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Feb 1, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> >\n>> > I see that it's better to use a better IPC for ALTER SUBSCRIPTION SKIP\n>> > feature to pass error-XID or error-LSN information to the worker\n>> > whereas I'm also not sure of the advantages in storing all error\n>> > information in a system catalog. Since what we need to do for this\n>> > purpose is only error-XID/LSN, we can store only error-XID/LSN in the\n>> > catalog? That is, the worker stores error-XID/LSN in the catalog on an\n>> > error, and ALTER SUBSCRIPTION SKIP command enables the worker to skip\n>> > the transaction in question. The worker clears the error-XID/LSN after\n>> > successfully applying or skipping the first non-empty transaction.\n>> >\n>>\n>> Where do you propose to store this information?\n>\n>\n> pg_subscription_worker\n>\n> The error message and context is very important. Just make sure it is only non-null when the worker state is \"syncing failed\" (or whatever term we use).\n>\n>\n\nSure, but is this the reason you want to store all the error info in\nthe system catalog? I agree that providing more error info could be\nuseful and also possibly the previously failed (apply) xacts info as\nwell but I am not able to see why you want to have that sort of info\nin the catalog. I could see storing info like err_lsn/err_xid that can\nallow to proceed to apply worker automatically or to slow down the\nlaunch of errored apply worker but not all sort of other error info\n(like err_cnt, err_code, err_message, err_time, etc.). I want to know\nwhy you are insisting to make all the error info persistent via the\nsystem catalog?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Feb 2022 12:24:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 1, 2022 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Feb 2, 2022 at 9:41 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Tue, Feb 1, 2022 at 8:07 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Tue, Feb 1, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >> >\n> >> > I see that it's better to use a better IPC for ALTER SUBSCRIPTION SKIP\n> >> > feature to pass error-XID or error-LSN information to the worker\n> >> > whereas I'm also not sure of the advantages in storing all error\n> >> > information in a system catalog. Since what we need to do for this\n> >> > purpose is only error-XID/LSN, we can store only error-XID/LSN in the\n> >> > catalog? That is, the worker stores error-XID/LSN in the catalog on an\n> >> > error, and ALTER SUBSCRIPTION SKIP command enables the worker to skip\n> >> > the transaction in question. The worker clears the error-XID/LSN after\n> >> > successfully applying or skipping the first non-empty transaction.\n> >> >\n> >>\n> >> Where do you propose to store this information?\n> >\n> >\n> > pg_subscription_worker\n> >\n> > The error message and context is very important. Just make sure it is\n> only non-null when the worker state is \"syncing failed\" (or whatever term\n> we use).\n> >\n> >\n>\n> Sure, but is this the reason you want to store all the error info in\n> the system catalog? I agree that providing more error info could be\n> useful and also possibly the previously failed (apply) xacts info as\n> well but I am not able to see why you want to have that sort of info\n> in the catalog. I could see storing info like err_lsn/err_xid that can\n> allow to proceed to apply worker automatically or to slow down the\n> launch of errored apply worker but not all sort of other error info\n> (like err_cnt, err_code, err_message, err_time, etc.). 
I want to know\n> why you are insisting to make all the error info persistent via the\n> system catalog?\n>\n\nI look at the catalog and am informed that the worker has stopped because\nof an error. I'd rather simply read the error message right then instead\nof having to go look at the log file. And if I am going to take an action\nin order to overcome the error I would have to know what that error is; so\nthe error message is not something I can ignore. The error is an attribute\nof system state, and the catalog stores the current state of the (workers)\nsystem.\n\nI already explained that the concept of err_cnt is not useful. The fact\nthat you include it here makes me think you are still thinking that this\nall somehow is meant to keep track of history. It is not. The workers are\nstate machines and \"error\" is one of the states - with relevant attributes\nto display to the user, and system, while in that state. The state machine\nreporting does not care about historical states nor does it report on\nthem. There is some uncertainty if we continue with the automatic\nre-launch; which, now that I write this, I can see where what you call\nerr_cnt is effectively a count of how many times the worker re-launched\nwithout the underlying problem being resolved and thus encountered the same\nerror. If we persist with the re-launch behavior then maybe err_cnt should\nbe left in place - with the description for it basically being the ah-ha!\ncomment I just made. In a world where we do not typically re-launch and\nsimply re-try without being informed there is a change - such a count\nremains of minimal value.\n\nI don't really understand the confusion here though - this error data\nalready exists in the pg_stat_subscription_workers stat collector view -\nthe fact that I want to keep it around (just changing the reset behavior) -\ndoesn't seem like it should be controversial. I, thinking as a user,\nreally don't care about all of these implementation details. 
Whether it is\na pg_stat_* view (collector or shmem IPC) or a pg_* catalog is immaterial\nto me. The behavior I observe is what matters. As a developer I don't\nwant to use the statistics collector because these are not statistics and\nthe collector is unreliable. I don't know enough about the relevant\ndifferences between shared memory IPC and catalog tables to decide between\nthem. But catalog tables seem like a lower bar to meet and seem like they\ncan implement the user-facing requirements as I envision them.\n\nDavid J.\n",
"msg_date": "Wed, 2 Feb 2022 00:36:08 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 2, 2022 at 1:06 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Feb 1, 2022 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Feb 2, 2022 at 9:41 AM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>> >\n>> > On Tue, Feb 1, 2022 at 8:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> On Tue, Feb 1, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >>\n>> >> >\n>> >> > I see that it's better to use a better IPC for ALTER SUBSCRIPTION SKIP\n>> >> > feature to pass error-XID or error-LSN information to the worker\n>> >> > whereas I'm also not sure of the advantages in storing all error\n>> >> > information in a system catalog. Since what we need to do for this\n>> >> > purpose is only error-XID/LSN, we can store only error-XID/LSN in the\n>> >> > catalog? That is, the worker stores error-XID/LSN in the catalog on an\n>> >> > error, and ALTER SUBSCRIPTION SKIP command enables the worker to skip\n>> >> > the transaction in question. The worker clears the error-XID/LSN after\n>> >> > successfully applying or skipping the first non-empty transaction.\n>> >> >\n>> >>\n>> >> Where do you propose to store this information?\n>> >\n>> >\n>> > pg_subscription_worker\n>> >\n>> > The error message and context is very important. Just make sure it is only non-null when the worker state is \"syncing failed\" (or whatever term we use).\n>> >\n>> >\n>>\n>> Sure, but is this the reason you want to store all the error info in\n>> the system catalog? I agree that providing more error info could be\n>> useful and also possibly the previously failed (apply) xacts info as\n>> well but I am not able to see why you want to have that sort of info\n>> in the catalog. 
I could see storing info like err_lsn/err_xid that can\n>> allow to proceed to apply worker automatically or to slow down the\n>> launch of errored apply worker but not all sort of other error info\n>> (like err_cnt, err_code, err_message, err_time, etc.). I want to know\n>> why you are insisting to make all the error info persistent via the\n>> system catalog?\n>\n>\n...\n...\n>\n> I already explained that the concept of err_cnt is not useful. The fact that you include it here makes me think you are still thinking that this all somehow is meant to keep track of history. It is not. The workers are state machines and \"error\" is one of the states - with relevant attributes to display to the user, and system, while in that state. The state machine reporting does not care about historical states nor does it report on them. There is some uncertainty if we continue with the automatic re-launch;\n>\n\nI think automatic retry will help to allow some transient errors say\nlike network glitches that can be resolved on retry and will keep the\nbehavior transparent. This is also consistent with what we do in\nstandby mode where if there is an error on primary due to which\nstandby is not able to fetch some data it will just retry. We can't\nfix any error that occurred on the server-side, so the way is to retry\nwhich is true for both standby and subscribers. Basically, I don't\nthink every kind of error demands user intervention. We can allow to\ncontrol it via some parameter say disable_on_error as is discussed in\nCF entry [1] but don't think that should be the default.\n\n[1] - https://commitfest.postgresql.org/36/3407/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Feb 2022 17:38:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 2, 2022 at 5:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Feb 2, 2022 at 1:06 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>\n> ...\n> >\n> > I already explained that the concept of err_cnt is not useful. The fact\n> that you include it here makes me think you are still thinking that this\n> all somehow is meant to keep track of history. It is not. The workers are\n> state machines and \"error\" is one of the states - with relevant attributes\n> to display to the user, and system, while in that state. The state machine\n> reporting does not care about historical states nor does it report on\n> them. There is some uncertainty if we continue with the automatic\n> re-launch;\n> >\n>\n> I think automatic retry will help to allow some transient errors say\n> like network glitches that can be resolved on retry and will keep the\n> behavior transparent. This is also consistent with what we do in\n> standby mode where if there is an error on primary due to which\n> standby is not able to fetch some data it will just retry. We can't\n> fix any error that occurred on the server-side, so the way is to retry\n> which is true for both standby and subscribers.\n>\n\nGood points. In short there are two subsets of problems to deal with\nhere. We should address them separately, though the pg_subscription_worker\ntable should provide relevant information for both cases. If we are in a\nretry situation relevant information, like next_scheduled_retry\n(estimated), should be provided (if there is some kind of delay involved).\nIn a situation like \"unique constraint violation\" the\n\"next_scheduled_retry\" would be null; or make the field a text field and\nprint \"Manual Intervention Required\". 
Likewise, the XID/LSN would be null\nin a retry situation since we haven't received a wholly intact transaction\nfrom the publisher (we may know of such an ID but if the final COMMIT\nmessage is never even seen before the feed dies we should not be exposing\nthat incomplete information to the user).\n\nA standby is not expected to encounter any user data constraint problems so\neven a system with manual intervention for such will work for standbys\nbecause they will never hit that code path. And you cannot simply skip\napplying the failed transaction and move onto the next one - that data also\nnever came over.\n\nDavid J.\n",
"msg_date": "Wed, 2 Feb 2022 08:15:05 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 2, 2022 at 4:36 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Feb 1, 2022 at 11:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Feb 2, 2022 at 9:41 AM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>> >\n>> > On Tue, Feb 1, 2022 at 8:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> On Tue, Feb 1, 2022 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >>\n>> >> >\n>> >> > I see that it's better to use a better IPC for ALTER SUBSCRIPTION SKIP\n>> >> > feature to pass error-XID or error-LSN information to the worker\n>> >> > whereas I'm also not sure of the advantages in storing all error\n>> >> > information in a system catalog. Since what we need to do for this\n>> >> > purpose is only error-XID/LSN, we can store only error-XID/LSN in the\n>> >> > catalog? That is, the worker stores error-XID/LSN in the catalog on an\n>> >> > error, and ALTER SUBSCRIPTION SKIP command enables the worker to skip\n>> >> > the transaction in question. The worker clears the error-XID/LSN after\n>> >> > successfully applying or skipping the first non-empty transaction.\n>> >> >\n>> >>\n>> >> Where do you propose to store this information?\n>> >\n>> >\n>> > pg_subscription_worker\n>> >\n>> > The error message and context is very important. Just make sure it is only non-null when the worker state is \"syncing failed\" (or whatever term we use).\n>> >\n>> >\n>>\n>> Sure, but is this the reason you want to store all the error info in\n>> the system catalog? I agree that providing more error info could be\n>> useful and also possibly the previously failed (apply) xacts info as\n>> well but I am not able to see why you want to have that sort of info\n>> in the catalog. 
I could see storing info like err_lsn/err_xid that can\n>> allow to proceed to apply worker automatically or to slow down the\n>> launch of errored apply worker but not all sort of other error info\n>> (like err_cnt, err_code, err_message, err_time, etc.). I want to know\n>> why you are insisting to make all the error info persistent via the\n>> system catalog?\n>\n>\n> I look at the catalog and am informed that the worker has stopped because of an error. I'd rather simply read the error message right then instead of having to go look at the log file. And if I am going to take an action in order to overcome the error I would have to know what that error is; so the error message is not something I can ignore. The error is an attribute of system state, and the catalog stores the current state of the (workers) system.\n>\n> I already explained that the concept of err_cnt is not useful. The fact that you include it here makes me think you are still thinking that this all somehow is meant to keep track of history. It is not. The workers are state machines and \"error\" is one of the states - with relevant attributes to display to the user, and system, while in that state. The state machine reporting does not care about historical states nor does it report on them. There is some uncertainty if we continue with the automatic re-launch; which, now that I write this, I can see where what you call err_cnt is effectively a count of how many times the worker re-launched without the underlying problem being resolved and thus encountered the same error. If we persist with the re-launch behavior then maybe err_cnt should be left in place - with the description for it basically being the ah-ha! comment I just made. 
In a world where we do not typically re-launch and simply re-try without being informed there is a change - such a count remains of minimal value.\n>\n> I don't really understand the confusion here though - this error data already exists in the pg_stat_subscription_workers stat collector view - the fact that I want to keep it around (just changing the reset behavior) - doesn't seem like it should be controversial. I, thinking as a user, really don't care about all of these implementation details. Whether it is a pg_stat_* view (collector or shmem IPC) or a pg_* catalog is immaterial to me. The behavior I observe is what matters. As a developer I don't want to use the statistics collector because these are not statistics and the collector is unreliable. I don't know enough about the relevant differences between shared memory IPC and catalog tables to decide between them. But catalog tables seem like a lower bar to meet and seem like they can implement the user-facing requirements as I envision them.\n\nI see that important information such as error-XID that can be used\nfor ALTER SUBSCRIPTION SKIP needs to be stored in a reliable way, and\nusing system catalogs is a reasonable way for this purpose. But it's\nstill unclear to me why all error information that is currently shown\nin pg_stat_subscription_workers view, including error-XID and the\nerror message, relation OID, action, etc., need to be stored in the\ncatalog. The information other than error-XID doesn't necessarily need\nto be reliable compared to error-XID. I think we can have\nerror-XID/LSN in the pg_subscription catalog and have other error\ninformation in pg_stat_subscription_workers view. After the user\nchecks the current status of logical replication by checking\nerror-XID/LSN, they can check pg_stat_subscription_workers for\ndetails.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 3 Feb 2022 13:33:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wednesday, February 2, 2022, Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> and have other error\n> information in pg_stat_subscription_workers view.\n>\n\nWhat benefit is there to keeping the existing collector-based\npg_stat_subscription_workers view? If we re-write it using shmem IPC then\nwe might as well put everything there and forego using a catalog. Then it\nbehaves in a similar manner to pg_stat_activity but for logical replication\nworkers.\n\nDavid J.\n",
"msg_date": "Wed, 2 Feb 2022 21:48:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 3, 2022 at 1:48 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wednesday, February 2, 2022, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> and have other error\n>> information in pg_stat_subscription_workers view.\n>\n>\n> What benefit is there to keeping the existing collector-based pg_stat_subscripiton_workers view? If we re-write it using shmem IPC then we might as well put everything there and forego using a catalog. Then it behaves in a similar manner to pg_stat_activity but for logical replication workers.\n\nYes, but if we use shmem IPC, we need to allocate shared memory for\nthem based on the number of subscriptions, not logical replication\nworkers (i.e., max_logical_replication_workers). So we cannot estimate\nmemory in the beginning. Also, IIUC the number of subscriptions that\nare concurrently working is limited by max_replication_slots (see\nReplicationStateCtl) but I think we need to remember the state of\ndisabled subscriptions too.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 3 Feb 2022 14:35:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "\nOn 02.02.22 07:54, Amit Kapila wrote:\n>>> Where do you propose to store this information?\n>>\n>>\n>> pg_subscription_worker\n>>\n>> The error message and context is very important. Just make sure it is only non-null when the worker state is \"syncing failed\" (or whatever term we use).\n\nWe could name the table something like pg_subscription_worker_error, so \nit's explicit that it is collecting error information only.\n\n> Sure, but is this the reason you want to store all the error info in\n> the system catalog? I agree that providing more error info could be\n> useful and also possibly the previously failed (apply) xacts info as\n> well but I am not able to see why you want to have that sort of info\n> in the catalog. I could see storing info like err_lsn/err_xid that can\n> allow to proceed to apply worker automatically or to slow down the\n> launch of errored apply worker but not all sort of other error info\n> (like err_cnt, err_code, err_message, err_time, etc.). I want to know\n> why you are insisting to make all the error info persistent via the\n> system catalog?\n\nLet's flip this around and ask, why not? Tables are the place to store \ndata, by default.\n\n\n",
"msg_date": "Thu, 3 Feb 2022 10:55:37 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 3, 2022 at 3:25 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 02.02.22 07:54, Amit Kapila wrote:\n>\n> > Sure, but is this the reason you want to store all the error info in\n> > the system catalog? I agree that providing more error info could be\n> > useful and also possibly the previously failed (apply) xacts info as\n> > well but I am not able to see why you want to have that sort of info\n> > in the catalog. I could see storing info like err_lsn/err_xid that can\n> > allow to proceed to apply worker automatically or to slow down the\n> > launch of errored apply worker but not all sort of other error info\n> > (like err_cnt, err_code, err_message, err_time, etc.). I want to know\n> > why you are insisting to make all the error info persistent via the\n> > system catalog?\n>\n> Let's flip this around and ask, why not?\n>\n\nBecause we don't necessarily need all this information after the crash\nand neither is this information about any system object which we\nrequire for performing operations on objects. OTOH, if we need some\nparticular error/apply state(s) that should be okay to keep in\npersistent form (in system catalog). In walreceiver (for standby), we\ndon't store the errors/conflicts in any table, they are either\nreported in logs or shared via stats. Similarly, the archiver process\ndoes share its failure information either via stats or logs. Similar is\nthe case for checkpointer which also just logs the error. Now,\nsimilarly in this case also we are sharing the error information via\nlogs and stats.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Feb 2022 09:23:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 3, 2022 at 2:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Feb 3, 2022 at 1:48 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Wednesday, February 2, 2022, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> and have other error\n> >> information in pg_stat_subscription_workers view.\n> >\n> >\n> > What benefit is there to keeping the existing collector-based pg_stat_subscripiton_workers view? If we re-write it using shmem IPC then we might as well put everything there and forego using a catalog. Then it behaves in a similar manner to pg_stat_activity but for logical replication workers.\n>\n> Yes, but if we use shmem IPC, we need to allocate shared memory for\n> them based on the number of subscriptions, not logical replication\n> workers (i.e., max_logical_replication_workers). So we cannot estimate\n> memory in the beginning. Also, IIUC the number of subscriptions that\n> are concurrently working is limited by max_replication_slots (see\n> ReplicationStateCtl) but I think we need to remember the state of\n> disabled subscriptions too.\n\nCorrection; the replication state remains even after the subscription\nis disabled, and it's removed only when the subscription is dropped.\nTherefore, the number of subscriptions that can be active in the\ndatabase cluster is effectively limited by max_replication_slots.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 15 Feb 2022 14:12:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-03 14:35:10 +0900, Masahiko Sawada wrote:\n> Yes, but if we use shmem IPC, we need to allocate shared memory for\n> them based on the number of subscriptions, not logical replication\n> workers (i.e., max_logical_replication_workers). So we cannot estimate\n> memory in the beginning. Also, IIUC the number of subscriptions that\n> are concurrently working is limited by max_replication_slots (see\n> ReplicationStateCtl) but I think we need to remember the state of\n> disabled subscriptions too.\n\nUse dshash (i.e. dsm) with a small initial allocation in non-dynamic shared\nmemory...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Feb 2022 08:26:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-04 09:23:06 +0530, Amit Kapila wrote:\n> On Thu, Feb 3, 2022 at 3:25 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 02.02.22 07:54, Amit Kapila wrote:\n> >\n> > > Sure, but is this the reason you want to store all the error info in\n> > > the system catalog? I agree that providing more error info could be\n> > > useful and also possibly the previously failed (apply) xacts info as\n> > > well but I am not able to see why you want to have that sort of info\n> > > in the catalog. I could see storing info like err_lsn/err_xid that can\n> > > allow to proceed to apply worker automatically or to slow down the\n> > > launch of errored apply worker but not all sort of other error info\n> > > (like err_cnt, err_code, err_message, err_time, etc.). I want to know\n> > > why you are insisting to make all the error info persistent via the\n> > > system catalog?\n> >\n> > Let's flip this around and ask, why not?\n> >\n> \n> Because we don't necessarily need all this information after the crash\n> and neither is this information about any system object which we\n> require for performing operations on objects.\n\nI find this not particularly convincing. IMO data that leads the user to\ncompromise \"replication integrity\" is pretty crucial.\n\nAnd skipped data needs to be logged somewhere persistent, so that there's a\nchance to analyze / recover.\n\nWe also should utilize more detailed knowledge about errors to influence at\nwhich interval replication is retried. Serialization error: retry soon. Other\nerrors: retry with increasing backoff.\n\n\n> In walreceiver (for standby), we don't store the errors/conflicts in any\n> table, they are either reported in logs or shared via stats.\n\nThat's imo quite different - they're fundamentally time-limited problems. And\nthey aren't leading the user / DBA to skip transactions etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Feb 2022 10:17:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-03 13:33:08 +0900, Masahiko Sawada wrote:\n> I see that important information such as error-XID that can be used\n> for ALTER SUBSCRIPTION SKIP needs to be stored in a reliable way, and\n> using system catalogs is a reasonable way for this purpose. But it's\n> still unclear to me why all error information that is currently shown\n> in pg_stat_subscription_workers view, including error-XID and the\n> error message, relation OID, action, etc., need to be stored in the\n> catalog. The information other than error-XID doesn't necessarily need\n> to be reliable compared to error-XID. I think we can have\n> error-XID/LSN in the pg_subscription catalog and have other error\n> information in pg_stat_subscription_workers view. After the user\n> checks the current status of logical replication by checking\n> error-XID/LSN, they can check pg_stat_subscription_workers for\n> details.\n\nThe stats system isn't geared towards storing error messages and\nsuch. Generally it is about storing counts of events etc, not about\ninformation about a single event. Obviously there are a few cases where that\nboundary isn't that clear...\n\nIOW, storing information like:\n- subscription oid\n- retryable error count\n- hard error count\n- #replicated inserts\n- #replicated updates\n- #replicated deletes\n\nis what pgstats is for. But not\n- subscription oid\n- error message\n- xid of error\n- ...\n\nIMO the addition of the pg_stat_subscription_workers view needs to be\nreverted.\n\nYes, there's some precedent in pg_stat_archiver. But that ship has sailed\n(it's released), and it's a much more limited issue. Just because we did a not\ngreat thing once isn't a reason to do a similar, but even less great, thing\nanother time.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Feb 2022 10:26:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Jan 27, 2022 at 12:46 AM Andres Freund <andres@anarazel.de> wrote:\n> Only if either the user wants to drop all stats, or somehow knows the oids of\n> already dropped tables...\n\nIf it's really true that we can end up storing data for dropped\nobjects, I think that's not acceptable and needs to be fixed.\n\nI don't currently understand the other issues on this thread well\nenough to have a clear opinion on them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Feb 2022 13:31:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 15, 2022 at 11:47 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-02-04 09:23:06 +0530, Amit Kapila wrote:\n> > On Thu, Feb 3, 2022 at 3:25 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > On 02.02.22 07:54, Amit Kapila wrote:\n> > >\n> > > > Sure, but is this the reason you want to store all the error info in\n> > > > the system catalog? I agree that providing more error info could be\n> > > > useful and also possibly the previously failed (apply) xacts info as\n> > > > well but I am not able to see why you want to have that sort of info\n> > > > in the catalog. I could see storing info like err_lsn/err_xid that can\n> > > > allow to proceed to apply worker automatically or to slow down the\n> > > > launch of errored apply worker but not all sort of other error info\n> > > > (like err_cnt, err_code, err_message, err_time, etc.). I want to know\n> > > > why you are insisting to make all the error info persistent via the\n> > > > system catalog?\n> > >\n> > > Let's flip this around and ask, why not?\n> > >\n> >\n> > Because we don't necessarily need all this information after the crash\n> > and neither is this information about any system object which we\n> > require for performing operations on objects.\n>\n> I find this not particularly convincing. IMO data that leads the user to\n> compromise \"replication integrity\" is pretty crucial.\n>\n> And skipped data needs to be logged somewhere persistent, so that there's a\n> chance to analyze / recover.\n>\n\nValid point. 
I think we can store this in a separate table\n(pg_subscription_conflict_history or something like that) but at some\npoint we need to clear this data like say when the user drops the\nsubscription or maybe a separate interface altogether or after a\nparticular time interval (user-configurable or otherwise), the\nsubscription worker or some other background worker clears this\ninformation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Feb 2022 14:15:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 15, 2022 at 11:56 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-02-03 13:33:08 +0900, Masahiko Sawada wrote:\n> > I see that important information such as error-XID that can be used\n> > for ALTER SUBSCRIPTION SKIP needs to be stored in a reliable way, and\n> > using system catalogs is a reasonable way for this purpose. But it's\n> > still unclear to me why all error information that is currently shown\n> > in pg_stat_subscription_workers view, including error-XID and the\n> > error message, relation OID, action, etc., need to be stored in the\n> > catalog. The information other than error-XID doesn't necessarily need\n> > to be reliable compared to error-XID. I think we can have\n> > error-XID/LSN in the pg_subscription catalog and have other error\n> > information in pg_stat_subscription_workers view. After the user\n> > checks the current status of logical replication by checking\n> > error-XID/LSN, they can check pg_stat_subscription_workers for\n> > details.\n>\n> The stats system isn't geared towards storing error messages and\n> such. Generally it is about storing counts of events etc, not about\n> information about a single event. Obviously there are a few cases where that\n> boundary isn't that clear...\n>\n\nTrue, in the beginning, we discussed this information to be stored in\nsystem catalog [1] (See .... Also, I am thinking that instead of a\nstat view, do we need to consider having a system table .. 
) but later\ndiscussion led us to store this as stats.\n\n> IOW, storing information like:\n> - subscription oid\n> - retryable error count\n> - hard error count\n> - #replicated inserts\n> - #replicated updates\n> - #replicated deletes\n>\n> is what pgstats is for.\n>\n\nSome of this and similar ((like error count, last_error_time)) is\npresent in stats and something on the lines of the other information\nis proposed in [2] (xacts successfully replicated (committed),\naborted, etc) to be stored along with it.\n\n> But not\n> - subscription oid\n> - error message\n> - xid of error\n> - ...\n>\n\nI think from the current set of things we are capturing, the other\nthing in this list will be error_command (insert/update/delete..) and\nor probably error_code. So, we can keep this information in a system\ntable.\n\nBased on this discussion, it appears to me what we want here is to\nstore the error info in some persistent way (system table) and the\ncounters (both error and success) of logical replication in stats. If\nwe can't achieve this work (separation) in the next few weeks (before\nthe feature freeze) then I'll revert the work related to stats.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LTE-AYtwatvLzAw%2BVy53C92QHoB7-rVbX-9nf8ws2Vag%40mail.gmail.com\n[2] - https://commitfest.postgresql.org/37/3504/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Feb 2022 14:19:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 16, 2022 at 12:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 27, 2022 at 12:46 AM Andres Freund <andres@anarazel.de> wrote:\n> > Only if either the user wants to drop all stats, or somehow knows the oids of\n> > already dropped tables...\n>\n> If it's really true that we can end up storing data for dropped\n> objects, I think that's not acceptable and needs to be fixed.\n>\n\nAgreed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Feb 2022 14:20:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 16, 2022 at 5:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 15, 2022 at 11:56 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-02-03 13:33:08 +0900, Masahiko Sawada wrote:\n> > > I see that important information such as error-XID that can be used\n> > > for ALTER SUBSCRIPTION SKIP needs to be stored in a reliable way, and\n> > > using system catalogs is a reasonable way for this purpose. But it's\n> > > still unclear to me why all error information that is currently shown\n> > > in pg_stat_subscription_workers view, including error-XID and the\n> > > error message, relation OID, action, etc., need to be stored in the\n> > > catalog. The information other than error-XID doesn't necessarily need\n> > > to be reliable compared to error-XID. I think we can have\n> > > error-XID/LSN in the pg_subscription catalog and have other error\n> > > information in pg_stat_subscription_workers view. After the user\n> > > checks the current status of logical replication by checking\n> > > error-XID/LSN, they can check pg_stat_subscription_workers for\n> > > details.\n> >\n> > The stats system isn't geared towards storing error messages and\n> > such. Generally it is about storing counts of events etc, not about\n> > information about a single event. Obviously there are a few cases where that\n> > boundary isn't that clear...\n> >\n>\n> True, in the beginning, we discussed this information to be stored in\n> system catalog [1] (See .... Also, I am thinking that instead of a\n> stat view, do we need to consider having a system table .. 
) but later\n> discussion led us to store this as stats.\n>\n> > IOW, storing information like:\n> > - subscription oid\n> > - retryable error count\n> > - hard error count\n> > - #replicated inserts\n> > - #replicated updates\n> > - #replicated deletes\n> >\n> > is what pgstats is for.\n> >\n>\n> Some of this and similar ((like error count, last_error_time)) is\n> present in stats and something on the lines of the other information\n> is proposed in [2] (xacts successfully replicated (committed),\n> aborted, etc) to be stored along with it.\n>\n> > But not\n> > - subscription oid\n> > - error message\n> > - xid of error\n> > - ...\n> >\n>\n> I think from the current set of things we are capturing, the other\n> thing in this list will be error_command (insert/update/delete..) and\n> or probably error_code. So, we can keep this information in a system\n> table.\n\nAgreed. Also, we could also have commit-LSN, or replace error-XID with error-LSN?\n\n>\n> Based on this discussion, it appears to me what we want here is to\n> store the error info in some persistent way (system table) and the\n> counters (both error and success) of logical replication in stats. If\n> we can't achieve this work (separation) in the next few weeks (before\n> the feature freeze) then I'll revert the work related to stats.\n\nThere was an idea to use shmem to store error info but it seems to be\nbetter to use a system catalog to persist them.\n\nI'll summarize changes we discussed and make a plan of changes and\nfeature designs toward the feature freeze (and v16). I think that once\nwe get a consensus on them we can start implementation and move it\nforward.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Feb 2022 20:36:18 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 16, 2022 at 8:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Feb 16, 2022 at 5:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Feb 15, 2022 at 11:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2022-02-03 13:33:08 +0900, Masahiko Sawada wrote:\n> > > > I see that important information such as error-XID that can be used\n> > > > for ALTER SUBSCRIPTION SKIP needs to be stored in a reliable way, and\n> > > > using system catalogs is a reasonable way for this purpose. But it's\n> > > > still unclear to me why all error information that is currently shown\n> > > > in pg_stat_subscription_workers view, including error-XID and the\n> > > > error message, relation OID, action, etc., need to be stored in the\n> > > > catalog. The information other than error-XID doesn't necessarily need\n> > > > to be reliable compared to error-XID. I think we can have\n> > > > error-XID/LSN in the pg_subscription catalog and have other error\n> > > > information in pg_stat_subscription_workers view. After the user\n> > > > checks the current status of logical replication by checking\n> > > > error-XID/LSN, they can check pg_stat_subscription_workers for\n> > > > details.\n> > >\n> > > The stats system isn't geared towards storing error messages and\n> > > such. Generally it is about storing counts of events etc, not about\n> > > information about a single event. Obviously there are a few cases where that\n> > > boundary isn't that clear...\n> > >\n> >\n> > True, in the beginning, we discussed this information to be stored in\n> > system catalog [1] (See .... Also, I am thinking that instead of a\n> > stat view, do we need to consider having a system table .. 
) but later\n> > discussion led us to store this as stats.\n> >\n> > > IOW, storing information like:\n> > > - subscription oid\n> > > - retryable error count\n> > > - hard error count\n> > > - #replicated inserts\n> > > - #replicated updates\n> > > - #replicated deletes\n> > >\n> > > is what pgstats is for.\n> > >\n> >\n> > Some of this and similar ((like error count, last_error_time)) is\n> > present in stats and something on the lines of the other information\n> > is proposed in [2] (xacts successfully replicated (committed),\n> > aborted, etc) to be stored along with it.\n> >\n> > > But not\n> > > - subscription oid\n> > > - error message\n> > > - xid of error\n> > > - ...\n> > >\n> >\n> > I think from the current set of things we are capturing, the other\n> > thing in this list will be error_command (insert/update/delete..) and\n> > or probably error_code. So, we can keep this information in a system\n> > table.\n>\n> Agreed. Also, we can have also commit-LSN or replace error-XID with error-LSN?\n>\n> >\n> > Based on this discussion, it appears to me what we want here is to\n> > store the error info in some persistent way (system table) and the\n> > counters (both error and success) of logical replication in stats. If\n> > we can't achieve this work (separation) in the next few weeks (before\n> > the feature freeze) then I'll revert the work related to stats.\n>\n> There was an idea to use shmem to store error info but it seems to be\n> better to use a system catalog to persist them.\n>\n> I'll summarize changes we discussed and make a plan of changes and\n> feature designs toward the feature freeze (and v16). I think that once\n> we get a consensus on them we can start implementation and move it\n> forward.\n>\n\nHere is the summary of the discussion, changes, and plan.\n\n1. Move some error information such as the error message to a new\nsystem catalog, pg_subscription_error. 
The pg_subscription_error table\nwould have the following columns:\n\n* sesubid : subscription Oid.\n* serelid : relation Oid (NULL for apply worker).\n* seerrlsn : commit-LSN of the error transaction.\n* seerrcmd : command (INSERT, UPDATE, etc.) of the error transaction.\n* seerrmsg : error message\n\nThe tuple is inserted or updated when an apply worker or a tablesync\nworker raises an error. If the same error occurs in a row, the update\nis skipped. The tuple is removed in the following cases:\n\n* the subscription is dropped.\n* the table is dropped.\n* the table is removed from the subscription.\n* the worker successfully committed a non-empty transaction.\n\nWith this change, pg_stat_subscription_workers will be like:\n\n* subid\n* subname\n* subrelid\n* error_count\n* last_error_timestamp\n\nThis view will be extended by adding transaction statistics proposed\non another thread[1].\n\n2. Fix a bug in pg_stat_subscription_workers. As pointed out by\nAndres, there is a bug in pg_stat_subscription_workers; it doesn't\ndrop entries for already-dropped tables. We need to fix it.\n\n3. Show commit-LSN of the error transaction in errcontext. Currently,\nwe show XID and commit timestamp in the errcontext. But given that we\nuse LSN in pg_subscription_error, it's better to show commit-LSN as\nwell (or instead of error-XID).\n\n4. Skipping the particular conflicted transaction. There are two proposals:\n\n4-1. Use the existing replication_origin_advance() SQL function. We\ndon't need to add any additional syntax, instead use\nreplication_origin_advance() with the error-LSN reported in either\npg_subscription_error or server logs to skip the particular\ntransaction.\n\n4-2. Introduce a new syntax like ALTER SUBSCRIPTION ... SKIP. This\nproposal further has two options: (1) the user specifies error-LSN\nmanually and (2) the user just enables skipping behavior and error-LSN\nis automatically fetched from pg_subscription_error. 
Either way, the\ncommand raises an error when there is no error entry in\npg_subscription_error.\n\nWe can discuss the details of this item on the original thread.\n\n5. Record skipped data in the system catalog, say\npg_subscription_conflict_history, so that there is a chance to analyze\nand recover. The pg_subscription_conflict_history I'm thinking of would\nrecord the following for all skipped changes:\n\n* command (INSERT, UPDATE etc.)\n* commit-LSN\n* before row (in json format?)\n* after row (in json format?)\n\nGiven that we end up writing a huge amount of history if the\ntransaction is very large, and I think there are people who want to\ncheck what changes will be skipped together before enabling skipping\nbehavior, I think it could be optional. Therefore I think we can\nprovide an option for ALTER SUBSCRIPTION ... SKIP to control whether\nthe skipped data is recorded or not, and whether it goes to\npg_subscription_conflict_history or the server logs. We can discuss the\ndetails in a new thread.\n\n4 and 5 might be better introduced together, but since the user\nis able to check what changes will be skipped on the publisher side, I\nthink we can do 5 for v16. Feedback and comments are very welcome.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/flat/OSBPR01MB48887CA8F40C8D984A6DC00CED199@OSBPR01MB4888.jpnprd01.prod.outlook.com\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 18 Feb 2022 17:26:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Fri, Feb 18, 2022 at 1:26 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n>\n> Here is the summary of the discussion, changes, and plan.\n>\n> 1. Move some error information such as the error message to a new\n> system catalog, pg_subscription_error. The pg_subscription_error table\n> would have the following columns:\n>\n> * sesubid : subscription Oid.\n> * serelid : relation Oid (NULL for apply worker).\n> * seerrlsn : commit-LSN or the error transaction.\n> * seerrcmd : command (INSERT, UPDATE, etc.) of the error transaction.\n> * seerrmsg : error message\n>\n\nNot a fan of the \"se\" prefix but overall yes. We should include a timestamp.\n\n\n> The tuple is inserted or updated when an apply worker or a tablesync\n> worker raises an error. If the same error occurs in a row, the update\n> is skipped.\n\n\nI disagree with this - I would treat every new instance of an error to be\nimportant and insert on conflict (sesubid, serelid) the new entry, updating\nlsn/cmd/msg with the new values.\n\nThe tuple is removed in the following cases:\n>\n> * the subscription is dropped.\n> * the table is dropped.\n\n* the table is removed from the subscription.\n> * the worker successfully committed a non-empty transaction.\n>\n\nCorrect. This handles the \"end of sync worker\" just fine since its final\naction should be a successful commit of a non-empty transaction.\n\n>\n> With this change, pg_stat_subscription_workers will be like:\n>\n> * subid\n> * subname\n> * subrelid\n> * error_count\n> * last_error_timestamp\n>\n> This view will be extended by adding transaction statistics proposed\n> on another thread[1].\n>\n\nI haven't reviewed that thread but in-so-far as this one goes I would just\ndrop this altogether. The error count, if desired, can be added to\npg_subscription_error, and the timestamp should be added there as noted\nabove.\n\nIf this is useful for the feature on the other thread it can be\nreconstituted from there.\n\n\n> 2. 
Fix a bug in pg_stat_subscription_workers. As pointed out by\n> Andres, there is a bug in pg_stat_subscription_workers; it doesn't\n> drop entries for already-dropped tables. We need to fix it.\n>\n\nGiven the above, this becomes an N/A.\n\n\n> 3. Show commit-LSN of the error transaction in errcontext. Currently,\n> we show XID and commit timestamp in the errcontext. But given that we\n> use LSN in pg_subscriptoin_error, it's better to show commit-LSN as\n> well (or instead of error-XID).\n>\n\nAgreed, I think: what \"errcontext\" is this referring to?\n\n>\n> 5. Record skipped data to the system catalog, say\n> pg_subscription_conflict_history so that there is a chance to analyze\n> and recover.\n\n\nWe can discuss the\n> details in a new thread.\n>\nAgreed - the \"before skipping\" consideration seems considerably more\nhelpful; but wouldn't need to be persistent, it could just be a view. A\npermanent record probably would best be recorded in the logs - though if we\nget the pre-skip functionality the user can view directly and save the\nresults wherever they wish or we can provide a function to spool the\ninformation to the log. I don't see persistent in-database storage being\nthat desirable here.\n\nIf we only do something after the transaction has been skipped it may be\nuseful to add an option to the skipping command to auto-disable the\nsubscription after the transaction has been skipped and before any\nsubsequent transactions are applied. 
This will give the user a chance to\nprocess the post-skipped information before restoring sync and having the\nsystem begin changing underneath them again.\n\n\n>\n> 4 and 5 might be better introduced together but I think since the user\n> is able to check what changes will be skipped on the publisher side we\n> can do 5 for v16.\n\n\nHow would one go about doing that (checking what changes will be skipped on\nthe publisher side)?\n\nDavid J.",
"msg_date": "Fri, 18 Feb 2022 12:46:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
    "msg_contents": "Hi,\n\nOn 2022-02-18 17:26:04 +0900, Masahiko Sawada wrote:\n> With this change, pg_stat_subscription_workers will be like:\n> \n> * subid\n> * subname\n> * subrelid\n> * error_count\n> * last_error_timestamp\n\n> This view will be extended by adding transaction statistics proposed\n> on another thread[1].\n\nI do not agree with these bits. What's the point of these per-relation stats\nat this point. You're just duplicating the normal relation pg_stats here.\n\nI really think we just should drop pg_stat_subscription_workers. Even if we\ndon't, we definitely should rename it, because it still isn't meaningfully\nabout workers.\n\n\nThis stuff is getting painful for me. I'm trying to clean up some stuff for\nshared memory stats, and this stuff doesn't fit in with the rest. I'll have to\nrework some core stuff in the shared memory stats patch to make it work with\nthis. Just to then quite possibly deal with reverting that part.\n\n\nGiven the degree we're still designing stuff at this point, I think the\nappropriate thing is to revert the patch, and then try from there.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 18 Feb 2022 12:32:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
    "msg_contents": "On Sat, Feb 19, 2022 at 1:17 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Feb 18, 2022 at 1:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>>\n>> Here is the summary of the discussion, changes, and plan.\n>>\n>> 1. Move some error information such as the error message to a new\n>> system catalog, pg_subscription_error. The pg_subscription_error table\n>> would have the following columns:\n>>\n>> * sesubid : subscription Oid.\n>> * serelid : relation Oid (NULL for apply worker).\n>> * seerrlsn : commit-LSN or the error transaction.\n>> * seerrcmd : command (INSERT, UPDATE, etc.) of the error transaction.\n>> * seerrmsg : error message\n>\n>\n> Not a fan of the \"se\" prefix but overall yes. We should include a timestamp.\n>\n\nHow about naming it pg_subscription_worker_error as Peter E. has\nsuggested in one of his emails? I find pg_subscription_error slightly\nodd as one could imagine it covering even errors related to subscription\ncommands like Alter Subscription.\n\n>>\n>> The tuple is inserted or updated when an apply worker or a tablesync\n>> worker raises an error. If the same error occurs in a row, the update\n>> is skipped.\n>\n\nAre you going to query the table to check if it is the same error?\n\n>\n> I disagree with this - I would treat every new instance of an error to be important and insert on conflict (sesubid, serelid) the new entry, updating lsn/cmd/msg with the new values.\n>\n\nI don't think that will be a problem but what advantage are you\nenvisioning with updating the same info except for timestamp?\n\n>> The tuple is removed in the following cases:\n>>\n>> * the subscription is dropped.\n>> * the table is dropped.\n>>\n>> * the table is removed from the subscription.\n>> * the worker successfully committed a non-empty transaction.\n>\n>\n> Correct. 
This handles the \"end of sync worker\" just fine since its final action should be a successful commit of a non-empty transaction.\n>>\n\nI think for tablesync workers, we may need slightly different handling\nas there could probably be no transactions to apply apart from the\ninitial copy. Now, I think for tablesync worker, we can't postpone it\ntill after we update the rel state as SUBREL_STATE_SYNCDONE because if\nwe do it after that and there is some error updating/deleting the\ntuple, the tablesync worker won't be launched again and that entry\nwill remain in the system for a longer duration.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 19 Feb 2022 16:19:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Saturday, February 19, 2022, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sat, Feb 19, 2022 at 1:17 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Fri, Feb 18, 2022 at 1:26 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >>\n> >> Here is the summary of the discussion, changes, and plan.\n> >>\n> >> 1. Move some error information such as the error message to a new\n> >> system catalog, pg_subscription_error. The pg_subscription_error table\n> >> would have the following columns:\n> >>\n> >> * sesubid : subscription Oid.\n> >> * serelid : relation Oid (NULL for apply worker).\n> >> * seerrlsn : commit-LSN or the error transaction.\n> >> * seerrcmd : command (INSERT, UPDATE, etc.) of the error transaction.\n> >> * seerrmsg : error message\n> >\n> >\n> > Not a fan of the \"se\" prefix but overall yes. We should include a\n> timestamp.\n> >\n>\n> How about naming it pg_subscription_worker_error as Peter E. has\n> suggested in one of his emails? I find pg_subscription_error slightly\n> odd as one could imagine that even the errors related to subscription\n> commands like Alter Subscription.\n>\n>\nAdding worker makes sense.\n\n\n> >>\n> >> The tuple is inserted or updated when an apply worker or a tablesync\n> >> worker raises an error. 
If the same error occurs in a row, the update\n> >> is skipped.\n> >\n>\n> Are you going to query table to check if it is same error?\n\n\nI don’t get the question, the quoted text is yours, which I disagree with.\nBut the error message is being captured in any case.\n\n>\n> >\n> > I disagree with this - I would treat every new instance of an error to\n> be important and insert on conflict (sesubid, serelid) the new entry,\n> updating lsn/cmd/msg with the new values.\n> >\n>\n> I don't think that will be a problem but what advantage are you\n> envisioning with updating the same info except for timestamp?\n\n\nOmission of timestamp (or any other non-key field we have) from the update\nis an oversight.\n\n\n> >> The tuple is removed in the following cases:\n> >>\n> >> * the subscription is dropped.\n> >> * the table is dropped.\n> >>\n> >> * the table is removed from the subscription.\n> >> * the worker successfully committed a non-empty transaction.\n> >\n> >\n> > Correct. This handles the \"end of sync worker\" just fine since its\n> final action should be a successful commit of a non-empty transaction.\n> >>\n>\n> I think for tablesync workers, we may need slightly different handling\n> as there could probably be no transactions to apply apart from the\n> initial copy. Now, I think for tablesync worker, we can't postpone it\n> till after we update the rel state as SUBREL_STATE_SYNCDONE because if\n> we do it after that and there is some error updating/deleting the\n> tuple, the tablesync worker won't be launched again and that entry\n> will remain in the system for a longer duration.\n>\n\nNot sure…but I don’t see how you can not have a non-empty transaction while\nstill having an error.\n\nAre subscriptions “dropped” upon a reboot? If not, that needs its own case\nfor row removal.\n\nDavid J.",
"msg_date": "Sat, 19 Feb 2022 06:21:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 19, 2022 at 6:51 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Saturday, February 19, 2022, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Sat, Feb 19, 2022 at 1:17 AM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>> >\n>> > On Fri, Feb 18, 2022 at 1:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >>\n>> >>\n>> >> Here is the summary of the discussion, changes, and plan.\n>> >>\n>> >> 1. Move some error information such as the error message to a new\n>> >> system catalog, pg_subscription_error. The pg_subscription_error table\n>> >> would have the following columns:\n>> >>\n>> >> * sesubid : subscription Oid.\n>> >> * serelid : relation Oid (NULL for apply worker).\n>> >> * seerrlsn : commit-LSN or the error transaction.\n>> >> * seerrcmd : command (INSERT, UPDATE, etc.) of the error transaction.\n>> >> * seerrmsg : error message\n>> >\n>> >\n>> > Not a fan of the \"se\" prefix but overall yes. We should include a timestamp.\n>> >\n>>\n>> How about naming it pg_subscription_worker_error as Peter E. has\n>> suggested in one of his emails? I find pg_subscription_error slightly\n>> odd as one could imagine that even the errors related to subscription\n>> commands like Alter Subscription.\n>>\n>\n> Adding worker makes sense.\n>\n>>\n>> >>\n>> >> The tuple is inserted or updated when an apply worker or a tablesync\n>> >> worker raises an error. 
If the same error occurs in a row, the update\n>> >> is skipped.\n>> >\n>>\n>> Are you going to query table to check if it is same error?\n>\n>\n> I don’t get the question, the quoted text is your which I disagree with.\n>\n\nIt was Sawada-San's email and this question was for him.\n\n> But the error message is being captured in any case.\n>>\n>>\n>> >\n>> > I disagree with this - I would treat every new instance of an error to be important and insert on conflict (sesubid, serelid) the new entry, updating lsn/cmd/msg with the new values.\n>> >\n>>\n>> I don't think that will be a problem but what advantage are you\n>> envisioning with updating the same info except for timestamp?\n>\n>\n> Omission of timestamp (or any other non-key field we have) from the update is an oversight.\n>\n\nYeah, if we decide to keep timestamp which the original proposal\ndoesn't have then it makes sense to update again.\n\n>>\n>> >> The tuple is removed in the following cases:\n>> >>\n>> >> * the subscription is dropped.\n>> >> * the table is dropped.\n>> >>\n>> >> * the table is removed from the subscription.\n>> >> * the worker successfully committed a non-empty transaction.\n>> >\n>> >\n>> > Correct. This handles the \"end of sync worker\" just fine since its final action should be a successful commit of a non-empty transaction.\n>> >>\n>>\n>> I think for tablesync workers, we may need slightly different handling\n>> as there could probably be no transactions to apply apart from the\n>> initial copy. Now, I think for tablesync worker, we can't postpone it\n>> till after we update the rel state as SUBREL_STATE_SYNCDONE because if\n>> we do it after that and there is some error updating/deleting the\n>> tuple, the tablesync worker won't be launched again and that entry\n>> will remain in the system for a longer duration.\n>\n>\n> Not sure…but I don’t see how you can not have a non-empty transaction while still having an error.\n>\n> Are subscriptions “dropped” upon a reboot? 
If not, that needs its own case for row removal.\n>\n\nSubscriptions are not dropped automatically on reboot but I don't\nunderstand what you mean by \"that needs its own case for row removal\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 19 Feb 2022 19:06:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 19, 2022 at 5:32 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-02-18 17:26:04 +0900, Masahiko Sawada wrote:\n> > With this change, pg_stat_subscription_workers will be like:\n> >\n> > * subid\n> > * subname\n> > * subrelid\n> > * error_count\n> > * last_error_timestamp\n>\n> > This view will be extended by adding transaction statistics proposed\n> > on another thread[1].\n>\n> I do not agree with these bits. What's the point of these per-relation stats\n> at this poitns. You're just duplicating the normal relation pg_stats here.\n>\n> I really think we just should drop pg_stat_subscription_workers. Even if we\n> don't, we definitely should rename it, because it still isn't meaningfully\n> about workers.\n\nThe view has stats per subscription worker (i.e., apply worker and\ntablesync worker), not per relation. The subrelid is OID of the\nrelation that the tablesync worker is synchronizing. For the stats of\napply workers, it is null.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 19 Feb 2022 22:38:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
    "msg_contents": "On Sat, Feb 19, 2022 at 1:17 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Feb 18, 2022 at 1:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> 5. Record skipped data to the system catalog, say\n>> pg_subscription_conflict_history so that there is a chance to analyze\n>> and recover.\n>\n>\n>> We can discuss the\n>> details in a new thread.\n>\n> Agreed - the \"before skipping\" consideration seems considerably more helpful; but wouldn't need to be persistent, it could just be a view. A permanent record probably would best be recorded in the logs - though if we get the pre-skip functionality the user can view directly and save the results wherever they wish or we can provide a function to spool the information to the log. I don't see persistent in-database storage being that desirable here.\n>\n\nWe can consider this but note that it could fill up a lot of the LOG and make\nit difficult to find/interpret information. Say after skipping and\nlogging half of the transaction data, there is some error like\n\"connection error\", it will then repeat the entire table data again.\nAlso, say the table has toast columns, we have some mechanism to write\nsuch data in tables (like by compressing, etc) but printing huge data\nin Logs would be tricky and it may not be easy to read, we may\neven need some sort of tuple header, column header etc. Also, there\ncould be errors from other sessions in-between which we should be able\nto filter out but still it may not be that clear.\n\n> If we only do something after the transaction has been skipped it may be useful to add an option to the skipping command to auto-disable the subscription after the transaction has been skipped and before any subsequent transactions are applied. 
This will give the user a chance to process the post-skipped information before restoring sync and having the system begin changing underneath them again.\n>\n\nI think it can be helpful and probably can be done as a separate patch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 19 Feb 2022 19:13:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 19, 2022 at 10:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 19, 2022 at 6:51 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Saturday, February 19, 2022, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Sat, Feb 19, 2022 at 1:17 AM David G. Johnston\n> >> <david.g.johnston@gmail.com> wrote:\n> >> >\n> >> > On Fri, Feb 18, 2022 at 1:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >> >>\n> >> >>\n> >> >> Here is the summary of the discussion, changes, and plan.\n> >> >>\n> >> >> 1. Move some error information such as the error message to a new\n> >> >> system catalog, pg_subscription_error. The pg_subscription_error table\n> >> >> would have the following columns:\n> >> >>\n> >> >> * sesubid : subscription Oid.\n> >> >> * serelid : relation Oid (NULL for apply worker).\n> >> >> * seerrlsn : commit-LSN or the error transaction.\n> >> >> * seerrcmd : command (INSERT, UPDATE, etc.) of the error transaction.\n> >> >> * seerrmsg : error message\n> >> >\n> >> >\n> >> > Not a fan of the \"se\" prefix but overall yes. We should include a timestamp.\n> >> >\n> >>\n> >> How about naming it pg_subscription_worker_error as Peter E. has\n> >> suggested in one of his emails? I find pg_subscription_error slightly\n> >> odd as one could imagine that even the errors related to subscription\n> >> commands like Alter Subscription.\n> >>\n> >\n> > Adding worker makes sense.\n\nAgreed.\n\n> >\n> >>\n> >> >>\n> >> >> The tuple is inserted or updated when an apply worker or a tablesync\n> >> >> worker raises an error. 
If the same error occurs in a row, the update\n> >> >> is skipped.\n> >> >\n> >>\n> >> Are you going to query table to check if it is same error?\n> >\n> >\n> > I don’t get the question, the quoted text is your which I disagree with.\n> >\n>\n> It was Sawada-San's email and this question was for him.\n\nWhat I wanted to say about how to insert/update the tuple to\npg_subscription_worker_error is that we insert a new tuple for the\nfirst time, then, when the next error occurs, the worker fetches the\ntuple and checks if the error (i.e., error-LSN, error-cmd, and\nerror-message) is exactly the same as the previous one. If it is, we can\nskip updating it. But if we include the error-timestamp in the tuple,\nwe need to update it every time.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 19 Feb 2022 22:44:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 19, 2022 at 4:47 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Feb 18, 2022 at 1:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>>\n>> Here is the summary of the discussion, changes, and plan.\n>>\n>> 1. Move some error information such as the error message to a new\n>> system catalog, pg_subscription_error. The pg_subscription_error table\n>> would have the following columns:\n>>\n>> * sesubid : subscription Oid.\n>> * serelid : relation Oid (NULL for apply worker).\n>> * seerrlsn : commit-LSN or the error transaction.\n>> * seerrcmd : command (INSERT, UPDATE, etc.) of the error transaction.\n>> * seerrmsg : error message\n>\n>\n> Not a fan of the \"se\" prefix but overall yes. We should include a timestamp.\n>\n>>\n>> The tuple is inserted or updated when an apply worker or a tablesync\n>> worker raises an error. If the same error occurs in a row, the update\n>> is skipped.\n>\n>\n> I disagree with this - I would treat every new instance of an error to be important and insert on conflict (sesubid, serelid) the new entry, updating lsn/cmd/msg with the new values.\n>\n>> The tuple is removed in the following cases:\n>>\n>> * the subscription is dropped.\n>> * the table is dropped.\n>>\n>> * the table is removed from the subscription.\n>> * the worker successfully committed a non-empty transaction.\n>\n>\n> Correct. This handles the \"end of sync worker\" just fine since its final action should be a successful commit of a non-empty transaction.\n>>\n>>\n>> With this change, pg_stat_subscription_workers will be like:\n>>\n>> * subid\n>> * subname\n>> * subrelid\n>> * error_count\n>> * last_error_timestamp\n>>\n>> This view will be extended by adding transaction statistics proposed\n>> on another thread[1].\n>\n>\n> I haven't reviewed that thread but in-so-far as this one goes I would just drop this altogether. 
The error count, if desired, can be added to pg_subscription_error, and the timestamp should be added there as noted above.\n>\n> If this is useful for the feature on the other thread it can be reconstituted from there.\n>\n>>\n>> 2. Fix a bug in pg_stat_subscription_workers. As pointed out by\n>> Andres, there is a bug in pg_stat_subscription_workers; it doesn't\n>> drop entries for already-dropped tables. We need to fix it.\n>\n>\n> Given the above, this becomes an N/A.\n>\n>>\n>> 3. Show commit-LSN of the error transaction in errcontext. Currently,\n>> we show XID and commit timestamp in the errcontext. But given that we\n>> use LSN in pg_subscriptoin_error, it's better to show commit-LSN as\n>> well (or instead of error-XID).\n>\n>\n> Agreed, I think: what \"errcontext\" is this referring to?\n\nThe CONTEXT part in a log message. The apply worker and the tablesync\nworker write the details of the changes on an error in the CONTEXT\npart as follow:\n\nERROR: duplicate key value violates unique constraint \"test_pkey\"\nDETAIL: Key (id)=(1) already exists.\nCONTEXT: processing remote data during \"INSERT\" for replication\ntarget relation \"public.test\" in transaction 716 with commit timestamp\n2021-09-29 15:52:45.165754+00\n\n>>\n>>\n>> 5. Record skipped data to the system catalog, say\n>> pg_subscription_conflict_history so that there is a chance to analyze\n>> and recover.\n>\n>\n>> We can discuss the\n>> details in a new thread.\n>\n> Agreed - the \"before skipping\" consideration seems considerably more helpful; but wouldn't need to be persistent, it could just be a view. A permanent record probably would best be recorded in the logs - though if we get the pre-skip functionality the user can view directly and save the results wherever they wish or we can provide a function to spool the information to the log. 
I don't see persistent in-database storage being that desirable here.\n>\n> If we only do something after the transaction has been skipped it may be useful to add an option to the skipping command to auto-disable the subscription after the transaction has been skipped and before any subsequent transactions are applied. This will give the user a chance to process the post-skipped information before restoring sync and having the system begin changing underneath them again.\n>\n>>\n>>\n>> 4 and 5 might be better introduced together but I think since the user\n>> is able to check what changes will be skipped on the publisher side we\n>> can do 5 for v16.\n>\n>\n> How would one go about doing that (checking what changes will be skipped on the publisher side)?\n\nWe can copy the replication slot while changing the plugin by using\npg_copy_replication_slot(). Then we can check the changes by using\npg_logical_slot_peek_changes() with the copied slot.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 19 Feb 2022 22:48:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
    "msg_contents": "Hi,\n\nOn 2022-02-19 22:38:19 +0900, Masahiko Sawada wrote:\n> On Sat, Feb 19, 2022 at 5:32 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-02-18 17:26:04 +0900, Masahiko Sawada wrote:\n> > > With this change, pg_stat_subscription_workers will be like:\n> > >\n> > > * subid\n> > > * subname\n> > > * subrelid\n> > > * error_count\n> > > * last_error_timestamp\n> >\n> > > This view will be extended by adding transaction statistics proposed\n> > > on another thread[1].\n> >\n> > I do not agree with these bits. What's the point of these per-relation stats\n> > at this poitns. You're just duplicating the normal relation pg_stats here.\n> >\n> > I really think we just should drop pg_stat_subscription_workers. Even if we\n> > don't, we definitely should rename it, because it still isn't meaningfully\n> > about workers.\n> \n> The view has stats per subscription worker (i.e., apply worker and\n> tablesync worker), not per relation. The subrelid is OID of the\n> relation that the tablesync worker is synchronizing. For the stats of\n> apply workers, it is null.\n\nThat's precisely the misuse of the stats subsystem that I'm complaining about\nhere. The whole design of pgstat (as it is today) only makes sense if you can\nlose a message and it doesn't matter much, because it's just an incremental\ncounter increment that's lost. And to be able to properly prune dead pgstat\ncontents, the objects whose stats are kept around either need to be\npermanent (e.g. stats about WAL) or a record of the objects needs to exist\n(e.g. stats about relations).\n\n\nEven leaving everything else aside, a key of (dboid, subid, subrelid), where\nsubrelid can be NULL, but where (dboid, subid) is *not* unique, imo is poor\nrelational design. 
What is the justification for mixing relation specific and\nnon-relation specific contents in this view?\n\n\nThe patch you referenced [1] should just store the stats in the\npg_stat_subscription view, not pg_stat_subscription_workers.\n\nIt *does* make sense to keep stats about the number of table syncs that failed\netc. But that should be a counter in pg_stat_subscription, not a row in\npg_stat_subscription_workers.\n\n\nIOW, we should just drop pg_stat_subscription_workers, and add a few counters\nto pg_stat_subscription.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/TYCPR01MB8373E658CEE48FE05505A120ED529%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\n",
"msg_date": "Sat, 19 Feb 2022 08:02:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
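The keying problem described above can be made concrete with a small SQL sketch. The table name and columns here are invented for illustration only (this is not the actual catalog or stats-view definition); under default SQL semantics, a UNIQUE constraint treats NULLs as distinct, so a nullable subrelid cannot complete a real key:

```sql
-- Hypothetical illustration of the (dboid, subid, subrelid) keying problem:
-- subrelid is NULL for the apply worker's row, so it cannot be part of a
-- PRIMARY KEY, and (dboid, subid) alone is not unique.
CREATE TABLE stat_subscription_workers_demo (
    dboid       oid NOT NULL,
    subid       oid NOT NULL,
    subrelid    oid,                      -- NULL for apply-worker rows
    error_count bigint NOT NULL DEFAULT 0,
    UNIQUE (dboid, subid, subrelid)
);

-- NULLs compare as distinct, so two "apply worker" rows for the same
-- subscription both pass the UNIQUE constraint:
INSERT INTO stat_subscription_workers_demo (dboid, subid, subrelid)
VALUES (1, 10, NULL),
       (1, 10, NULL);                    -- accepted, not a constraint violation
```

(Later PostgreSQL releases added a `UNIQUE NULLS NOT DISTINCT` option that would reject the second row, but the relational-design objection stands either way.)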
{
"msg_contents": "On Sat, Feb 19, 2022 at 9:02 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Even leaving everything else aside, a key of (dboid, subid, subrelid),\n> where\n> subrelid can be NULL, but where (dboid, subid) is *not* unique, imo is poor\n> relational design. What is the justification for mixing relation specific\n> and\n> non-relation specific contents in this view?\n>\n\nThe to-be-created pg_subscription_worker_error has this same design.\nIt really is a worker-oriented view so the PK should ideally (ignoring\ndboid at the moment) be just workerid and subid and subrelid would just be\nattributes, of which subrelid is optional. But a worker's natural key is\n(subid, subrelid) so long as you accept that null has to be considered\nequal to itself. Not usually an ideal model to pick but it seems like this\nis one of those exceptions to the rule that works just fine. Feel free to\nuse InvalidOID instead of null if that makes things more consistent and\neasier to implement. The conceptual model is still the same. It doesn't\nseem to be beneficial to have tablesync and apply workers have their own\ndistinct relations. They are similar enough that they can share this one.\n\n\n> IOW, we should just drop pg_stat_subscription_workers, and add a few\n> counters\n> to pg_stat_subscription.\n>\n>\n+1\nDavid J.\n\nOn Sat, Feb 19, 2022 at 9:02 AM Andres Freund <andres@anarazel.de> wrote:\nEven leaving everything else aside, a key of (dboid, subid, subrelid), where\nsubrelid can be NULL, but where (dboid, subid) is *not* unique, imo is poor\nrelational design. What is the justification for mixing relation specific and\nnon-relation specific contents in this view?The to-be-created pg_subscription_worker_error has this same design.It really is a worker-oriented view so the PK should ideally (ignoring dboid at the moment) be just workerid and subid and subrelid would just be attributes, of which subrelid is optional. 
But a worker's natural key is (subid, subrelid) so long as you accept that null has to be considered equal to itself. Not usually an ideal model to pick but it seems like this is one of those exceptions to the rule that works just fine. Feel free to use InvalidOID instead of null if that makes things more consistent and easier to implement. The conceptual model is still the same. It doesn't seem to be beneficial to have tablesync and apply workers have their own distinct relations. They are similar enough that they can share this one.\n\nIOW, we should just drop pg_stat_subscription_workers, and add a few counters\nto pg_stat_subscription.+1David J.",
"msg_date": "Sat, 19 Feb 2022 09:16:40 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-19 09:16:40 -0700, David G. Johnston wrote:\n> On Sat, Feb 19, 2022 at 9:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > Even leaving everything else aside, a key of (dboid, subid, subrelid),\n> > where\n> > subrelid can be NULL, but where (dboid, subid) is *not* unique, imo is poor\n> > relational design. What is the justification for mixing relation specific\n> > and\n> > non-relation specific contents in this view?\n\n> The to-be-created pg_subscription_worker_error has this same design.\n> It really is a worker-oriented view so the PK should ideally (ignoring\n> dboid at the moment) be just workerid and subid and subrelid would just be\n> attributes, of which subrelid is optional. But a worker's natural key is\n> (subid, subrelid) so long as you accept that null has to be considered\n> equal to itself. Not usually an ideal model to pick but it seems like this\n> is one of those exceptions to the rule that works just fine. Feel free to\n> use InvalidOID instead of null if that makes things more consistent and\n> easier to implement. The conceptual model is still the same. It doesn't\n> seem to be beneficial to have tablesync and apply workers have their own\n> distinct relations. They are similar enough that they can share this one.\n\nI'm not so convinced. \"User space\" reasonably might want to know why a certain\nreplication hasn't yet become part of logical replication / why a refresh\nhasn't completed yet. For that something like\n\n SELECT *\n FROM pg_subscription ps JOIN pg_subscription_worker_error pwe ON (ps.oid = pwe.subid)\n WHERE subrelid = 'foo'::regclass;\n\nmakes sense.\n\nThe apply side is different from that. 
Yes, a \"subrelid = ...\" predicate would\nfilter out non-relation specific failures, but it still seems weird that apply\nfailures would even need to be filtered out for something like this.\n\nAnd conversely, monitoring for apply failures is conceptually looking for\nsomething different than tablesync.\n\nIMO the type of information you'd want for apply failures is substantially\ndifferent enough from worker failures that I don't really see the temptation\nto put them in the same table.\n\n\nI also still think that _worker shouldn't be part of any of the naming\nhere. It's an implementation detail that we use one worker for one tablesync\netc. It'd make sense for one apply worker to sync multiple small tables, and\nit'd make a lot of sense for multiple apply workers to collaborate on syncing\none large relation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 19 Feb 2022 08:37:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 19, 2022 at 9:37 AM Andres Freund <andres@anarazel.de> wrote:\n\n> IMO the type of information you'd want for apply failures is substantially\n>\ndifferent enough from worker failures that I don't really see the temptation\n> to put them in the same table.\n>\n>\nIt's an error message and a transaction LSN in both cases right now, along\nwith knowledge of whether said transaction only affects a single table\n(relid is not null) or not (relid is null). Do you have a concrete idea in\nmind that would make this separation need more obvious?\n\n\n> I also still think that _worker shouldn't be part of any of the naming\n> here. It's an implementation detail that we use one worker for one\n> tablesync\n> etc. It'd make sense for one apply worker to sync multiple small tables,\n> and\n> it'd make a lot of sense for multiple apply workers to collaborate on\n> syncing\n> one large relation.\n>\n\nGood point. The existing design doesn't actually require the \"worker\nstatus\" concept I described; so let's not have worker be part of the name.\n\nSo basically separate the proposed pg_subscription_error table into two: a\npg_subscription_tablesync_error and pg_subscription_apply_error. The\nformer having a relid field while the later does not. What fields belong on\neach?\n\nHow about we have it both ways. Two tables but provide a canonical view\nunion'ing them that a user can query to see whether any errors are\npresently affecting their system?\n\nDavid J.\n\nOn Sat, Feb 19, 2022 at 9:37 AM Andres Freund <andres@anarazel.de> wrote:IMO the type of information you'd want for apply failures is substantially\ndifferent enough from worker failures that I don't really see the temptation\nto put them in the same table.\nIt's an error message and a transaction LSN in both cases right now, along with knowledge of whether said transaction only affects a single table (relid is not null) or not (relid is null). 
Do you have a concrete idea in mind that would make this separation need more obvious?\n\nI also still think that _worker shouldn't be part of any of the naming\nhere. It's an implementation detail that we use one worker for one tablesync\netc. It'd make sense for one apply worker to sync multiple small tables, and\nit'd make a lot of sense for multiple apply workers to collaborate on syncing\none large relation.Good point. The existing design doesn't actually require the \"worker status\" concept I described; so let's not have worker be part of the name.So basically separate the proposed pg_subscription_error table into two: a pg_subscription_tablesync_error and pg_subscription_apply_error. The former having a relid field while the later does not. What fields belong on each?How about we have it both ways. Two tables but provide a canonical view union'ing them that a user can query to see whether any errors are presently affecting their system?David J.",
"msg_date": "Sat, 19 Feb 2022 10:04:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sun, Feb 20, 2022 at 1:02 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-02-19 22:38:19 +0900, Masahiko Sawada wrote:\n> > On Sat, Feb 19, 2022 at 5:32 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-02-18 17:26:04 +0900, Masahiko Sawada wrote:\n> > > > With this change, pg_stat_subscription_workers will be like:\n> > > >\n> > > > * subid\n> > > > * subname\n> > > > * subrelid\n> > > > * error_count\n> > > > * last_error_timestamp\n> > >\n> > > > This view will be extended by adding transaction statistics proposed\n> > > > on another thread[1].\n> > >\n> > > I do not agree with these bits. What's the point of these per-relation stats\n> > > at this poitns. You're just duplicating the normal relation pg_stats here.\n> > >\n> > > I really think we just should drop pg_stat_subscription_workers. Even if we\n> > > don't, we definitely should rename it, because it still isn't meaningfully\n> > > about workers.\n> >\n> > The view has stats per subscription worker (i.e., apply worker and\n> > tablesync worker), not per relation. The subrelid is OID of the\n> > relation that the tablesync worker is synchronizing. For the stats of\n> > apply workers, it is null.\n>\n> That's precisely the misuse of the stats subsystem that I'm complaining about\n> here. The whole design of pgstat (as it is today) only makes sense if you can\n> loose a message and it doesn't matter much, because it's just an incremental\n> counter increment that's lost. And to be able properly prune dead pgstat\n> contents the underlying objects stats are kept around either need to be\n> permanent (e.g. stats about WAL) or a record of objects needs to exist\n> (e.g. stats about relations).\n>\n>\n> Even leaving everything else aside, a key of (dboid, subid, subrelid), where\n> subrelid can be NULL, but where (dboid, subid) is *not* unique, imo is poor\n> relational design. 
What is the justification for mixing relation specific and\n> non-relation specific contents in this view?\n\nI think the current schema of the view with key (dboid, subid,\nsubrelid) comes from the fact that we store the same statistics for\nboth apply and tablesync. I think even if we have two separate views\nfor apply and tablesync, they would have almost the same columns\nexcept for their keys. Also, from the relational design point of view,\npg_locks has a somewhat similar table schema; its database and\nrelation columns can be NULL.\n\n>\n>\n> The patch you referenced [1] should just store the stats in the\n> pg_stat_subscription view, not pg_stat_subscription_workers.\n>\n> It *does* make sense to keep stats about the number of table syncs that failed\n> etc. But that should be a counter in pg_stat_subscription, not a row in\n> pg_stat_subscription_workers.\n\nWe have discussed using pg_stat_subscription before but concluded it's\nnot an appropriate place to store error information because it ends up\nkeeping cumulative stats mixed with non-cumulative stats. To take a\nprecedent, we used to store accumulative statistics such as spill_txns\nto pg_stat_replication, but then for the same reason we moved those\nstatistics to the new stats view, pg_stat_replication_slot. New\nsubscription statistics that we're introducing are cumulative\nstatistics whereas pg_stat_subscription is a dynamic statistics view.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 21 Feb 2022 12:56:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 19, 2022 at 10:07 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> I also still think that _worker shouldn't be part of any of the naming\n> here.\n>\n\nOkay, the other options that comes to mind for this are:\npg_subscription_replication_error, or\npg_subscription_lreplication_error, or pg_subscription_lrep_error? We\ncan use similar naming at another place (view) if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Feb 2022 10:38:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 19, 2022 at 10:35 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sat, Feb 19, 2022 at 9:37 AM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> IMO the type of information you'd want for apply failures is substantially\n>>\n>> different enough from worker failures that I don't really see the temptation\n>> to put them in the same table.\n>>\n>\n> It's an error message and a transaction LSN in both cases right now, along with knowledge of whether said transaction only affects a single table (relid is not null) or not (relid is null). Do you have a concrete idea in mind that would make this separation need more obvious?\n>\n\nI would also like to mention that in some cases, sync workers also\nbehaves like apply worker (after initial sync till it catches up with\nthe apply worker) and try to stream and apply changes similar to apply\nworker. The error during that phase will be no different than the\napply worker. One idea to make the separation more obvious is to\nintroduce 'worker_type' column similar to backend_type in\npg_stat_activity which will tell whether it is an apply worker or a\ntable sync worker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Feb 2022 10:40:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-21 12:56:46 +0900, Masahiko Sawada wrote:\n> > The patch you referenced [1] should just store the stats in the\n> > pg_stat_subscription view, not pg_stat_subscription_workers.\n> >\n> > It *does* make sense to keep stats about the number of table syncs that failed\n> > etc. But that should be a counter in pg_stat_subscription, not a row in\n> > pg_stat_subscription_workers.\n>\n> We have discussed using pg_stat_subscription before but concluded it's\n> not an appropriate place to store error information because it ends up\n> keeping cumulative stats mixed with non-cumulative stats.\n\nWell, as we've amply discussed, the non-cumulative stats shouldn't be in the\npgstat subsystem.\n\n\n> To take a precedent, we used to store accumulative statistics such as\n> spill_txns to pg_stat_replication, but then for the same reason we moved\n> those statistics to the new stats view, pg_stat_replication_slot. New\n> subscription statistics that we're introducing are cumulative statistics\n> whereas pg_stat_subscription is a dynamic statistics view.\n\nI'm happy to have cumulative subscriber stats somewhere in pgstats. But it\nshouldn't be split by worker or relation, and it shouldn't contain\nnon-cumulative error information.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Feb 2022 21:34:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 11:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-02-21 12:56:46 +0900, Masahiko Sawada wrote:\n>\n> > To take a precedent, we used to store accumulative statistics such as\n> > spill_txns to pg_stat_replication, but then for the same reason we moved\n> > those statistics to the new stats view, pg_stat_replication_slot. New\n> > subscription statistics that we're introducing are cumulative statistics\n> > whereas pg_stat_subscription is a dynamic statistics view.\n>\n> I'm happy to have cumulative subscriber stats somewhere in pgstats. But it\n> shouldn't be split by worker or relation, and it shouldn't contain\n> non-cumulative error information.\n>\n\nFair enough. Then, how about the following keeping the following information:\n\n* subid (subscription id)\n* subname (subscription name)\n* sync_error_count/sync_failure_count (number of timed table sync failed)\n* apply_error_count/apply_failure_count (number of times apply failed)\n* sync_success_count (number of table syncs successfully finished)\n* apply_commit_count (number of transactions applied successfully)\n* apply_rollback_count (number of transactions explicitly rolled back)\n* stats_reset (Time at which these statistics were last reset)\n\nThe view name could be pg_stat_subscription_lrep,\npg_stat_logical_replication, or something on those lines.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Feb 2022 12:39:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
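A rough sketch of what the counter set proposed in the message above could look like as a view definition. All names here are placeholders taken from that list; in particular, `pg_stat_get_subscription_stats_demo()` is an assumed helper function (no such function existed at the time of this discussion), imagined as returning one row of cumulative counters per subscription:

```sql
-- Hypothetical view over per-subscription cumulative counters; subname
-- comes from the pg_subscription catalog via a join, as discussed above.
CREATE VIEW pg_stat_subscription_lrep AS
SELECT s.oid AS subid,
       s.subname,
       st.sync_error_count,
       st.apply_error_count,
       st.sync_success_count,
       st.apply_commit_count,
       st.apply_rollback_count,
       st.stats_reset
FROM pg_subscription s,
     pg_stat_get_subscription_stats_demo(s.oid) AS st;  -- assumed stats function
```

Note that, unlike the worker-keyed design criticized earlier, this has exactly one row per subscription and no nullable key column.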
{
"msg_contents": "Hi,\n\nOn 2022-02-21 12:39:31 +0530, Amit Kapila wrote:\n> Fair enough. Then, how about the following keeping the following information:\n\nMostly sounds good.\n\n\n> * subid (subscription id)\n> * subname (subscription name)\n\nComing from catalog, via join, I assume?\n\n\n> * sync_error_count/sync_failure_count (number of timed table sync failed)\n> * apply_error_count/apply_failure_count (number of times apply failed)\n\nYep.\n\n\n> * sync_success_count (number of table syncs successfully finished)\n\nThis one I'm not quite convinced by. You can't rely on precise counts with\npgstats and we should be able to get a better idea from somewhere more\npermanent which relations succeeded? But it also doesn't do much harm, so ...\n\n\n> * apply_commit_count (number of transactions applied successfully)\n> * apply_rollback_count (number of transactions explicitly rolled back)\n\nWhat does \"explicit\" mean here?\n\n\n> * stats_reset (Time at which these statistics were last reset)\n> \n> The view name could be pg_stat_subscription_lrep,\n> pg_stat_logical_replication, or something on those lines.\n\npg_stat_subscription_stats :)\n\n(I really dislike that we have pg_stat_ stuff that's not actually stats, but\nsomething describing the current state, but that ship has well sailed).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Feb 2022 23:48:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-02-21 12:39:31 +0530, Amit Kapila wrote:\n> > Fair enough. Then, how about the following keeping the following information:\n>\n> Mostly sounds good.\n>\n>\n> > * subid (subscription id)\n> > * subname (subscription name)\n>\n> Coming from catalog, via join, I assume?\n>\n\nThe subname would be from pg_subscription catalog similar to what we\nare doing now for pg_stat_subscription_workers.\n\n>\n> > * sync_error_count/sync_failure_count (number of timed table sync failed)\n> > * apply_error_count/apply_failure_count (number of times apply failed)\n>\n> Yep.\n>\n>\n> > * sync_success_count (number of table syncs successfully finished)\n>\n> This one I'm not quite convinced by. You can't rely on precise counts with\n> pgstats and we should be able to get a better idea from somewhere more\n> permanent which relations succeeded? But it also doesn't do much harm, so ...\n>\n\nWe can get precise information from pg_subscription_rel (rels that are\nin ready/finish_copy state) but OTOH, during refresh some of the rels\nwould have been dropped or if a user creates/refreshes publication\nwith copy_data = false, then we won't get information about how many\ntable syncs succeeded? 
I have also kept this to make the sync\ninformation look consistent considering we have sync_failure_count.\n\n>\n> > * apply_commit_count (number of transactions applied successfully)\n> > * apply_rollback_count (number of transactions explicitly rolled back)\n>\n> What does \"explicit\" mean here?\n>\n\nIt is for the Rollback Prepared case and probably for streaming of\nin-progress transactions that eventually get rolled back.\n\n>\n> > * stats_reset (Time at which these statistics were last reset)\n> >\n> > The view name could be pg_stat_subscription_lrep,\n> > pg_stat_logical_replication, or something on those lines.\n>\n> pg_stat_subscription_stats :)\n>\n\nHaving *stat* two times in the name sounds slightly odd to me but let\nus see what others think. One more option could be\npg_stat_subscription_replication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Feb 2022 14:49:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 2:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Feb 21, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > > The view name could be pg_stat_subscription_lrep,\n> > > pg_stat_logical_replication, or something on those lines.\n> >\n> > pg_stat_subscription_stats :)\n> >\n>\n> Having *stat* two times in the name sounds slightly odd to me but let\n> us see what others think. One more option could be\n> pg_stat_subscription_replication.\n>\n>\nAgreed.\n\npg_stat_subscription_activity\n\nWe already have pg_stat_activity (which may be an argument against the\nsuggestion...)\n\nDavid J.\n\nOn Mon, Feb 21, 2022 at 2:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:On Mon, Feb 21, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > The view name could be pg_stat_subscription_lrep,\n> > pg_stat_logical_replication, or something on those lines.\n>\n> pg_stat_subscription_stats :)\n>\n\nHaving *stat* two times in the name sounds slightly odd to me but let\nus see what others think. One more option could be\npg_stat_subscription_replication.Agreed.pg_stat_subscription_activityWe already have pg_stat_activity (which may be an argument against the suggestion...)David J.",
"msg_date": "Mon, 21 Feb 2022 09:07:07 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-21 14:49:01 +0530, Amit Kapila wrote:\n> On Mon, Feb 21, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > > * stats_reset (Time at which these statistics were last reset)\n> > >\n> > > The view name could be pg_stat_subscription_lrep,\n> > > pg_stat_logical_replication, or something on those lines.\n> >\n> > pg_stat_subscription_stats :)\n\n> Having *stat* two times in the name sounds slightly odd to me but let\n> us see what others think. One more option could be\n> pg_stat_subscription_replication.\n\nIt was a joke, making light of our bad naming in pg_stat_*, not a serious\nsuggestion...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Feb 2022 08:17:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sun, Feb 20, 2022 at 10:10 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Sat, Feb 19, 2022 at 10:35 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Sat, Feb 19, 2022 at 9:37 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> >>\n> >> IMO the type of information you'd want for apply failures is\n> substantially\n> >>\n> >> different enough from worker failures that I don't really see the\n> temptation\n> >> to put them in the same table.\n> >>\n> >\n> > It's an error message and a transaction LSN in both cases right now,\n> along with knowledge of whether said transaction only affects a single\n> table (relid is not null) or not (relid is null). Do you have a concrete\n> idea in mind that would make this separation need more obvious?\n> >\n>\n> I would also like to mention that in some cases, sync workers also\n> behaves like apply worker (after initial sync till it catches up with\n> the apply worker) and try to stream and apply changes similar to apply\n> worker. The error during that phase will be no different than the\n> apply worker. One idea to make the separation more obvious is to\n> introduce 'worker_type' column similar to backend_type in\n> pg_stat_activity which will tell whether it is an apply worker or a\n> table sync worker.\n>\n>\nThe point isn't to make the separation more obvious by specifying which\nworker type is doing the work. It is to make the concept of worker type\n(and identity) irrelevant. The end user cannot (and should not be able to)\naddress individual workers - only the subscription.\n\nEven while a sync worker is in synchronization mode (as opposed to whatever\nmode comes before synchronization mode) it still only affects a single\ntable. To the end user the distinction between the two modes is immaterial.\n\nThe statement \"will be no different than the apply worker\" doesn't make\nsense to me given that in a multiple-table subscription (the only kind\nwhere this matters...) 
you will have multiple table sync workers in\nsynchronization mode and they both cannot behave identically to an apply\nworker otherwise they would step on each other's toes. That two different\ntable-specific updates produce the same error shouldn't be a problem if\nthat is indeed what happens (though if the error is on tableA having the\nworker for tableB report the tableA error would be odd - but not\nproblematic).\n\nI'll admit I don't fully understand the details of this particular\nsynchronization interaction but I'm not see how the discussion of \"errors\nduring table-specific updates\" vs \"errors during whole transaction\napplication\" can be affected by it.\n\nDavid J.\n\nOn Sun, Feb 20, 2022 at 10:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Sat, Feb 19, 2022 at 10:35 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sat, Feb 19, 2022 at 9:37 AM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> IMO the type of information you'd want for apply failures is substantially\n>>\n>> different enough from worker failures that I don't really see the temptation\n>> to put them in the same table.\n>>\n>\n> It's an error message and a transaction LSN in both cases right now, along with knowledge of whether said transaction only affects a single table (relid is not null) or not (relid is null). Do you have a concrete idea in mind that would make this separation need more obvious?\n>\n\nI would also like to mention that in some cases, sync workers also\nbehaves like apply worker (after initial sync till it catches up with\nthe apply worker) and try to stream and apply changes similar to apply\nworker. The error during that phase will be no different than the\napply worker. 
One idea to make the separation more obvious is to\nintroduce 'worker_type' column similar to backend_type in\npg_stat_activity which will tell whether it is an apply worker or a\ntable sync worker.The point isn't to make the separation more obvious by specifying which worker type is doing the work. It is to make the concept of worker type (and identity) irrelevant. The end user cannot (and should not be able to) address individual workers - only the subscription.Even while a sync worker is in synchronization mode (as opposed to whatever mode comes before synchronization mode) it still only affects a single table. To the end user the distinction between the two modes is immaterial.The statement \"will be no different than the apply worker\" doesn't make sense to me given that in a multiple-table subscription (the only kind where this matters...) you will have multiple table sync workers in synchronization mode and they both cannot behave identically to an apply worker otherwise they would step on each other's toes. That two different table-specific updates produce the same error shouldn't be a problem if that is indeed what happens (though if the error is on tableA having the worker for tableB report the tableA error would be odd - but not problematic).I'll admit I don't fully understand the details of this particular synchronization interaction but I'm not see how the discussion of \"errors during table-specific updates\" vs \"errors during whole transaction application\" can be affected by it.David J.",
"msg_date": "Mon, 21 Feb 2022 09:19:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 9:37 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Mon, Feb 21, 2022 at 2:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Mon, Feb 21, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> > > The view name could be pg_stat_subscription_lrep,\n>> > > pg_stat_logical_replication, or something on those lines.\n>> >\n>> > pg_stat_subscription_stats :)\n>> >\n>>\n>> Having *stat* two times in the name sounds slightly odd to me but let\n>> us see what others think. One more option could be\n>> pg_stat_subscription_replication.\n>>\n>\n> Agreed.\n>\n> pg_stat_subscription_activity\n>\n> We already have pg_stat_activity (which may be an argument against the suggestion...)\n>\n\nI don't know if that can be an argument against it but one can imagine\nthat we record other subscription changes like (change of\npublications, etc.). I personally feel it may be better to add\n'_replication' in some way like pg_stat_sub_replication_activity but I\nam fine either way.\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Feb 2022 09:07:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 21, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-02-21 12:39:31 +0530, Amit Kapila wrote:\n> > > Fair enough. Then, how about the following keeping the following information:\n> >\n> > Mostly sounds good.\n> >\n> >\n> > > * subid (subscription id)\n> > > * subname (subscription name)\n> >\n> > Coming from catalog, via join, I assume?\n> >\n>\n> The subname would be from pg_subscription catalog similar to what we\n> are doing now for pg_stat_subscription_workers.\n\nI've attached a patch that changes pg_stat_subscription_workers view.\nIt removes non-cumulative values such as error details such as\nerror-XID and the error message from the view, and consequently the\nview now has only cumulative statistics counters: apply_error_count\nand sync_error_count. Since the new view name is under discussion I\ntemporarily chose pg_stat_subscription_activity.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 22 Feb 2022 14:45:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 22, 2022 at 11:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached a patch that changes pg_stat_subscription_workers view.\n> It removes non-cumulative values such as error details such as\n> error-XID and the error message from the view, and consequently the\n> view now has only cumulative statistics counters: apply_error_count\n> and sync_error_count. Since the new view name is under discussion I\n> temporarily chose pg_stat_subscription_activity.\n>\n\nFew comments:\n=============\n1.\n--- a/src/backend/catalog/system_functions.sql\n+++ b/src/backend/catalog/system_functions.sql\n@@ -637,11 +637,9 @@ REVOKE EXECUTE ON FUNCTION\npg_stat_reset_single_table_counters(oid) FROM public;\n\n REVOKE EXECUTE ON FUNCTION\npg_stat_reset_single_function_counters(oid) FROM public;\n\n-REVOKE EXECUTE ON FUNCTION pg_stat_reset_replication_slot(text) FROM public;\n-\n-REVOKE EXECUTE ON FUNCTION pg_stat_reset_subscription_worker(oid) FROM public;\n+REVOKE EXECUTE ON FUNCTION\npg_stat_reset_single_subscription_counters(oid) FROM public;\n\n-REVOKE EXECUTE ON FUNCTION pg_stat_reset_subscription_worker(oid,\noid) FROM public;\n+REVOKE EXECUTE ON FUNCTION pg_stat_reset_replication_slot(text) FROM public;\n\nIs there a need to change anything about\npg_stat_reset_replication_slot() in this patch?\n\n2. Do we still need to use LATERAL in the view's query?\n\n3. Can we send error stats pgstat_report_stat() as that will be called\nvia proc_exit() path. We can set the phase (apply/sync) in\napply_error_callback_arg and then use that to send the appropriate\nmessage. I think this will obviate the need for try..catch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Feb 2022 15:23:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tuesday, February 22, 2022 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached a patch that changes pg_stat_subscription_workers view.\r\n> It removes non-cumulative values such as error details such as error-XID and\r\n> the error message from the view, and consequently the view now has only\r\n> cumulative statistics counters: apply_error_count and sync_error_count. Since\r\n> the new view name is under discussion I temporarily chose\r\n> pg_stat_subscription_activity.\r\nHi, thank you for sharing the patch.\r\n\r\n\r\nFew minor comments for v1.\r\n\r\n(1) commit message's typo\r\n\r\nThis commits changes the view so that it stores only statistics\r\ncounters: apply_error_count and sync_error_count.\r\n\r\n\"This commits\" -> \"This commit\"\r\n\r\n(2) minor improvement suggestion for the commit message\r\n\r\nI suggest that we touch the commit id 8d74fc9\r\nthat introduces the pg_stat_subscription_workers\r\nin the commit message, for better traceability. Below is an example.\r\n\r\nFrom:\r\nAs the result of the discussion, we've concluded that the stats\r\ncollector is not an appropriate place to store the error information of\r\nsubscription workers.\r\n\r\nTo:\r\nAs the result of the discussion about the view introduced by 8d74fc9,...\r\n\r\n(3) doc/src/sgml/logical-replication.sgml\r\n\r\nKindly refer to commit id 85c61ba for the detail.\r\nYou forgot \"the\" in the below sentence.\r\n\r\n@@ -346,8 +346,6 @@\r\n <para>\r\n A conflict will produce an error and will stop the replication; it must be\r\n resolved manually by the user. 
Details about the conflict can be found in\r\n- <link linkend=\"monitoring-pg-stat-subscription-workers\">\r\n- <structname>pg_stat_subscription_workers</structname></link> and the\r\n subscriber's server log.\r\n </para>\r\n\r\nFrom:\r\nsubscriber's server log.\r\nto:\r\nthe subscriber's server log.\r\n\r\n(4) doc/src/sgml/monitoring.sgml\r\n\r\n <row>\r\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n- <structfield>last_error_time</structfield> <type>timestamp with time zone</type>\r\n+ <structfield>sync_error_count</structfield> <type>uint8</type>\r\n </para>\r\n <para>\r\n- Last time at which this error occurred\r\n+ Number of times the error occurred during the initial data copy\r\n </para></entry>\r\n\r\nI supposed it might be better to use \"initial data sync\"\r\nor \"initial data synchronization\", rather than \"initial data copy\".\r\n\r\n(5) src/test/subscription/t/026_worker_stats.pl\r\n\r\n+# Truncate test_tab1 so that table sync can continue.\r\n+$node_subscriber->safe_psql('postgres', \"TRUNCATE test_tab1;\");\r\n\r\nThe second truncate is for apply, isn't it? Therefore, kindly change\r\n\r\nFrom:\r\nTruncate test_tab1 so that table sync can continue.\r\nTo:\r\nTruncate test_tab1 so that apply can continue.\r\n\r\n(6) src/test/subscription/t/026_worker_stats.pl\r\n\r\n+# Insert more data to test_tab1 on the subscriber and then on the publisher, raising an\r\n+# error on the subscriber due to violation of the unique constraint on test_tab1.\r\n+$node_subscriber->safe_psql('postgres', \"INSERT INTO test_tab1 VALUES (2)\");\r\n\r\nDid we need this insert ?\r\nIf you want to indicate the apply is working okay after the error of table sync is solved,\r\nwaiting for the max value in the test_tab1 becoming 2 on the subscriber by polling query\r\nwould work. But, I was not sure if this is essentially necessary for the testing purpose.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 22 Feb 2022 12:22:18 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 22, 2022 at 6:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 22, 2022 at 11:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached a patch that changes pg_stat_subscription_workers view.\n> > It removes non-cumulative values such as error details such as\n> > error-XID and the error message from the view, and consequently the\n> > view now has only cumulative statistics counters: apply_error_count\n> > and sync_error_count. Since the new view name is under discussion I\n> > temporarily chose pg_stat_subscription_activity.\n> >\n>\n> Few comments:\n> =============\n\nThank you for the comments!\n\n> 1.\n> --- a/src/backend/catalog/system_functions.sql\n> +++ b/src/backend/catalog/system_functions.sql\n> @@ -637,11 +637,9 @@ REVOKE EXECUTE ON FUNCTION\n> pg_stat_reset_single_table_counters(oid) FROM public;\n>\n> REVOKE EXECUTE ON FUNCTION\n> pg_stat_reset_single_function_counters(oid) FROM public;\n>\n> -REVOKE EXECUTE ON FUNCTION pg_stat_reset_replication_slot(text) FROM public;\n> -\n> -REVOKE EXECUTE ON FUNCTION pg_stat_reset_subscription_worker(oid) FROM public;\n> +REVOKE EXECUTE ON FUNCTION\n> pg_stat_reset_single_subscription_counters(oid) FROM public;\n>\n> -REVOKE EXECUTE ON FUNCTION pg_stat_reset_subscription_worker(oid,\n> oid) FROM public;\n> +REVOKE EXECUTE ON FUNCTION pg_stat_reset_replication_slot(text) FROM public;\n>\n> Is there a need to change anything about\n> pg_stat_reset_replication_slot() in this patch?\n\nIt doesn't change pg_stat_reset_replication_slot() but just changes\nthe order in order to put the modified function\npg_stat_reset_single_subscription_counters() closer to other similar\nfunctions such as pg_stat_reset_single_function_counters().\n\n>\n> 2. Do we still need to use LATERAL in the view's query?\n\nThere are some functions that use LATERAL in a similar way but it\nseems no need to put LATERAL before the function call. Will remove.\n\n> 3. 
Can we send error stats pgstat_report_stat() as that will be called\n> via proc_exit() path. We can set the phase (apply/sync) in\n> apply_error_callback_arg and then use that to send the appropriate\n> message. I think this will obviate the need for try..catch.\n\nIf we use pgstat_report_stat() to send subscription stats messages,\nall processes end up going through that path. It might not bring\noverhead in practice but I'd like to avoid it. And, since the apply\nworker also calls pgstat_report_stat() at the end of the transaction,\nwe might need to change pgstat_report_stat() so that it doesn't send\nthe subscription messages when it gets called at the end of the\ntransaction. I think it's likely that PG_TRY() and PG_CATCH() wil be\nadded for example, when the disable_on_error feature or the storing\nerror details feature is introduced, so obviating the need for them at\nthis point would not benefit much.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 22 Feb 2022 23:32:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 22, 2022 at 9:22 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, February 22, 2022 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached a patch that changes pg_stat_subscription_workers view.\n> > It removes non-cumulative values such as error details such as error-XID and\n> > the error message from the view, and consequently the view now has only\n> > cumulative statistics counters: apply_error_count and sync_error_count. Since\n> > the new view name is under discussion I temporarily chose\n> > pg_stat_subscription_activity.\n> Hi, thank you for sharing the patch.\n>\n>\n> Few minor comments for v1.\n\nThank you for the comments!\n\n>\n> (1) commit message's typo\n>\n> This commits changes the view so that it stores only statistics\n> counters: apply_error_count and sync_error_count.\n>\n> \"This commits\" -> \"This commit\"\n\nWill fix.\n\n>\n> (2) minor improvement suggestion for the commit message\n>\n> I suggest that we touch the commit id 8d74fc9\n> that introduces the pg_stat_subscription_workers\n> in the commit message, for better traceability. Below is an example.\n>\n> From:\n> As the result of the discussion, we've concluded that the stats\n> collector is not an appropriate place to store the error information of\n> subscription workers.\n>\n> To:\n> As the result of the discussion about the view introduced by 8d74fc9,...\n\nOkay, will add the commit reference.\n\n>\n> (3) doc/src/sgml/logical-replication.sgml\n>\n> Kindly refer to commit id 85c61ba for the detail.\n> You forgot \"the\" in the below sentence.\n>\n> @@ -346,8 +346,6 @@\n> <para>\n> A conflict will produce an error and will stop the replication; it must be\n> resolved manually by the user. 
Details about the conflict can be found in\n> - <link linkend=\"monitoring-pg-stat-subscription-workers\">\n> - <structname>pg_stat_subscription_workers</structname></link> and the\n> subscriber's server log.\n> </para>\n>\n> From:\n> subscriber's server log.\n> to:\n> the subscriber's server log.\n\nWill fix.\n\n>\n> (4) doc/src/sgml/monitoring.sgml\n>\n> <row>\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> - <structfield>last_error_time</structfield> <type>timestamp with time zone</type>\n> + <structfield>sync_error_count</structfield> <type>uint8</type>\n> </para>\n> <para>\n> - Last time at which this error occurred\n> + Number of times the error occurred during the initial data copy\n> </para></entry>\n>\n> I supposed it might be better to use \"initial data sync\"\n> or \"initial data synchronization\", rather than \"initial data copy\".\n\n\"Initial data synchronization\" sounds like the whole table\nsynchronization process including COPY and applying changes to catch\nup. But sync_error_count is incremented only during COPY so I used\n\"initial data copy\". What do you think?\n\n>\n> (5) src/test/subscription/t/026_worker_stats.pl\n>\n> +# Truncate test_tab1 so that table sync can continue.\n> +$node_subscriber->safe_psql('postgres', \"TRUNCATE test_tab1;\");\n>\n> The second truncate is for apply, isn't it? 
Therefore, kindly change\n>\n> From:\n> Truncate test_tab1 so that table sync can continue.\n> To:\n> Truncate test_tab1 so that apply can continue.\n\nRight, will fix.\n\n>\n> (6) src/test/subscription/t/026_worker_stats.pl\n>\n> +# Insert more data to test_tab1 on the subscriber and then on the publisher, raising an\n> +# error on the subscriber due to violation of the unique constraint on test_tab1.\n> +$node_subscriber->safe_psql('postgres', \"INSERT INTO test_tab1 VALUES (2)\");\n>\n> Did we need this insert ?\n> If you want to indicate the apply is working okay after the error of table sync is solved,\n> waiting for the max value in the test_tab1 becoming 2 on the subscriber by polling query\n> would work. But, I was not sure if this is essentially necessary for the testing purpose.\n\nYou're right, it's not necessary. Also, it seems better to change the\nTAP test file name from 026_worker_stats.pl to 026_stats.pl. Will\nincorporate these changes.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 22 Feb 2022 23:46:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tuesday, February 22, 2022 11:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Tue, Feb 22, 2022 at 9:22 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > (4) doc/src/sgml/monitoring.sgml\r\n> >\r\n> > <row>\r\n> > <entry role=\"catalog_table_entry\"><para\r\n> role=\"column_definition\">\r\n> > - <structfield>last_error_time</structfield> <type>timestamp with\r\n> time zone</type>\r\n> > + <structfield>sync_error_count</structfield> <type>uint8</type>\r\n> > </para>\r\n> > <para>\r\n> > - Last time at which this error occurred\r\n> > + Number of times the error occurred during the initial data\r\n> > + copy\r\n> > </para></entry>\r\n> >\r\n> > I supposed it might be better to use \"initial data sync\"\r\n> > or \"initial data synchronization\", rather than \"initial data copy\".\r\n> \r\n> \"Initial data synchronization\" sounds like the whole table synchronization\r\n> process including COPY and applying changes to catch up. But\r\n> sync_error_count is incremented only during COPY so I used \"initial data copy\".\r\n> What do you think?\r\nOkay. Please keep it as is.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 23 Feb 2022 01:13:45 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-22 14:45:19 +0900, Masahiko Sawada wrote:\n> I've attached a patch that changes pg_stat_subscription_workers view.\n\nThanks for working on this!\n\nWhy are the stats stored in the per-database stats file / as a second level\nbelow the database? While they're also associated with a database, it's a\nglobal catalog, so it seems to make more sense to have them \"live\" globally as\nwell?\n\nNot just from an aesthetical perspective, but there might also be cases where\nit's useful to send stats from the stats launcher. E.g. the number of times\nthe launcher couldn't start a worker because the max numbers of workers was\nalready active or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Feb 2022 18:14:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Feb 23, 2022 at 11:14 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-02-22 14:45:19 +0900, Masahiko Sawada wrote:\n> > I've attached a patch that changes pg_stat_subscription_workers view.\n>\n> Thanks for working on this!\n>\n> Why are the stats stored in the per-database stats file / as a second level\n> below the database? While they're also associated with a database, it's a\n> global catalog, so it seems to make more sense to have them \"live\" globally as\n> well?\n\nGood point. The reason why we used to use per-database stats file is\nthat we were storing some relation information there. But now that we\ndon't need to have such information, it makes more sense to have them\nlive globally. I'll change the patch accordingly.\n\n>\n> Not just from an aesthetical perspective, but there might also be cases where\n> it's useful to send stats from the stats launcher. E.g. the number of times\n> the launcher couldn't start a worker because the max numbers of workers was\n> already active or such.\n\nGood idea.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 23 Feb 2022 11:46:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 22, 2022 1:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached a patch that changes pg_stat_subscription_workers view.\r\n> It removes non-cumulative values such as error details such as\r\n> error-XID and the error message from the view, and consequently the\r\n> view now has only cumulative statistics counters: apply_error_count\r\n> and sync_error_count. Since the new view name is under discussion I\r\n> temporarily chose pg_stat_subscription_activity.\r\n> \r\n\r\nThanks for your patch.\r\n\r\nFew comments:\r\n\r\n1.\r\n+ <structfield>apply_error_count</structfield> <type>uint8</type>\r\n...\r\n+ <structfield>sync_error_count</structfield> <type>uint8</type>\r\n\r\nIt seems that Postgres has no data type named uint8, should we change it to\r\nbigint?\r\n\r\n2.\r\n+# Wait for the table sync error to be reported.\r\n+$node_subscriber->poll_query_until(\r\n+\t'postgres',\r\n+\tqq[\r\n+SELECT apply_error_count = 0 AND sync_error_count > 0\r\n+FROM pg_stat_subscription_activity\r\n+WHERE subname = 'tap_sub'\r\n+]) or die \"Timed out while waiting for table sync error\";\r\n\r\nWe want to check table sync error here, but do we need to check\r\n\"apply_error_count = 0\"? I am not sure if it is possible that the apply worker has\r\nan unexpected error, which would cause this test to fail.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Wed, 23 Feb 2022 03:00:08 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi. Below are my review comments for the v1 patch.\n\n======\n\n1. Commit message - wording\n\nAs the result of the discussion, we've concluded that the stats\ncollector is not an appropriate place to store the error information of\nsubscription workers.\n\nSUGGESTION\nIt was decided (refer to the Discussion link below) that the stats\ncollector is not an appropriate place to store the error information of\nsubscription workers.\n\n~~~\n\n2. Commit message - wording\n\nThis commits changes the view so that it stores only statistics\ncounters: apply_error_count and sync_error_count.\n\nSUGGESTION\n\"This commits changes the view\" --> \"This patch changes the\npg_stat_subscription_workers view\"\n\n~~~\n\n3. Commit message - wording\n\nRemoving these error details, since we don't need to record the\nerror information for apply workers and tablesync workers separately,\nthe view now has one entry per subscription.\n\nDID THIS MEAN...\nAfter removing these error details, there is no longer separate\ninformation for apply workers and tablesync workers, so the view now\nhas only one entry per subscription.\n\n--\n\nBut anyway, that is not entirely true either because those counters\nare separate information for the different workers, right?\n\n~~\n\n4. Commit message - wording\n\nAlso, it changes the view name to pg_stat_subscription_activity\nsince the word \"worker\" is an implementation detail that we use one\nworker for one tablesync.\n\nSUGGESTION\n\"Also, it changes\" --> \"The patch also changes\" ...\n\n~~~\n\n5. doc/src/sgml/monitoring.sgml - wording\n\n <para>\n- The <structname>pg_stat_subscription_workers</structname> view will contain\n- one row per subscription worker on which errors have occurred, for workers\n- applying logical replication changes and workers handling the initial data\n- copy of the subscribed tables. 
The statistics entry is removed when the\n+ The <structname>pg_stat_subscription_activity</structname> view will contain\n+ one row per subscription. The statistics entry is removed when the\n corresponding subscription is dropped.\n </para>\n\nSUGGESTION\n\"The statistics entry is removed\" --> \"This row is removed\"\n\n~~~\n\n6. doc/src/sgml/monitoring.sgml - why two counters?\n\nPlease forgive this noob question...\n\nI see there are 2 error_count columns (one for each kind of worker)\nbut I was wondering why it is useful for users to be able to\ndistinguish if the error came from the tablesync workers or from the\napply workers? Do you have any example?\n\nAlso, IIRC sometimes the tablesync might actually do a few \"apply\"\nchanges itself... so the distinction may become a bit fuzzy...\n\n~~~\n\n7. src/backend/postmaster/pgstat.c - comment\n\n@@ -1313,13 +1312,13 @@ pgstat_vacuum_stat(void)\n }\n\n /*\n- * Repeat for subscription workers. Similarly, we needn't bother in the\n- * common case where no subscription workers' stats are being collected.\n+ * Repeat for subscription. Similarly, we needn't bother in the common\n+ * case where no subscription stats are being collected.\n */\n\ntypo?\n\n\"Repeat for subscription.\" --> \"Repeat for subscriptions.\"\n\n~~~\n\n8. src/backend/postmaster/pgstat.c\n\n@@ -3000,32 +2968,29 @@ pgstat_fetch_stat_funcentry(Oid func_id)\n\n /*\n * ---------\n- * pgstat_fetch_stat_subworker_entry() -\n+ * pgstat_fetch_stat_subentry() -\n *\n * Support function for the SQL-callable pgstat* functions. 
Returns\n- * the collected statistics for subscription worker or NULL.\n+ * the collected statistics for subscription or NULL.\n * ---------\n */\n-PgStat_StatSubWorkerEntry *\n-pgstat_fetch_stat_subworker_entry(Oid subid, Oid subrelid)\n+PgStat_StatSubEntry *\n+pgstat_fetch_stat_subentry(Oid subid)\n\nThere seems some kind of inconsistency because the member name is\ncalled \"subscriptions\" but sometimes it seems singular.\n\nSome places (e.g. pgstat_vacuum_stat) will iterate multiple results,\nbut then other places (like this function) just return to a single\n\"subscription\" (or \"entry\").\n\nI suspect all the code may be fine; probably it is just some\ninconsistent (singular/plural) comments that have confused things a\nbit.\n\n~~~\n\n9. src/backend/replication/logical/worker.c - subscription id\n\n+ /* Report the worker failed during table synchronization */\n+ pgstat_report_subscription_error(MyLogicalRepWorker->subid, false);\n\nand\n\n+ /* Report the worker failed during the application of the change */\n+ pgstat_report_subscription_error(MyLogicalRepWorker->subid, true);\n\n\nWhy don't these use MySubscription->oid instead of MyLogicalRepWorker->subid?\n\n~~~\n\n10. src/include/pgstat.h - enum order\n\n@@ -84,8 +84,8 @@ typedef enum StatMsgType\n PGSTAT_MTYPE_REPLSLOT,\n PGSTAT_MTYPE_CONNECT,\n PGSTAT_MTYPE_DISCONNECT,\n+ PGSTAT_MTYPE_SUBSCRIPTIONERROR,\n PGSTAT_MTYPE_SUBSCRIPTIONPURGE,\n- PGSTAT_MTYPE_SUBWORKERERROR,\n } StatMsgType;\n\nThis change rearranges the enum order. Maybe it is safer not to do this?\n\n~~~\n\n11. src/include/pgstat.h\n\n@@ -767,8 +747,8 @@ typedef union PgStat_Msg\n PgStat_MsgReplSlot msg_replslot;\n PgStat_MsgConnect msg_connect;\n PgStat_MsgDisconnect msg_disconnect;\n+ PgStat_MsgSubscriptionError msg_subscriptionerror;\n PgStat_MsgSubscriptionPurge msg_subscriptionpurge;\n- PgStat_MsgSubWorkerError msg_subworkererror;\n } PgStat_Msg;\n\nThis change also rearranges the order. 
Maybe there was no good reason\nto do that?\n\n~~~\n\n12. src/include/pgstat.h - PgStat_StatDBEntry\n\n@@ -823,16 +803,12 @@ typedef struct PgStat_StatDBEntry\n TimestampTz stats_timestamp; /* time of db stats file update */\n\n /*\n- * tables, functions, and subscription workers must be last in the struct,\n- * because we don't write the pointers out to the stats file.\n- *\n- * subworkers is the hash table of PgStat_StatSubWorkerEntry which stores\n- * statistics of logical replication workers: apply worker and table sync\n- * worker.\n+ * tables, functions, and subscription must be last in the struct, because\n+ * we don't write the pointers out to the stats file.\n */\n\nShould that say \"tables, functions, and subscriptions\" (plural)\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 23 Feb 2022 14:07:45 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, Feb 22, 2022 at 8:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Feb 22, 2022 at 6:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > 3. Can we send error stats pgstat_report_stat() as that will be called\n> > via proc_exit() path. We can set the phase (apply/sync) in\n> > apply_error_callback_arg and then use that to send the appropriate\n> > message. I think this will obviate the need for try..catch.\n>\n> If we use pgstat_report_stat() to send subscription stats messages,\n> all processes end up going through that path. It might not bring\n> overhead in practice but I'd like to avoid it.\n>\n\nI am not sure about overhead but I see another problem if we use that\napproach. In the exit path, logicalrep_worker_onexit() will get called\nbefore pgstat_report_stat() and that will clear the\nMyLogicalRepWorker->subid, so we won't know the id for which to send\nstats. So, the way patch is doing seems reasonable to me unless\nsomeone has better ideas.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 23 Feb 2022 09:17:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Below are my review comments just for the v1 patch test code.\n\n======\n\n1. \"table sync error\" versus \"tablesync error\"\n\n+# Wait for the table sync error to be reported.\n+$node_subscriber->poll_query_until(\n+ 'postgres',\n+ qq[\n+SELECT apply_error_count = 0 AND sync_error_count > 0\n+FROM pg_stat_subscription_activity\n+WHERE subname = 'tap_sub'\n+]) or die \"Timed out while waiting for table sync error\";\n+\n+# Truncate test_tab1 so that table sync can continue.\n+$node_subscriber->safe_psql('postgres', \"TRUNCATE test_tab1;\");\n+\n # Wait for initial table sync for test_tab1 to finish.\n\nIMO all these \"table sync\" should be changed to \"tablesync\", because a\ntable \"sync error\" sounds like something completely different to a\n\"tablesync error\".\n\nSUGGESTIONS\n- \"Wait for the table sync error to be reported.\" --> \"Wait for the\ntablesync error to be reported.\"\n- \"Timed out while waiting for table sync error\" --> \"Timed out while\nwaiting for tablesync error\"\n- \"Truncate test_tab1 so that table sync can continue.\" --> \"Truncate\ntest_tab1 so that tablesync worker can fun to completion.\"\n- \"Wait for initial table sync for test_tab1 to finish.\" --> \"Wait for\nthe tablesync worker of test_tab1 to finish.\"\n\n~~~\n\n2. Unnecessary INSERT VALUES (2)?\n\n(I think this is a duplicate of what [Osumi] #6 reported)\n\n+# Insert more data to test_tab1 on the subscriber and then on the\npublisher, raising an\n+# error on the subscriber due to violation of the unique constraint\non test_tab1.\n+$node_subscriber->safe_psql('postgres', \"INSERT INTO test_tab1 VALUES (2)\");\n\nWhy does the test do INSERT data (2)? There is already data (1) from\nthe tablesync which will cause an apply worker PK violation when\nanother VALUES (1) is published.\n\nNote, the test comment also needs to change...\n\n~~~\n\n3. 
Wait for the apply worker error\n\n+# Wait for the apply error to be reported.\n+$node_subscriber->poll_query_until(\n+ 'postgres',\n+ qq[\n+SELECT apply_error_count > 0 AND sync_error_count > 0\n+FROM pg_stat_subscription_activity\n+WHERE subname = 'tap_sub'\n+]) or die \"Timed out while waiting for apply error\";\n\nThis test is only for apply worker errors. So why is the test SQL\nchecking \"AND sync_error_count > 0\"?\n\n(This is similar to what [Tang] #2 reported, but I think she was\nreferring to the other tablesync test)\n\n~~~\n\n4. Wrong worker?\n\n(looks like a duplicate of what [Osumi] #5 already)\n\n+\n+# Truncate test_tab1 so that table sync can continue.\n+$node_subscriber->safe_psql('postgres', \"TRUNCATE test_tab1;\");\n\n $node_subscriber->stop('fast');\n $node_publisher->stop('fast');\n\nCut/paste error? Aren't you doing TRUNCATE here so the apply worker\ncan continue; not the tablesync worker (which already completed)\n\n\"Truncate test_tab1 so that table sync can continue.\" --> \"Truncate\ntest_tab1 so that the apply worker can continue.\"\n\n------\n[Osumi] https://www.postgresql.org/message-id/CAD21AoBRt%3DcyKsZP83rcMkHnT498gHH0TEP34fZBrGCxT-Ahwg%40mail.gmail.com\n[Tang] https://www.postgresql.org/message-id/TYCPR01MB612840D018FEBD38268CC83BFB3C9%40TYCPR01MB6128.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 23 Feb 2022 17:20:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn Wed, Feb 23, 2022 at 12:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi. Below are my review comments for the v1 patch.\n\nThank you for the comments! I've attached the latest version patch\nthat incorporated all comments I got so far. The primary change from\nthe previous version is that the subscription statistics live globally\nrather than per-database.\n\n>\n> ======\n>\n> 1. Commit message - wording\n>\n> As the result of the discussion, we've concluded that the stats\n> collector is not an appropriate place to store the error information of\n> subscription workers.\n>\n> SUGGESTION\n> It was decided (refer to the Discussion link below) that the stats\n> collector is not an appropriate place to store the error information of\n> subscription workers.\n\nFixed.\n\n>\n> ~~~\n>\n> 2. Commit message - wording\n>\n> This commits changes the view so that it stores only statistics\n> counters: apply_error_count and sync_error_count.\n>\n> SUGGESTION\n> \"This commits changes the view\" --> \"This patch changes the\n> pg_stat_subscription_workers view\"\n\nFixed.\n\n>\n> ~~~\n>\n> 3. Commit message - wording\n>\n> Removing these error details, since we don't need to record the\n> error information for apply workers and tablesync workers separately,\n> the view now has one entry per subscription.\n>\n> DID THIS MEAN...\n> After removing these error details, there is no longer separate\n> information for apply workers and tablesync workers, so the view now\n> has only one entry per subscription.\n>\n> --\n>\n> But anyway, that is not entirely true either because those counters\n> are separate information for the different workers, right?\n\nRight. Since I also made the subscription statistics a cluster-wide\nstatistics, I've changed this part accordingly.\n\n>\n> ~~\n>\n> 4. 
Commit message - wording\n>\n> Also, it changes the view name to pg_stat_subscription_activity\n> since the word \"worker\" is an implementation detail that we use one\n> worker for one tablesync.\n>\n> SUGGESTION\n> \"Also, it changes\" --> \"The patch also changes\" ...\n\nFixed.\n\n>\n> ~~~\n>\n> 5. doc/src/sgml/monitoring.sgml - wording\n>\n> <para>\n> - The <structname>pg_stat_subscription_workers</structname> view will contain\n> - one row per subscription worker on which errors have occurred, for workers\n> - applying logical replication changes and workers handling the initial data\n> - copy of the subscribed tables. The statistics entry is removed when the\n> + The <structname>pg_stat_subscription_activity</structname> view will contain\n> + one row per subscription. The statistics entry is removed when the\n> corresponding subscription is dropped.\n> </para>\n>\n> SUGGESTION\n> \"The statistics entry is removed\" --> \"This row is removed\"\n\nOn second thoughts, this sentence is not necessary since it's obvious\nand descriptions of other stats view don't mention it.\n\n>\n> ~~~\n>\n> 6. doc/src/sgml/monitoring.sgml - why two counters?\n>\n> Please forgive this noob question...\n>\n> I see there are 2 error_count columns (one for each kind of worker)\n> but I was wondering why it is useful for users to be able to\n> distinguish if the error came from the tablesync workers or from the\n> apply workers? Do you have any example?\n>\n> Also, IIRC sometimes the tablesync might actually do a few \"apply\"\n> changes itself... so the distinction may become a bit fuzzy...\n\nI think that the tablesync phase and the apply phase can fail for\ndifferent reasons. So these values would be a good indicator for users\nto check if each phase works fine.\n\nAfter more thoughts, I think it's better to increment sync_error_count\nalso when a tablesync worker fails while applying the changes. 
These\ncounters will correspond to the error information entries that will be\nstored in a system catalog.\n\n>\n> ~~~\n>\n> 7. src/backend/postmaster/pgstat.c - comment\n>\n> @@ -1313,13 +1312,13 @@ pgstat_vacuum_stat(void)\n> }\n>\n> /*\n> - * Repeat for subscription workers. Similarly, we needn't bother in the\n> - * common case where no subscription workers' stats are being collected.\n> + * Repeat for subscription. Similarly, we needn't bother in the common\n> + * case where no subscription stats are being collected.\n> */\n>\n> typo?\n>\n> \"Repeat for subscription.\" --> \"Repeat for subscriptions.\"\n\nFixed.\n\n>\n> ~~~\n>\n> 8. src/backend/postmaster/pgstat.c\n>\n> @@ -3000,32 +2968,29 @@ pgstat_fetch_stat_funcentry(Oid func_id)\n>\n> /*\n> * ---------\n> - * pgstat_fetch_stat_subworker_entry() -\n> + * pgstat_fetch_stat_subentry() -\n> *\n> * Support function for the SQL-callable pgstat* functions. Returns\n> - * the collected statistics for subscription worker or NULL.\n> + * the collected statistics for subscription or NULL.\n> * ---------\n> */\n> -PgStat_StatSubWorkerEntry *\n> -pgstat_fetch_stat_subworker_entry(Oid subid, Oid subrelid)\n> +PgStat_StatSubEntry *\n> +pgstat_fetch_stat_subentry(Oid subid)\n>\n> There seems some kind of inconsistency because the member name is\n> called \"subscriptions\" but sometimes it seems singular.\n>\n> Some places (e.g. pgstat_vacuum_stat) will iterate multiple results,\n> but then other places (like this function) just return to a single\n> \"subscription\" (or \"entry\").\n>\n> I suspect all the code may be fine; probably it is just some\n> inconsistent (singular/plural) comments that have confused things a\n> bit.\n\nFixed.\n\n>\n> ~~~\n>\n> 9. 
src/backend/replication/logical/worker.c - subscription id\n>\n> + /* Report the worker failed during table synchronization */\n> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, false);\n>\n> and\n>\n> + /* Report the worker failed during the application of the change */\n> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, true);\n>\n>\n> Why don't these use MySubscription->oid instead of MyLogicalRepWorker->subid?\n\nIt's just because we used to use MyLogicalRepWorker->subid, is there\nany particular reason why we should use MySubscription->oid here?\n\n>\n> ~~~\n>\n> 10. src/include/pgstat.h - enum order\n>\n> @@ -84,8 +84,8 @@ typedef enum StatMsgType\n> PGSTAT_MTYPE_REPLSLOT,\n> PGSTAT_MTYPE_CONNECT,\n> PGSTAT_MTYPE_DISCONNECT,\n> + PGSTAT_MTYPE_SUBSCRIPTIONERROR,\n> PGSTAT_MTYPE_SUBSCRIPTIONPURGE,\n> - PGSTAT_MTYPE_SUBWORKERERROR,\n> } StatMsgType;\n>\n> This change rearranges the enum order. Maybe it is safer not to do this?\n>\n\nI assume you're concerned about binary compatibility or something. I\nthink it should not be a problem since both\nPGSTAT_MTYPE_SUBWORKERERROR and PGSTAT_MTYPE_SUBSCRIPTIONPURGE are\nintroduced to PG15.\n\n> ~~~\n>\n> 11. src/include/pgstat.h\n>\n> @@ -767,8 +747,8 @@ typedef union PgStat_Msg\n> PgStat_MsgReplSlot msg_replslot;\n> PgStat_MsgConnect msg_connect;\n> PgStat_MsgDisconnect msg_disconnect;\n> + PgStat_MsgSubscriptionError msg_subscriptionerror;\n> PgStat_MsgSubscriptionPurge msg_subscriptionpurge;\n> - PgStat_MsgSubWorkerError msg_subworkererror;\n> } PgStat_Msg;\n>\n> This change also rearranges the order. Maybe there was no good reason\n> to do that?\n\nIt's for keeping the alphabetical order within subscription-related messages.\n\n>\n> ~~~\n>\n> 12. 
src/include/pgstat.h - PgStat_StatDBEntry\n>\n> @@ -823,16 +803,12 @@ typedef struct PgStat_StatDBEntry\n> TimestampTz stats_timestamp; /* time of db stats file update */\n>\n> /*\n> - * tables, functions, and subscription workers must be last in the struct,\n> - * because we don't write the pointers out to the stats file.\n> - *\n> - * subworkers is the hash table of PgStat_StatSubWorkerEntry which stores\n> - * statistics of logical replication workers: apply worker and table sync\n> - * worker.\n> + * tables, functions, and subscription must be last in the struct, because\n> + * we don't write the pointers out to the stats file.\n> */\n>\n> Should that say \"tables, functions, and subscriptions\" (plural)\n\nThis part is removed in the latest patch.\n\nOn Wed, Feb 23, 2022 at 12:00 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n>\n> Thanks for your patch.\n>\n> Few comments:\n>\n> 1.\n> + <structfield>apply_error_count</structfield> <type>uint8</type>\n> ...\n> + <structfield>sync_error_count</structfield> <type>uint8</type>\n>\n> It seems that Postgres has no data type named uint8, should we change it to\n> bigint?\n\nRight, fixed.\n\n>\n> 2.\n> +# Wait for the table sync error to be reported.\n> +$node_subscriber->poll_query_until(\n> + 'postgres',\n> + qq[\n> +SELECT apply_error_count = 0 AND sync_error_count > 0\n> +FROM pg_stat_subscription_activity\n> +WHERE subname = 'tap_sub'\n> +]) or die \"Timed out while waiting for table sync error\";\n>\n> We want to check table sync error here, but do we need to check\n> \"apply_error_count = 0\"? I am not sure if it is possible that the apply worker has\n> an unexpected error, which would cause this test to fail.\n\nYeah, it seems better not to have this condition, fixed.\n\nOn Wed, Feb 23, 2022 at 3:21 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Below are my review comments just for the v1 patch test code.\n>\n> ======\n> 1. 
\"table sync error\" versus \"tablesync error\"\n>\n> +# Wait for the table sync error to be reported.\n> +$node_subscriber->poll_query_until(\n> + 'postgres',\n> + qq[\n> +SELECT apply_error_count = 0 AND sync_error_count > 0\n> +FROM pg_stat_subscription_activity\n> +WHERE subname = 'tap_sub'\n> +]) or die \"Timed out while waiting for table sync error\";\n> +\n> +# Truncate test_tab1 so that table sync can continue.\n> +$node_subscriber->safe_psql('postgres', \"TRUNCATE test_tab1;\");\n> +\n> # Wait for initial table sync for test_tab1 to finish.\n>\n> IMO all these \"table sync\" should be changed to \"tablesync\", because a\n> table \"sync error\" sounds like something completely different to a\n> \"tablesync error\".\n>\n> SUGGESTIONS\n> - \"Wait for the table sync error to be reported.\" --> \"Wait for the\n> tablesync error to be reported.\"\n> - \"Timed out while waiting for table sync error\" --> \"Timed out while\n> waiting for tablesync error\"\n> - \"Truncate test_tab1 so that table sync can continue.\" --> \"Truncate\n> test_tab1 so that tablesync worker can run to completion.\"\n> - \"Wait for initial table sync for test_tab1 to finish.\" --> \"Wait for\n> the tablesync worker of test_tab1 to finish.\"\n\nFixed.\n\n>\n> ~~~\n>\n> 3. Wait for the apply worker error\n>\n> +# Wait for the apply error to be reported.\n> +$node_subscriber->poll_query_until(\n> + 'postgres',\n> + qq[\n> +SELECT apply_error_count > 0 AND sync_error_count > 0\n> +FROM pg_stat_subscription_activity\n> +WHERE subname = 'tap_sub'\n> +]) or die \"Timed out while waiting for apply error\";\n>\n> This test is only for apply worker errors. So why is the test SQL\n> checking \"AND sync_error_count > 0\"?\n>\n> (This is similar to what [Tang] #2 reported, but I think she was\n> referring to the other tablesync test)\n\nFixed.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 24 Feb 2022 10:32:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 at 7:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > ~~~\n> >\n> > 6. doc/src/sgml/monitoring.sgml - why two counters?\n> >\n> > Please forgive this noob question...\n> >\n> > I see there are 2 error_count columns (one for each kind of worker)\n> > but I was wondering why it is useful for users to be able to\n> > distinguish if the error came from the tablesync workers or from the\n> > apply workers? Do you have any example?\n> >\n> > Also, IIRC sometimes the tablesync might actually do a few \"apply\"\n> > changes itself... so the distinction may become a bit fuzzy...\n>\n> I think that the tablesync phase and the apply phase can fail for\n> different reasons. So these values would be a good indicator for users\n> to check if each phase works fine.\n>\n> After more thoughts, I think it's better to increment sync_error_count\n> also when a tablesync worker fails while applying the changes.\n>\n\nThis sounds reasonable to me because even if we are applying the\nchanges in tablesync worker, it is only for that particular table. So,\nit seems okay to increment it under category with the description:\n\"Number of times the error occurred during the initial table\nsynchronization\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Feb 2022 08:48:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 9:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Thank you for the comments! I've attached the latest version patch\r\n> that incorporated all comments I got so far. The primary change from\r\n> the previous version is that the subscription statistics live globally\r\n> rather than per-database.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nFew comments:\r\n\r\n1. \r\nI think we should add some doc for column stats_reset in pg_stat_subscription_activity view.\r\n\r\n2.\r\n+CREATE VIEW pg_stat_subscription_activity AS\r\n SELECT\r\n- w.subid,\r\n+ a.subid,\r\n s.subname,\r\n...\r\n+ a.apply_error_count,\r\n+ a.sync_error_count,\r\n+\ta.stats_reset\r\n+ FROM pg_subscription as s,\r\n+ pg_stat_get_subscription_activity(oid) as a;\r\n\r\nThe line \"a.stats_reset\" uses a Tab, and we'd better use spaces here.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Thu, 24 Feb 2022 07:20:49 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi. Below are my review comments for the v2 patch.\n\n======\n\n1. Commit message\n\nThis patch changes the pg_stat_subscription_workers view (introduced\nby commit 8d74fc9) so that it stores only statistics counters:\napply_error_count and sync_error_count, and has one entry for\nsubscription.\n\nSUGGESTION\n\"and has one entry for subscription.\" --> \"and has one entry for each\nsubscription.\"\n\n~~~\n\n2. Commit message\n\nAfter removing these error details, there are no longer relation\ninformation, so the subscription statistics are now a cluster-wide\nstatistics.\n\nSUGGESTION\n\"there are no longer relation information,\" --> \"there is no longer\nany relation information,\"\n\n~~~\n\n3. doc/src/sgml/monitoring.sgml\n\n- <para>\n- The error message\n+ Number of times the error occurred during the application of changes\n </para></entry>\n </row>\n\n <row>\n <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n- <structfield>last_error_time</structfield> <type>timestamp\nwith time zone</type>\n+ <structfield>sync_error_count</structfield> <type>bigint</type>\n </para>\n <para>\n- Last time at which this error occurred\n+ Number of times the error occurred during the initial table\n+ synchronization\n </para></entry>\n\nSUGGESTION (both places)\n\"Number of times the error occurred ...\" --> \"Number of times an error\noccurred ...\"\n\n~~~\n\n4. doc/src/sgml/monitoring.sgml - missing column\n\n(duplicate - also reported by [Tang-v2])\n\nThe PG docs for the new \"stats_reset\" column are missing.\n\n~~~\n\n5. src/backend/catalog/system_views.sql - whitespace\n\n(duplicate - also reported by [Tang-v2])\n\n- JOIN pg_subscription s ON (w.subid = s.oid);\n+ a.apply_error_count,\n+ a.sync_error_count,\n+ a.stats_reset\n+ FROM pg_subscription as s,\n+ pg_stat_get_subscription_activity(oid) as a;\n\ninconsistent tab/space indenting for 'a.stats_reset'.\n\n~~~\n\n6. 
src/backend/postmaster/pgstat.c - function name\n\n+/* ----------\n+ * pgstat_reset_subscription_counter() -\n+ *\n+ * Tell the statistics collector to reset a single subscription\n+ * counter, or all subscription counters (when subid is InvalidOid).\n+ *\n+ * Permission checking for this function is managed through the normal\n+ * GRANT system.\n+ * ----------\n+ */\n+void\n+pgstat_reset_subscription_counter(Oid subid)\n\nSUGGESTION (function name)\n\"pgstat_reset_subscription_counter\" -->\n\"pgstat_reset_subscription_counters\" (plural)\n\n~~\n\n7. src/backend/postmaster/pgstat.c - pgstat_recv_resetsubcounter\n\n@@ -5645,6 +5598,51 @@\npgstat_recv_resetreplslotcounter(PgStat_MsgResetreplslotcounter *msg,\n }\n }\n\n+/* ----------\n+ * pgstat_recv_resetsubcounter() -\n+ *\n+ * Reset some subscription statistics of the cluster.\n+ * ----------\n+ */\n+static void\n+pgstat_recv_resetsubcounter(PgStat_MsgResetsubcounter *msg, int len)\n\n\n\"Reset some\" seems a bit vague. Why not describe that it is all or\nnone according to the msg->m_subid?\n\n~~~\n\n8. src/backend/postmaster/pgstat.c - pgstat_recv_resetsubcounter\n\n+ if (!OidIsValid(msg->m_subid))\n+ {\n+ HASH_SEQ_STATUS sstat;\n+\n+ /* Clear all subscription counters */\n+ hash_seq_init(&sstat, subscriptionStatHash);\n+ while ((subentry = (PgStat_StatSubEntry *) hash_seq_search(&sstat)) != NULL)\n+ pgstat_reset_subscription(subentry, ts);\n+ }\n+ else\n+ {\n+ /* Get the subscription statistics to reset */\n+ subentry = pgstat_get_subscription_entry(msg->m_subid, false);\n+\n+ /*\n+ * Nothing to do if the given subscription entry is not found. 
This\n+ * could happen when the subscription with the subid is removed and\n+ * the corresponding statistics entry is also removed before receiving\n+ * the reset message.\n+ */\n+ if (!subentry)\n+ return;\n+\n+ /* Reset the stats for the requested replication slot */\n+ pgstat_reset_subscription(subentry, ts);\n+ }\n+}\n\nWhy not reverse the if/else?\n\nChecking OidIsValid(...) seems more natural than checking !OidIsValid(...)\n\n~~~\n\n9. src/backend/postmaster/pgstat.c - pgstat_recv_subscription_purge\n\nstatic void\npgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n{\n/* Return if we don't have replication subscription statistics */\nif (subscriptionStatHash == NULL)\nreturn;\n\n/* Remove from hashtable if present; we don't care if it's not */\n(void) hash_search(subscriptionStatHash, (void *) &(msg->m_subid),\n HASH_REMOVE, NULL);\n}\n\nSUGGESTION\nWouldn't the above code be simpler written like:\n\nif (subscriptionStatHash)\n{\n/* Remove from hashtable if present; we don't care if it's not */\n(void) hash_search(subscriptionStatHash, (void *) &(msg->m_subid),\n HASH_REMOVE, NULL);\n}\n~~~\n\n10. src/backend/replication/logical/worker.c\n\n(from my previous [Peter-v1] #9)\n\n>> + /* Report the worker failed during table synchronization */\n>> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, false);\n>>\n>> and\n>>\n>> + /* Report the worker failed during the application of the change */\n>> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, true);\n>>\n>>\n>> Why don't these use MySubscription->oid instead of MyLogicalRepWorker->subid?\n\n> It's just because we used to use MyLogicalRepWorker->subid, is there\n> any particular reason why we should use MySubscription->oid here?\n\nI felt MySubscription->oid is a more natural and more direct way of\nexpressing the same thing.\n\nConsider: \"the oid of the current subscription\" versus \"the oid of\nthe subscription of the current worker\". 
IMO the first one is simpler.\nYMMV.\n\nAlso, it is shorter :)\n\n~~~\n\n11. src/include/pgstat.h - enum order\n\n(follow-on from my previous v1 review comment #10)\n\n>I assume you're concerned about binary compatibility or something. I\n>think it should not be a problem since both\n>PGSTAT_MTYPE_SUBWORKERERROR and PGSTAT_MTYPE_SUBSCRIPTIONPURGE are\n>introduced to PG15.\n\nYes, maybe it is OK for those ones. But now in v2 there is a new\nPGSTAT_MTYPE_RESETSUBCOUNTER.\n\nShouldn't at least that one be put at the end for the same reason?\n\n~~~\n\n12. src/include/pgstat.h - PgStat_MsgResetsubcounter\n\nMaybe a better name for this is \"PgStat_MsgResetsubcounters\" (plural)?\n\n~~~\n\n\n13. src/test/subscription/t/026_worker_stats.pl - missing test?\n\nShouldn't there also be some test to reset the counters to confirm\nthat they really do get reset to zero?\n\n~~~\n\n14. src/tools/pgindent/typedefs.list\n\nPgStat_MsgResetsubcounter (from pgstat.h) is missing?\n\n------\n\n15. pg_stat_subscription_activity view name?\n\nHas the view name already been decided or still under discussion - I\nwas not sure?\n\nIf is it already decided then fine, but if not then my vote would be\nfor something different like:\ne.g.1 - pg_stat_subscription_errors\ne.g.2 - pg_stat_subscription_counters\ne.g.3 - pg_stat_subscription_metrics\n\nMaybe \"activity\" was chosen to be deliberately vague in case some\nfuture unknown stats columns get added? But it means now there is a\ncorresponding function \"pg_stat_reset_subscription_activity\", when in\npractice you don't really reset activity - what you want to do is\nreset some statistics *about* the activity... 
so it all seems a bit\nodd to me.\n\n------\n[Tang-v2] https://www.postgresql.org/message-id/OS0PR01MB6113769B17E90ADC9ACA14B2FB3D9%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n[Peter-v1] https://www.postgresql.org/message-id/CAHut%2BPtH-uN5rbGRh-%3DkCd8xvQYDf_JCcjLcVjW3OXGz6T%2BxCw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 24 Feb 2022 19:53:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On 21.02.22 17:17, Andres Freund wrote:\n> Hi,\n> \n> On 2022-02-21 14:49:01 +0530, Amit Kapila wrote:\n>> On Mon, Feb 21, 2022 at 1:18 PM Andres Freund <andres@anarazel.de> wrote:\n>>>> * stats_reset (Time at which these statistics were last reset)\n>>>>\n>>>> The view name could be pg_stat_subscription_lrep,\n>>>> pg_stat_logical_replication, or something on those lines.\n>>>\n>>> pg_stat_subscription_stats :)\n> \n>> Having *stat* two times in the name sounds slightly odd to me but let\n>> us see what others think. One more option could be\n>> pg_stat_subscription_replication.\n> \n> It was a joke, making light of our bad naming in pg_stat_*, not a serious\n> suggestion...\n\nI think pg_stat_subscription_stats is actually the least worst option.\n\nUnless we want to consider renaming pg_stat_subscription (which is \nactually more like pg_stat_subscription_activity). But I think that \nshould be avoided.\n\n\n\n",
"msg_date": "Thu, 24 Feb 2022 10:20:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On 23.02.22 03:14, Andres Freund wrote:\n> Why are the stats stored in the per-database stats file / as a second level\n> below the database? While they're also associated with a database, it's a\n> global catalog, so it seems to make more sense to have them \"live\" globally as\n> well?\n\npg_subscription being a global catalog is a bit of a lie for the benefit \nof the worker launcher, but it can be treated as a per-database catalog \nfor practical purposes.\n\n> Not just from an aesthetical perspective, but there might also be cases where\n> it's useful to send stats from the stats launcher. E.g. the number of times\n> the launcher couldn't start a worker because the max numbers of workers was\n> already active or such.\n\nThat's a reasonable point, however.\n\n\n",
"msg_date": "Thu, 24 Feb 2022 10:48:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On 24.02.22 02:32, Masahiko Sawada wrote:\n> On Wed, Feb 23, 2022 at 12:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> Hi. Below are my review comments for the v1 patch.\n> \n> Thank you for the comments! I've attached the latest version patch\n> that incorporated all comments I got so far. The primary change from\n> the previous version is that the subscription statistics live globally\n> rather than per-database.\n\nI don't think the name pg_stat_subscription_activity is a good choice.\n\nWe have a view called pg_stat_activity, which is very well known. From \nthat perspective, \"activity\" means what is happening right now or what \nhas happened most recently. The reworked view in this patch does not \ncontain that (we already have pg_stat_subscription for that), but it \ncontains accumulated counters.\n\n\n",
"msg_date": "Thu, 24 Feb 2022 10:53:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 at 6:53 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 24.02.22 02:32, Masahiko Sawada wrote:\n> > On Wed, Feb 23, 2022 at 12:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >>\n> >> Hi. Below are my review comments for the v1 patch.\n> >\n> > Thank you for the comments! I've attached the latest version patch\n> > that incorporated all comments I got so far. The primary change from\n> > the previous version is that the subscription statistics live globally\n> > rather than per-database.\n>\n> I don't think the name pg_stat_subscription_activity is a good choice.\n>\n> We have a view called pg_stat_activity, which is very well known. From\n> that perspective, \"activity\" means what is happening right now or what\n> has happened most recently. The reworked view in this patch does not\n> contain that (we already have pg_stat_subscription for that), but it\n> contains accumulated counters.\n\nRight.\n\nWhat pg_stat_subscription shows is rather suitable for the name\npg_stat_subscription_activity than the reworked view. But switching\nthese names would also not be a good idea. I think it's better to use\n\"subscription\" in the view name since it shows actually statistics for\nsubscriptions and subscription OID is the key. I personally prefer\npg_stat_subscription_counters among the ideas that have been proposed\nso far, but I'd like to hear opinions and votes.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 24 Feb 2022 20:46:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 at 2:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 9. src/backend/postmaster/pgstat.c - pgstat_recv_subscription_purge\n>\n> static void\n> pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n> {\n> /* Return if we don't have replication subscription statistics */\n> if (subscriptionStatHash == NULL)\n> return;\n>\n> /* Remove from hashtable if present; we don't care if it's not */\n> (void) hash_search(subscriptionStatHash, (void *) &(msg->m_subid),\n> HASH_REMOVE, NULL);\n> }\n>\n> SUGGESTION\n> Wouldn't the above code be simpler written like:\n>\n> if (subscriptionStatHash)\n> {\n> /* Remove from hashtable if present; we don't care if it's not */\n> (void) hash_search(subscriptionStatHash, (void *) &(msg->m_subid),\n> HASH_REMOVE, NULL);\n> }\n> ~~~\n>\n\nI think we can write that way as well but I would prefer the way it is\ncurrently in the patch as we use a similar pattern in nearby code (ex.\npgstat_recv_resetreplslotcounter) and at other places in the code base\nas well.\n\n\n> 10. src/backend/replication/logical/worker.c\n>\n> (from my previous [Peter-v1] #9)\n>\n> >> + /* Report the worker failed during table synchronization */\n> >> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, false);\n> >>\n> >> and\n> >>\n> >> + /* Report the worker failed during the application of the change */\n> >> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, true);\n> >>\n> >>\n> >> Why don't these use MySubscription->oid instead of MyLogicalRepWorker->subid?\n>\n> > It's just because we used to use MyLogicalRepWorker->subid, is there\n> > any particular reason why we should use MySubscription->oid here?\n>\n> I felt MySubscription->oid is a more natural and more direct way of\n> expressing the same thing.\n>\n> Consider: \"the oid of the current subscription\" versus \"the oid of\n> the subscription of the current worker\". 
IMO the first one is simpler.\n> YMMV.\n>\n\nI think we can use either but maybe MySubscription->oid would be\nslightly better here as the same is used in nearby code as well.\n\n> Also, it is shorter :)\n>\n> ~~~\n>\n> 11. src/include/pgstat.h - enum order\n>\n> (follow-on from my previous v1 review comment #10)\n>\n> >I assume you're concerned about binary compatibility or something. I\n> >think it should not be a problem since both\n> >PGSTAT_MTYPE_SUBWORKERERROR and PGSTAT_MTYPE_SUBSCRIPTIONPURGE are\n> >introduced to PG15.\n>\n> Yes, maybe it is OK for those ones. But now in v2 there is a new\n> PGSTAT_MTYPE_RESETSUBCOUNTER.\n>\n> Shouldn't at least that one be put at the end for the same reason?\n>\n> ~~~\n>\n\nI don't see the reason to put that at end. It is better to add it near\nto similar RESET enums.\n\n>\n> 13. src/test/subscription/t/026_worker_stats.pl - missing test?\n>\n> Shouldn't there also be some test to reset the counters to confirm\n> that they really do get reset to zero?\n>\n> ~~~\n>\n\nI think we avoid writing tests for stats for each and every case as it\nis not reliable in nature (the message can be lost). If we can find a\nreliable way then it is okay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Feb 2022 17:35:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 at 9:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 24, 2022 at 2:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > 10. src/backend/replication/logical/worker.c\n> >\n> > (from my previous [Peter-v1] #9)\n> >\n> > >> + /* Report the worker failed during table synchronization */\n> > >> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, false);\n> > >>\n> > >> and\n> > >>\n> > >> + /* Report the worker failed during the application of the change */\n> > >> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, true);\n> > >>\n> > >>\n> > >> Why don't these use MySubscription->oid instead of MyLogicalRepWorker->subid?\n> >\n> > > It's just because we used to use MyLogicalRepWorker->subid, is there\n> > > any particular reason why we should use MySubscription->oid here?\n> >\n> > I felt MySubscription->oid is a more natural and more direct way of\n> > expressing the same thing.\n> >\n> > Consider: \"the oid of the current subscription\" versus \"the oid of\n> > the subscription of the current worker\". IMO the first one is simpler.\n> > YMMV.\n> >\n>\n> I think we can use either but maybe MySubscription->oid would be\n> slightly better here as the same is used in nearby code as well.\n\nOkay, will change.\n\n> >\n> > 13. src/test/subscription/t/026_worker_stats.pl - missing test?\n> >\n> > Shouldn't there also be some test to reset the counters to confirm\n> > that they really do get reset to zero?\n> >\n> > ~~~\n> >\n>\n> I think we avoid writing tests for stats for each and every case as it\n> is not reliable in nature (the message can be lost). If we can find a\n> reliable way then it is okay.\n\nYeah, the messages can even be out-of-order. Particularly, in this\ntest, the apply worker and table sync worker keep reporting the\nmessages, it's quite possible that the test becomes unstable. 
I\nremember we removed unstable tests of resetting statistics before\n(e.g., see fc6950913).\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 24 Feb 2022 21:17:55 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "\nOn 24.02.22 12:46, Masahiko Sawada wrote:\n>> We have a view called pg_stat_activity, which is very well known. From\n>> that perspective, \"activity\" means what is happening right now or what\n>> has happened most recently. The reworked view in this patch does not\n>> contain that (we already have pg_stat_subscription for that), but it\n>> contains accumulated counters.\n> Right.\n> \n> What pg_stat_subscription shows is rather suitable for the name\n> pg_stat_subscription_activity than the reworked view. But switching\n> these names would also not be a good idea. I think it's better to use\n> \"subscription\" in the view name since it shows actually statistics for\n> subscriptions and subscription OID is the key. I personally prefer\n> pg_stat_subscription_counters among the ideas that have been proposed\n> so far, but I'd like to hear opinions and votes.\n\n_counters will fail if there is something not a counter (such as \nlast-timestamp-of-something).\n\nEarlier, pg_stat_subscription_stats was mentioned, which doesn't have \nthat problem.\n\n\n",
"msg_date": "Thu, 24 Feb 2022 13:23:55 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 at 9:23 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n>\n> On 24.02.22 12:46, Masahiko Sawada wrote:\n> >> We have a view called pg_stat_activity, which is very well known. From\n> >> that perspective, \"activity\" means what is happening right now or what\n> >> has happened most recently. The reworked view in this patch does not\n> >> contain that (we already have pg_stat_subscription for that), but it\n> >> contains accumulated counters.\n> > Right.\n> >\n> > What pg_stat_subscription shows is rather suitable for the name\n> > pg_stat_subscription_activity than the reworked view. But switching\n> > these names would also not be a good idea. I think it's better to use\n> > \"subscription\" in the view name since it shows actually statistics for\n> > subscriptions and subscription OID is the key. I personally prefer\n> > pg_stat_subscription_counters among the ideas that have been proposed\n> > so far, but I'd like to hear opinions and votes.\n>\n> _counters will fail if there is something not a counter (such as\n> last-timestamp-of-something).\n>\n> Earlier, pg_stat_subscription_stats was mentioned, which doesn't have\n> that problem.\n\nAh, I had misunderstood your comment. Right, _counter could be a\nblocker for the future changes.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 24 Feb 2022 21:51:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-24 13:23:55 +0100, Peter Eisentraut wrote:\n> On 24.02.22 12:46, Masahiko Sawada wrote:\n> > > We have a view called pg_stat_activity, which is very well known. From\n> > > that perspective, \"activity\" means what is happening right now or what\n> > > has happened most recently. The reworked view in this patch does not\n> > > contain that (we already have pg_stat_subscription for that), but it\n> > > contains accumulated counters.\n> > Right.\n> > \n> > What pg_stat_subscription shows is rather suitable for the name\n> > pg_stat_subscription_activity than the reworked view. But switching\n> > these names would also not be a good idea. I think it's better to use\n> > \"subscription\" in the view name since it shows actually statistics for\n> > subscriptions and subscription OID is the key. I personally prefer\n> > pg_stat_subscription_counters among the ideas that have been proposed\n> > so far, but I'd like to hear opinions and votes.\n> \n> _counters will fail if there is something not a counter (such as\n> last-timestamp-of-something).\n> \n> Earlier, pg_stat_subscription_stats was mentioned, which doesn't have that\n> problem.\n\nWe really should try to fix this in a more general way at some point. We have\nway too many different things mixed up in pg_stat_*.\n\nI'd like to get something like the patch in soon though, we can still change\nthe name later. I've been blocked behind this stuff for weeks, and it's\ngetting really painful...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 24 Feb 2022 06:45:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nThank you for the comments!\n\nOn Thu, Feb 24, 2022 at 4:20 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Thu, Feb 24, 2022 9:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for the comments! I've attached the latest version patch\n> > that incorporated all comments I got so far. The primary change from\n> > the previous version is that the subscription statistics live globally\n> > rather than per-database.\n> >\n>\n> Thanks for updating the patch.\n>\n> Few comments:\n>\n> 1.\n> I think we should add some doc for column stats_reset in pg_stat_subscription_activity view.\n\nAdded.\n\n>\n> 2.\n> +CREATE VIEW pg_stat_subscription_activity AS\n> SELECT\n> - w.subid,\n> + a.subid,\n> s.subname,\n> ...\n> + a.apply_error_count,\n> + a.sync_error_count,\n> + a.stats_reset\n> + FROM pg_subscription as s,\n> + pg_stat_get_subscription_activity(oid) as a;\n>\n> The line \"a.stats_reset\" uses a Tab, and we'd better use spaces here.\n\nFixed.\n\nOn Thu, Feb 24, 2022 at 5:54 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi. Below are my review comments for the v2 patch.\n>\n> ======\n>\n> 1. Commit message\n>\n> This patch changes the pg_stat_subscription_workers view (introduced\n> by commit 8d74fc9) so that it stores only statistics counters:\n> apply_error_count and sync_error_count, and has one entry for\n> subscription.\n>\n> SUGGESTION\n> \"and has one entry for subscription.\" --> \"and has one entry for each\n> subscription.\"\n\nFixed.\n\n>\n> ~~~\n>\n> 2. Commit message\n>\n> After removing these error details, there are no longer relation\n> information, so the subscription statistics are now a cluster-wide\n> statistics.\n>\n> SUGGESTION\n> \"there are no longer relation information,\" --> \"there is no longer\n> any relation information,\"\n\nFixed.\n\n>\n> ~~~\n>\n> 3. 
doc/src/sgml/monitoring.sgml\n>\n> - <para>\n> - The error message\n> + Number of times the error occurred during the application of changes\n> </para></entry>\n> </row>\n>\n> <row>\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> - <structfield>last_error_time</structfield> <type>timestamp\n> with time zone</type>\n> + <structfield>sync_error_count</structfield> <type>bigint</type>\n> </para>\n> <para>\n> - Last time at which this error occurred\n> + Number of times the error occurred during the initial table\n> + synchronization\n> </para></entry>\n>\n> SUGGESTION (both places)\n> \"Number of times the error occurred ...\" --> \"Number of times an error\n> occurred ...\"\n\nFixed.\n\n>\n> ~~~\n>\n> 4. doc/src/sgml/monitoring.sgml - missing column\n>\n> (duplicate - also reported by [Tang-v2])\n>\n> The PG docs for the new \"stats_reset\" column are missing.\n>\n> ~~~\n>\n> 5. src/backend/catalog/system_views.sql - whitespace\n>\n> (duplicate - also reported by [Tang-v2])\n>\n> - JOIN pg_subscription s ON (w.subid = s.oid);\n> + a.apply_error_count,\n> + a.sync_error_count,\n> + a.stats_reset\n> + FROM pg_subscription as s,\n> + pg_stat_get_subscription_activity(oid) as a;\n>\n> inconsistent tab/space indenting for 'a.stats_reset'.\n>\n> ~~~\n>\n> 6. src/backend/postmaster/pgstat.c - function name\n>\n> +/* ----------\n> + * pgstat_reset_subscription_counter() -\n> + *\n> + * Tell the statistics collector to reset a single subscription\n> + * counter, or all subscription counters (when subid is InvalidOid).\n> + *\n> + * Permission checking for this function is managed through the normal\n> + * GRANT system.\n> + * ----------\n> + */\n> +void\n> +pgstat_reset_subscription_counter(Oid subid)\n>\n> SUGGESTION (function name)\n> \"pgstat_reset_subscription_counter\" -->\n> \"pgstat_reset_subscription_counters\" (plural)\n\nFixed.\n\n>\n> ~~\n>\n> 7. 
src/backend/postmaster/pgstat.c - pgstat_recv_resetsubcounter\n>\n> @@ -5645,6 +5598,51 @@\n> pgstat_recv_resetreplslotcounter(PgStat_MsgResetreplslotcounter *msg,\n> }\n> }\n>\n> +/* ----------\n> + * pgstat_recv_resetsubcounter() -\n> + *\n> + * Reset some subscription statistics of the cluster.\n> + * ----------\n> + */\n> +static void\n> +pgstat_recv_resetsubcounter(PgStat_MsgResetsubcounter *msg, int len)\n>\n>\n> \"Reset some\" seems a bit vague. Why not describe that it is all or\n> none according to the msg->m_subid?\n\nI think it reset none, one, or all statistics, actually. Given other\npgstat_recv_reset* functions also have similar comments, I think we\ncan use it rather than elaborating.\n\n>\n> ~~~\n>\n> 8. src/backend/postmaster/pgstat.c - pgstat_recv_resetsubcounter\n>\n> + if (!OidIsValid(msg->m_subid))\n> + {\n> + HASH_SEQ_STATUS sstat;\n> +\n> + /* Clear all subscription counters */\n> + hash_seq_init(&sstat, subscriptionStatHash);\n> + while ((subentry = (PgStat_StatSubEntry *) hash_seq_search(&sstat)) != NULL)\n> + pgstat_reset_subscription(subentry, ts);\n> + }\n> + else\n> + {\n> + /* Get the subscription statistics to reset */\n> + subentry = pgstat_get_subscription_entry(msg->m_subid, false);\n> +\n> + /*\n> + * Nothing to do if the given subscription entry is not found. This\n> + * could happen when the subscription with the subid is removed and\n> + * the corresponding statistics entry is also removed before receiving\n> + * the reset message.\n> + */\n> + if (!subentry)\n> + return;\n> +\n> + /* Reset the stats for the requested replication slot */\n> + pgstat_reset_subscription(subentry, ts);\n> + }\n> +}\n>\n> Why not reverse the if/else?\n>\n> Checking OidIsValid(...) seems more natural than checking !OidIsValid(...)\n\nYes, but it's because we use the same pattern in the near function\n(see pgstat_recv_resetreplslotcounter()).\n\n>\n> ~~~\n>\n> 9. 
src/backend/postmaster/pgstat.c - pgstat_recv_subscription_purge\n>\n> static void\n> pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n> {\n> /* Return if we don't have replication subscription statistics */\n> if (subscriptionStatHash == NULL)\n> return;\n>\n> /* Remove from hashtable if present; we don't care if it's not */\n> (void) hash_search(subscriptionStatHash, (void *) &(msg->m_subid),\n> HASH_REMOVE, NULL);\n> }\n>\n> SUGGESTION\n> Wouldn't the above code be simpler written like:\n>\n> if (subscriptionStatHash)\n> {\n> /* Remove from hashtable if present; we don't care if it's not */\n> (void) hash_search(subscriptionStatHash, (void *) &(msg->m_subid),\n> HASH_REMOVE, NULL);\n> }\n\nSimilarly, as Amit also mentioned, there is a similar pattern in the\nnear function. So keep it as it is\n\n> ~~~\n>\n> 10. src/backend/replication/logical/worker.c\n>\n> (from my previous [Peter-v1] #9)\n>\n> >> + /* Report the worker failed during table synchronization */\n> >> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, false);\n> >>\n> >> and\n> >>\n> >> + /* Report the worker failed during the application of the change */\n> >> + pgstat_report_subscription_error(MyLogicalRepWorker->subid, true);\n> >>\n> >>\n> >> Why don't these use MySubscription->oid instead of MyLogicalRepWorker->subid?\n>\n> > It's just because we used to use MyLogicalRepWorker->subid, is there\n> > any particular reason why we should use MySubscription->oid here?\n>\n> I felt MySubscription->oid is a more natural and more direct way of\n> expressing the same thing.\n>\n> Consider: \"the oid of the current subscription\" versus \"the oid of\n> the subscription of the current worker\". IMO the first one is simpler.\n> YMMV.\n>\n> Also, it is shorter :)\n\nChanged.\n\n>\n> ~~~\n>\n> 11. src/include/pgstat.h - enum order\n>\n> (follow-on from my previous v1 review comment #10)\n>\n> >I assume you're concerned about binary compatibility or something. 
I\n> >think it should not be a problem since both\n> >PGSTAT_MTYPE_SUBWORKERERROR and PGSTAT_MTYPE_SUBSCRIPTIONPURGE are\n> >introduced to PG15.\n>\n> Yes, maybe it is OK for those ones. But now in v2 there is a new\n> PGSTAT_MTYPE_RESETSUBCOUNTER.\n>\n> Shouldn't at least that one be put at the end for the same reason?\n\nI think it's better to put it near similar RESET enums.\n\n>\n> ~~~\n>\n> 12. src/include/pgstat.h - PgStat_MsgResetsubcounter\n>\n> Maybe a better name for this is \"PgStat_MsgResetsubcounters\" (plural)?\n\nI think it's better to be consistent with other similar message\nstructs (e.g., msg_resetsharedcounter and msg_resetslrucounter).\n\n>\n> ~~~\n>\n> 14. src/tools/pgindent/typedefs.list\n>\n> PgStat_MsgResetsubcounter (from pgstat.h) is missing?\n>\n\nAdded.\n\n> ------\n>\n> 15. pg_stat_subscription_activity view name?\n>\n> Has the view name already been decided or still under discussion - I\n> was not sure?\n>\n> If is it already decided then fine, but if not then my vote would be\n> for something different like:\n> e.g.1 - pg_stat_subscription_errors\n> e.g.2 - pg_stat_subscription_counters\n> e.g.3 - pg_stat_subscription_metrics\n>\n> Maybe \"activity\" was chosen to be deliberately vague in case some\n> future unknown stats columns get added? But it means now there is a\n> corresponding function \"pg_stat_reset_subscription_activity\", when in\n> practice you don't really reset activity - what you want to do is\n> reset some statistics *about* the activity... so it all seems a bit\n> odd to me.\n\nYes, it still needs discussion.\n\nI've attached the updated patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 25 Feb 2022 00:49:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Below are my review comments for the v3 patch.\n\n======\n\n1. Commit message\n\n(An earlier review comment [Peter-v2] #2 was only partly fixed)\n\n\"there are no longer any relation information\" --> \"there is no longer\nany relation information\"\n\n~~~\n\n2. doc/src/sgml/monitoring.sgml\n\n+ <entry><structname>pg_stat_subscription_activity</structname><indexterm><primary>pg_stat_subscription_activity</primary></indexterm></entry>\n+ <entry>One row per subscription, showing statistics about subscription\n+ activity.\n+ See <link linkend=\"monitoring-pg-stat-subscription-activity\">\n+ <structname>pg_stat_subscription_activity</structname></link>\nfor details.\n </entry>\n </row>\n\nCurrently these stats are only about errors. These are not really\nstatistics about \"activity\" though. Probably it is better just to\navoid that word altogether?\n\nSUGGESTIONS\n\ne.g.1. \"One row per subscription, showing statistics about\nsubscription activity.\" --> \"One row per subscription, showing\nstatistics about errors.\"\n\ne.g.2. \"One row per subscription, showing statistics about\nsubscription activity.\" --> \"One row per subscription, showing\nstatistics about that subscription.\"\n\n-----\n[Peter-v2] https://www.postgresql.org/message-id/CAHut%2BPv%3DVmXtHmPKp4fg8VDF%2BTQP6xWgL91Jn-hrqg5QObfCZA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 25 Feb 2022 12:56:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 at 9:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > 6. src/backend/postmaster/pgstat.c - function name\n> >\n> > +/* ----------\n> > + * pgstat_reset_subscription_counter() -\n> > + *\n> > + * Tell the statistics collector to reset a single subscription\n> > + * counter, or all subscription counters (when subid is InvalidOid).\n> > + *\n> > + * Permission checking for this function is managed through the normal\n> > + * GRANT system.\n> > + * ----------\n> > + */\n> > +void\n> > +pgstat_reset_subscription_counter(Oid subid)\n> >\n> > SUGGESTION (function name)\n> > \"pgstat_reset_subscription_counter\" -->\n> > \"pgstat_reset_subscription_counters\" (plural)\n>\n> Fixed.\n>\n\nWe don't use the plural form in other similar cases like\npgstat_reset_replslot_counter, pgstat_reset_slru_counter, so why do it\ndifferently here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Feb 2022 10:56:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Tue, 25 Jan 2022 at 01:32, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I was looking the shared memory stats patch again.\n\nCan you point me to this thread? I looked for it but couldn't find it.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 25 Feb 2022 09:52:49 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Fri, Feb 25, 2022, at 11:52 AM, Greg Stark wrote:\n> On Tue, 25 Jan 2022 at 01:32, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > I was looking the shared memory stats patch again.\n> \n> Can you point me to this thread? I looked for it but couldn't find it.\nhttps://postgr.es/m/20180629.173418.190173462.horiguchi.kyotaro@lab.ntt.co.jp\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 25 Feb 2022 16:25:01 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-25 16:25:01 -0300, Euler Taveira wrote:\n> On Fri, Feb 25, 2022, at 11:52 AM, Greg Stark wrote:\n> > On Tue, 25 Jan 2022 at 01:32, Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I was looking the shared memory stats patch again.\n> > \n> > Can you point me to this thread? I looked for it but couldn't find it.\n\n> https://postgr.es/m/20180629.173418.190173462.horiguchi.kyotaro@lab.ntt.co.jp\n\nI'll post a rebased version as soon as this is resolved... I have a local one,\nbut it just works by nuking a bunch of tests / #ifdefing out code related to\nthis.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Feb 2022 11:32:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Thu, Feb 24, 2022 at 9:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n\nI have reviewed the latest version and made a few changes along with\nfixing some of the pending comments by Peter Smith. The changes are as\nfollows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError as\nthat is not required now; (b) changed the struct name\nPgStat_MsgSubscriptionPurge to PgStat_MsgSubscriptionDrop to make it\nsimilar to DropDb; (c) changed the view name to\npg_stat_subscription_stats, we can reconsider it in future if there is\na consensus on some other name, accordingly changed the reset function\nname to pg_stat_reset_subscription_stats; (d) moved some of the newly\nadded subscription stats functions adjacent to slots to maintain\nconsistency in the code; (e) changed comments at a few places; (f) added\nLATERAL back to the system_views query as we refer to pg_subscription's oid\nin the function call, previously that was not clear.\n\nDo let me know what you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 26 Feb 2022 08:21:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Fri, Feb 25, 2022 at 7:26 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Below are my review comments for the v3 patch.\n>\n...\n> 2. doc/src/sgml/monitoring.sgml\n>\n> + <entry><structname>pg_stat_subscription_activity</structname><indexterm><primary>pg_stat_subscription_activity</primary></indexterm></entry>\n> + <entry>One row per subscription, showing statistics about subscription\n> + activity.\n> + See <link linkend=\"monitoring-pg-stat-subscription-activity\">\n> + <structname>pg_stat_subscription_activity</structname></link>\n> for details.\n> </entry>\n> </row>\n>\n> Currently these stats are only about errors. These are not really\n> statistics about \"activity\" though. Probably it is better just to\n> avoid that word altogether?\n>\n> SUGGESTIONS\n>\n> e.g.1. \"One row per subscription, showing statistics about\n> subscription activity.\" --> \"One row per subscription, showing\n> statistics about errors.\"\n>\n\nI preferred this one and made another change suggested by you in the\nlatest version posted by me. Thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 26 Feb 2022 08:24:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Saturday, February 26, 2022 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> I have reviewed the latest version and made a few changes along with fixing\r\n> some of the pending comments by Peter Smith. The changes are as\r\n> follows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError as that is\r\n> not required now; (b) changed the struct name PgStat_MsgSubscriptionPurge\r\n> to PgStat_MsgSubscriptionDrop to make it similar to DropDb; (c) changed the\r\n> view name to pg_stat_subscription_stats, we can reconsider it in future if there\r\n> is a consensus on some other name, accordingly changed the reset function\r\n> name to pg_stat_reset_subscription_stats; (d) moved some of the newly\r\n> added subscription stats functions adjacent to slots to main the consistency in\r\n> code; (e) changed comments at few places; (f) added LATERAL back to\r\n> system_views query as we refer pg_subscription's oid in the function call,\r\n> previously that was not clear.\r\n> \r\n> Do let me know what you think of the attached?\r\nHi, thank you for updating the patch !\r\n\r\n\r\nI have a couple of comments on v4.\r\n\r\n(1)\r\n\r\nI'm not sure if I'm correct, but I'd say the sync_error_count\r\ncan come next to the subname as the order of columns.\r\nI felt there's case that the column order is somewhat\r\nrelated to the time/processing order (I imagined\r\npg_stat_replication's LSN related columns).\r\nIf this was right, table sync related column could be the\r\nfirst column as a counter within this patch.\r\n\r\n\r\n(2) doc/src/sgml/monitoring.sgml\r\n\r\n+ Resets statistics for a single subscription shown in the\r\n+ <structname>pg_stat_subscription_stats</structname> view to zero. 
If\r\n+ the argument is <literal>NULL</literal>, reset statistics for all\r\n+ subscriptions.\r\n </para>\r\n\r\nI felt we could improve the first sentence.\r\n\r\nFrom:\r\nResets statistics for a single subscription shown in the..\r\n\r\nTo(idea1):\r\nResets statistics for a single subscription defined by the argument to zero.\r\n\r\nOr,\r\nTo(idea2):\r\nResets statistics to zero for a single subscription or for all subscriptions.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Sat, 26 Feb 2022 08:05:17 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 26, 2022 at 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 24, 2022 at 9:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> I have reviewed the latest version and made a few changes along with\n> fixing some of the pending comments by Peter Smith.\n\nThank you for updating the patch!\n\n> The changes are as\n> follows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError as\n> that is not required now; (b) changed the struct name\n> PgStat_MsgSubscriptionPurge to PgStat_MsgSubscriptionDrop to make it\n> similar to DropDb; (c) changed the view name to\n> pg_stat_subscription_stats, we can reconsider it in future if there is\n> a consensus on some other name, accordingly changed the reset function\n> name to pg_stat_reset_subscription_stats; (d) moved some of the newly\n> added subscription stats functions adjacent to slots to main the\n> consistency in code; (e) changed comments at few places;\n\nAgreed.\n\n> (f) added\n> LATERAL back to system_views query as we refer pg_subscription's oid\n> in the function call, previously that was not clear.\n\nI think LATERAL is still unnecessary as you pointed out before. The\ndocumentation[1] says,\n\nLATERAL can also precede a function-call FROM item, but in this case\nit is a noise word, because the function expression can refer to\nearlier FROM items in any case.\n\nThe rest looks good to me.\n\nRegards,\n\n[1] https://www.postgresql.org/docs/devel/sql-select.html\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Feb 2022 10:18:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Sat, Feb 26, 2022 at 1:35 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Saturday, February 26, 2022 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I have reviewed the latest version and made a few changes along with fixing\n> > some of the pending comments by Peter Smith. The changes are as\n> > follows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError as that is\n> > not required now; (b) changed the struct name PgStat_MsgSubscriptionPurge\n> > to PgStat_MsgSubscriptionDrop to make it similar to DropDb; (c) changed the\n> > view name to pg_stat_subscription_stats, we can reconsider it in future if there\n> > is a consensus on some other name, accordingly changed the reset function\n> > name to pg_stat_reset_subscription_stats; (d) moved some of the newly\n> > added subscription stats functions adjacent to slots to main the consistency in\n> > code; (e) changed comments at few places; (f) added LATERAL back to\n> > system_views query as we refer pg_subscription's oid in the function call,\n> > previously that was not clear.\n> >\n> > Do let me know what you think of the attached?\n> Hi, thank you for updating the patch !\n>\n>\n> I have a couple of comments on v4.\n>\n> (1)\n>\n> I'm not sure if I'm correct, but I'd say the sync_error_count\n> can come next to the subname as the order of columns.\n> I felt there's case that the column order is somewhat\n> related to the time/processing order (I imagined\n> pg_stat_replication's LSN related columns).\n> If this was right, table sync related column could be the\n> first column as a counter within this patch.\n>\n\nI am not sure if there is such a correlation but even if it is there\nit doesn't seem to fit here completely as sync errors can happen after\napply errors in multiple ways like via Alter Subscription ... Refresh\n...\n\nSo, I don't see the need to change the order here. 
What do you or others think?\n\n>\n> (2) doc/src/sgml/monitoring.sgml\n>\n> + Resets statistics for a single subscription shown in the\n> + <structname>pg_stat_subscription_stats</structname> view to zero. If\n> + the argument is <literal>NULL</literal>, reset statistics for all\n> + subscriptions.\n> </para>\n>\n> I felt we could improve the first sentence.\n>\n> From:\n> Resets statistics for a single subscription shown in the..\n>\n> To(idea1):\n> Resets statistics for a single subscription defined by the argument to zero.\n>\n\nOkay, I can use this one.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 28 Feb 2022 08:03:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 26, 2022 at 1:35 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Saturday, February 26, 2022 11:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I have reviewed the latest version and made a few changes along with fixing\n> > > some of the pending comments by Peter Smith. The changes are as\n> > > follows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError as that is\n> > > not required now; (b) changed the struct name PgStat_MsgSubscriptionPurge\n> > > to PgStat_MsgSubscriptionDrop to make it similar to DropDb; (c) changed the\n> > > view name to pg_stat_subscription_stats, we can reconsider it in future if there\n> > > is a consensus on some other name, accordingly changed the reset function\n> > > name to pg_stat_reset_subscription_stats; (d) moved some of the newly\n> > > added subscription stats functions adjacent to slots to main the consistency in\n> > > code; (e) changed comments at few places; (f) added LATERAL back to\n> > > system_views query as we refer pg_subscription's oid in the function call,\n> > > previously that was not clear.\n> > >\n> > > Do let me know what you think of the attached?\n> > Hi, thank you for updating the patch !\n> >\n> >\n> > I have a couple of comments on v4.\n> >\n> > (1)\n> >\n> > I'm not sure if I'm correct, but I'd say the sync_error_count\n> > can come next to the subname as the order of columns.\n> > I felt there's case that the column order is somewhat\n> > related to the time/processing order (I imagined\n> > pg_stat_replication's LSN related columns).\n> > If this was right, table sync related column could be the\n> > first column as a counter within this patch.\n> >\n>\n> I am not sure if there is such a correlation but even if it is there\n> it doesn't seem to fit here completely as sync errors can happen after\n> apply errors in multiple ways like via 
Alter Subscription ... Refresh\n> ...\n>\n> So, I don't see the need to change the order here. What do you or others think?\n\nI'm also not sure about it, both sound good to me. Probably we can\nchange the order later.\n\n>\n> >\n> > (2) doc/src/sgml/monitoring.sgml\n> >\n> > + Resets statistics for a single subscription shown in the\n> > + <structname>pg_stat_subscription_stats</structname> view to zero. If\n> > + the argument is <literal>NULL</literal>, reset statistics for all\n> > + subscriptions.\n> > </para>\n> >\n> > I felt we could improve the first sentence.\n> >\n> > From:\n> > Resets statistics for a single subscription shown in the..\n> >\n> > To(idea1):\n> > Resets statistics for a single subscription defined by the argument to zero.\n> >\n>\n> Okay, I can use this one.\n\nAre you going to remove the part \"shown in the\npg_stat_subsctiption_stats view\"? I think it's better to keep it in\norder to make it clear which statistics the function resets as we have\npg_stat_subscription and pg_stat_subscription_stats.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Feb 2022 11:46:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 8:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Feb 28, 2022 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > (2) doc/src/sgml/monitoring.sgml\n> > >\n> > > + Resets statistics for a single subscription shown in the\n> > > + <structname>pg_stat_subscription_stats</structname> view to zero. If\n> > > + the argument is <literal>NULL</literal>, reset statistics for all\n> > > + subscriptions.\n> > > </para>\n> > >\n> > > I felt we could improve the first sentence.\n> > >\n> > > From:\n> > > Resets statistics for a single subscription shown in the..\n> > >\n> > > To(idea1):\n> > > Resets statistics for a single subscription defined by the argument to zero.\n> > >\n> >\n> > Okay, I can use this one.\n>\n> Are you going to remove the part \"shown in the\n> pg_stat_subsctiption_stats view\"? I think it's better to keep it in\n> order to make it clear which statistics the function resets as we have\n> pg_stat_subscription and pg_stat_subscription_stats.\n>\n\nHow about the following:\n\"Resets statistics for a single subscription defined by the argument\nshown in the <structname>pg_stat_subscription_stats</structname> view\nto zero. If the argument is <literal>NULL</literal>, reset statistics\nfor all subscriptions.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 28 Feb 2022 08:22:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 28, 2022 at 8:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Feb 28, 2022 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > >\n> > > > (2) doc/src/sgml/monitoring.sgml\n> > > >\n> > > > + Resets statistics for a single subscription shown in the\n> > > > + <structname>pg_stat_subscription_stats</structname> view to zero. If\n> > > > + the argument is <literal>NULL</literal>, reset statistics for all\n> > > > + subscriptions.\n> > > > </para>\n> > > >\n> > > > I felt we could improve the first sentence.\n> > > >\n> > > > From:\n> > > > Resets statistics for a single subscription shown in the..\n> > > >\n> > > > To(idea1):\n> > > > Resets statistics for a single subscription defined by the argument to zero.\n> > > >\n> > >\n> > > Okay, I can use this one.\n> >\n> > Are you going to remove the part \"shown in the\n> > pg_stat_subsctiption_stats view\"? I think it's better to keep it in\n> > order to make it clear which statistics the function resets as we have\n> > pg_stat_subscription and pg_stat_subscription_stats.\n> >\n>\n> How about the following:\n> \"Resets statistics for a single subscription defined by the argument\n> shown in the <structname>pg_stat_subscription_stats</structname> view\n> to zero. If the argument is <literal>NULL</literal>, reset statistics\n> for all subscriptions.\"\n\nSounds good but I'm not sure it's correct in terms of English grammar.\nShouldn't it be something like \"subscription that is defined by the\nargument and shown in the pg_stat_subscription_stats\"?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 28 Feb 2022 12:14:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Monday, February 28, 2022 11:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Sat, Feb 26, 2022 at 1:35 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Saturday, February 26, 2022 11:51 AM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > I have reviewed the latest version and made a few changes along with\r\n> > > fixing some of the pending comments by Peter Smith. The changes are\r\n> > > as\r\n> > > follows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError as\r\n> > > that is not required now; (b) changed the struct name\r\n> > > PgStat_MsgSubscriptionPurge to PgStat_MsgSubscriptionDrop to make it\r\n> > > similar to DropDb; (c) changed the view name to\r\n> > > pg_stat_subscription_stats, we can reconsider it in future if there\r\n> > > is a consensus on some other name, accordingly changed the reset\r\n> > > function name to pg_stat_reset_subscription_stats; (d) moved some of\r\n> > > the newly added subscription stats functions adjacent to slots to\r\n> > > main the consistency in code; (e) changed comments at few places;\r\n> > > (f) added LATERAL back to system_views query as we refer\r\n> pg_subscription's oid in the function call, previously that was not clear.\r\n> > >\r\n> > > Do let me know what you think of the attached?\r\n> > Hi, thank you for updating the patch !\r\n> > I have a couple of comments on v4.\r\n> >\r\n> > (1)\r\n> >\r\n> > I'm not sure if I'm correct, but I'd say the sync_error_count can come\r\n> > next to the subname as the order of columns.\r\n> > I felt there's case that the column order is somewhat related to the\r\n> > time/processing order (I imagined pg_stat_replication's LSN related\r\n> > columns).\r\n> > If this was right, table sync related column could be the first column\r\n> > as a counter within this patch.\r\n> >\r\n> \r\n> I am not sure if there is such a correlation but even if it is there it doesn't seem\r\n> to fit here 
completely as sync errors can happen after apply errors in multiple\r\n> ways like via Alter Subscription ... Refresh ...\r\n> \r\n> So, I don't see the need to change the order here. What do you or others think?\r\nIn the alter subscription case, any errors after the table sync would increment\r\napply_error_count.\r\n\r\nI mentioned this, because this point of view would impact on the doc read by users\r\nand internal source codes for developers.\r\nI had a concern that when we extend and increase a lot of statistics (not only for this view,\r\nbut also other statistics in general), writing doc for statistics needs some alignment for better\r\nreadability.\r\n\r\n*But*, as you mentioned, in case we don't have such a correlation, I'm okay with the current patch.\r\nThank you for replying.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 28 Feb 2022 03:19:22 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Below are my comments for the v4 patch.\n\nThese are only nitpicking comments now; Otherwise, it LGTM.\n\n(Sorry, now I see there are some overlaps with comments posted in the\nlast 20 mins so take or leave these as you wish)\n\n======\n\n1. doc/src/sgml/monitoring.sgml\n\n- <para>\n- OID of the relation that the worker was processing when the\n- error occurred\n+ Number of times an error occurred during the application of changes\n </para></entry>\n </row>\n\nBEFORE\nNumber of times an error occurred during the application of changes\nSUGGESTED\nNumber of times an error occurred while applying changes\n\n~~~\n\n2. doc/src/sgml/monitoring.sgml\n\n+ Resets statistics for a single subscription shown in the\n+ <structname>pg_stat_subscription_stats</structname> view to zero. If\n+ the argument is <literal>NULL</literal>, reset statistics for all\n+ subscriptions.\n </para>\n\nSUGGESTED (simpler description, more similar to pg_stat_reset_replication_slot)\nReset statistics to zero for a single subscription. If the argument is\n<literal>NULL</literal>, reset statistics for all subscriptions.\n\n~~~\n\n3. src/backend/replication/logical/worker.c - comment\n\n+ /* */\n+ pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());\n\nBEFORE\nReport the worker failed during the application of the change\nSUGGESTED\nReport the worker failed while applying changes\n\n~~~\n\n4. src/include/pgstat.h - comment\n\n+typedef struct PgStat_MsgResetsubcounter\n+{\n+ PgStat_MsgHdr m_hdr;\n+ Oid m_subid; /* InvalidOid for clearing all subscription\n+ * stats */\n+} PgStat_MsgResetsubcounter;\n\nBEFORE\nInvalidOid for clearing all subscription stats\nSUGGESTED\nInvalidOid means reset all subscription stats\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 28 Feb 2022 14:29:13 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 8:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> 2. doc/src/sgml/monitoring.sgml\n>\n> + Resets statistics for a single subscription shown in the\n> + <structname>pg_stat_subscription_stats</structname> view to zero. If\n> + the argument is <literal>NULL</literal>, reset statistics for all\n> + subscriptions.\n> </para>\n>\n> SUGGESTED (simpler description, more similar to pg_stat_reset_replication_slot)\n> Reset statistics to zero for a single subscription. If the argument is\n> <literal>NULL</literal>, reset statistics for all subscriptions.\n>\n\nAs discussed, it is better to keep the view name in this description\nimportant as we have another view (pg_stat_susbcription) as well. So,\nI am planning to retain the current wording.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 28 Feb 2022 09:20:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 8:49 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, February 28, 2022 11:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Sat, Feb 26, 2022 at 1:35 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Saturday, February 26, 2022 11:51 AM Amit Kapila\n> > <amit.kapila16@gmail.com> wrote:\n> > > > I have reviewed the latest version and made a few changes along with\n> > > > fixing some of the pending comments by Peter Smith. The changes are\n> > > > as\n> > > > follows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError as\n> > > > that is not required now; (b) changed the struct name\n> > > > PgStat_MsgSubscriptionPurge to PgStat_MsgSubscriptionDrop to make it\n> > > > similar to DropDb; (c) changed the view name to\n> > > > pg_stat_subscription_stats, we can reconsider it in future if there\n> > > > is a consensus on some other name, accordingly changed the reset\n> > > > function name to pg_stat_reset_subscription_stats; (d) moved some of\n> > > > the newly added subscription stats functions adjacent to slots to\n> > > > main the consistency in code; (e) changed comments at few places;\n> > > > (f) added LATERAL back to system_views query as we refer\n> > pg_subscription's oid in the function call, previously that was not clear.\n> > > >\n> > > > Do let me know what you think of the attached?\n> > > Hi, thank you for updating the patch !\n> > > I have a couple of comments on v4.\n> > >\n> > > (1)\n> > >\n> > > I'm not sure if I'm correct, but I'd say the sync_error_count can come\n> > > next to the subname as the order of columns.\n> > > I felt there's case that the column order is somewhat related to the\n> > > time/processing order (I imagined pg_stat_replication's LSN related\n> > > columns).\n> > > If this was right, table sync related column could be the first column\n> > > as a counter within this patch.\n> > >\n> >\n> > I am 
not sure if there is such a correlation but even if it is there it doesn't seem\n> > to fit here completely as sync errors can happen after apply errors in multiple\n> > ways like via Alter Subscription ... Refresh ...\n> >\n> > So, I don't see the need to change the order here. What do you or others think?\n> In the alter subscription case, any errors after the table sync would increment\n> apply_error_count.\n>\n\nSure, but the point I was trying to explain was that there is no\ncertainty in the order of these errors.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 28 Feb 2022 09:27:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Monday, February 28, 2022 12:57 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> On Mon, Feb 28, 2022 at 8:49 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Monday, February 28, 2022 11:34 AM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > On Sat, Feb 26, 2022 at 1:35 PM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > On Saturday, February 26, 2022 11:51 AM Amit Kapila\r\n> > > <amit.kapila16@gmail.com> wrote:\r\n> > > > > I have reviewed the latest version and made a few changes along\r\n> > > > > with fixing some of the pending comments by Peter Smith. The\r\n> > > > > changes are as\r\n> > > > > follows: (a) Removed m_databaseid in PgStat_MsgSubscriptionError\r\n> > > > > as that is not required now; (b) changed the struct name\r\n> > > > > PgStat_MsgSubscriptionPurge to PgStat_MsgSubscriptionDrop to\r\n> > > > > make it similar to DropDb; (c) changed the view name to\r\n> > > > > pg_stat_subscription_stats, we can reconsider it in future if\r\n> > > > > there is a consensus on some other name, accordingly changed the\r\n> > > > > reset function name to pg_stat_reset_subscription_stats; (d)\r\n> > > > > moved some of the newly added subscription stats functions\r\n> > > > > adjacent to slots to main the consistency in code; (e) changed\r\n> > > > > comments at few places;\r\n> > > > > (f) added LATERAL back to system_views query as we refer\r\n> > > pg_subscription's oid in the function call, previously that was not clear.\r\n> > > > >\r\n> > > > > Do let me know what you think of the attached?\r\n> > > > Hi, thank you for updating the patch !\r\n> > > > I have a couple of comments on v4.\r\n> > > >\r\n> > > > (1)\r\n> > > >\r\n> > > > I'm not sure if I'm correct, but I'd say the sync_error_count can\r\n> > > > come next to the subname as the order of columns.\r\n> > > > I felt there's case that the column order is somewhat related to\r\n> > > > 
the time/processing order (I imagined pg_stat_replication's LSN\r\n> > > > related columns).\r\n> > > > If this was right, table sync related column could be the first\r\n> > > > column as a counter within this patch.\r\n> > > >\r\n> > >\r\n> > > I am not sure if there is such a correlation but even if it is there\r\n> > > it doesn't seem to fit here completely as sync errors can happen\r\n> > > after apply errors in multiple ways like via Alter Subscription ... Refresh ...\r\n> > >\r\n> > > So, I don't see the need to change the order here. What do you or others\r\n> think?\r\n> > In the alter subscription case, any errors after the table sync would\r\n> > increment apply_error_count.\r\n> >\r\n> \r\n> Sure, but the point I was trying to explain was that there is no certainty in the\r\n> order of these errors.\r\nI got it. Thank you so much for your explanation.\r\n\r\n\r\nI don't have other new comments on this patch.\r\nIt looks good to me as well.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 28 Feb 2022 04:06:34 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 8:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Feb 28, 2022 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > (2) doc/src/sgml/monitoring.sgml\n> > >\n> > > + Resets statistics for a single subscription shown in the\n> > > + <structname>pg_stat_subscription_stats</structname> view to zero. If\n> > > + the argument is <literal>NULL</literal>, reset statistics for all\n> > > + subscriptions.\n> > > </para>\n> > >\n> > > I felt we could improve the first sentence.\n> > >\n> > > From:\n> > > Resets statistics for a single subscription shown in the..\n> > >\n> > > To(idea1):\n> > > Resets statistics for a single subscription defined by the argument to zero.\n> > >\n> >\n> > Okay, I can use this one.\n>\n> Are you going to remove the part \"shown in the\n> pg_stat_subsctiption_stats view\"? I think it's better to keep it in\n> order to make it clear which statistics the function resets as we have\n> pg_stat_subscription and pg_stat_subscription_stats.\n>\n\nI decided to keep this part of the docs as it is and fixed a few other\nminor comments raised by you and Peter. Additionally, I have bumped\nthe PGSTAT_FILE_FORMAT_ID. I'll push this patch tomorrow unless there\nare any other major comments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 28 Feb 2022 10:01:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Feb 28, 2022 at 8:17 AM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Mon, Feb 28, 2022 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > >\r\n> > > > (2) doc/src/sgml/monitoring.sgml\r\n> > > >\r\n> > > > + Resets statistics for a single subscription shown in the\r\n> > > > + <structname>pg_stat_subscription_stats</structname> view to zero. If\r\n> > > > + the argument is <literal>NULL</literal>, reset statistics for all\r\n> > > > + subscriptions.\r\n> > > > </para>\r\n> > > >\r\n> > > > I felt we could improve the first sentence.\r\n> > > >\r\n> > > > From:\r\n> > > > Resets statistics for a single subscription shown in the..\r\n> > > >\r\n> > > > To(idea1):\r\n> > > > Resets statistics for a single subscription defined by the argument to zero.\r\n> > > >\r\n> > >\r\n> > > Okay, I can use this one.\r\n> >\r\n> > Are you going to remove the part \"shown in the\r\n> > pg_stat_subsctiption_stats view\"? I think it's better to keep it in\r\n> > order to make it clear which statistics the function resets as we have\r\n> > pg_stat_subscription and pg_stat_subscription_stats.\r\n> >\r\n> \r\n> I decided to keep this part of the docs as it is and fixed a few other\r\n> minor comments raised by you and Peter. Additionally, I have bumped\r\n> the PGSTAT_FILE_FORMAT_ID. I'll push this patch tomorrow unless there\r\n> are any other major comments.\r\n> \r\n\r\nThanks for your patch. I have finished the review/test for this patch.\r\nThe patch LGTM.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Mon, 28 Feb 2022 07:45:37 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 1:15 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Feb 28, 2022 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I decided to keep this part of the docs as it is and fixed a few other\n> > minor comments raised by you and Peter. Additionally, I have bumped\n> > the PGSTAT_FILE_FORMAT_ID. I'll push this patch tomorrow unless there\n> > are any other major comments.\n> >\n>\n> Thanks for your patch. I have finished the review/test for this patch.\n> The patch LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Mar 2022 07:35:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-02 07:35:46 +0530, Amit Kapila wrote:\n> Pushed.\n\nThanks!\n\nWorking on rebasing shared memory stats over this. Feels *much* better so far.\n\nWhile rebasing, I was wondering why pgstat_reset_subscription_counter() has\n\"all subscription counters\" support. We don't have a function to reset all\nfunction stats or such either.\n\nI'm asking because support for that is what currently prevents sub stats from\nusing the more general function for reset. It's an acceptable amount of code,\nbut if we don't really need it I'd rather not have it / add it in a more\ngeneral way if we want it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Mar 2022 21:09:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 10:39 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Working on rebasing shared memory stats over this. Feels *much* better so far.\n>\n\nGood to hear that it helps. BTW, there is another patch [1] that\nextends this view. I think that patch is still not ready but once it\nis ready (which I expect to happen sometime in this CF), it might be\ngood if you would be able to check whether it has any major problem\nwith your integration.\n\n> While rebasing, I was wondering why pgstat_reset_subscription_counter() has\n> \"all subscription counters\" support. We don't have a function to reset all\n> function stats or such either.\n>\n\nWe have similar thing for srlu (pg_stat_reset_slru) and slots\n(pg_stat_reset_replication_slot). For functions and tables, one can\nuse pg_stat_reset. Similarly, we have pg_stat_reset_shared() which\nreset stats like WAL. This matches more with slru/slots, so we\nprovidied it via pg_stat_reset_subscription_stats.\n\n[1] - https://www.postgresql.org/message-id/TYWPR01MB8362B30A904274A911C0B1CCED039%40TYWPR01MB8362.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Mar 2022 12:39:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-02 12:39:57 +0530, Amit Kapila wrote:\n> On Wed, Mar 2, 2022 at 10:39 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Working on rebasing shared memory stats over this. Feels *much* better so far.\n> >\n> \n> Good to hear that it helps. BTW, there is another patch [1] that\n> extends this view. I think that patch is still not ready but once it\n> is ready (which I expect to happen sometime in this CF), it might be\n> good if you would be able to check whether it has any major problem\n> with your integration.\n\nI skimmed it briefly, and I don't see an architectural conflict. I'm not\nconvinced it's worth adding that information, but that's a separate\ndiscussion.\n\n\n> > While rebasing, I was wondering why pgstat_reset_subscription_counter() has\n> > \"all subscription counters\" support. We don't have a function to reset all\n> > function stats or such either.\n> >\n> \n> We have similar thing for srlu (pg_stat_reset_slru) and slots\n> (pg_stat_reset_replication_slot).\n\nNeither should have been added imo. We're already at 9 different reset\nfunctions. Without a unified function to reset all stats, pretty much the only\nactually relevant operation. And having pg_stat_reset_shared() with variable\n'reset' systems but then also pg_stat_reset_slru() and\npg_stat_reset_subscription_stats() is absurd.\n\nThis is just making something incomprehensible evermore incomprehensible.\n\n\n> For functions and tables, one can use pg_stat_reset.\n\nNot in isolation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Mar 2022 23:56:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 1:26 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > > While rebasing, I was wondering why pgstat_reset_subscription_counter() has\n> > > \"all subscription counters\" support. We don't have a function to reset all\n> > > function stats or such either.\n> > >\n> >\n> > We have similar thing for srlu (pg_stat_reset_slru) and slots\n> > (pg_stat_reset_replication_slot).\n>\n> Neither should have been added imo. We're already at 9 different reset\n> functions.\n>\n\nAs per [1], we have 7.\n\n>\n> And having pg_stat_reset_shared() with variable\n> 'reset' systems but then also pg_stat_reset_slru() and\n> pg_stat_reset_subscription_stats() is absurd.\n>\n\nI don't know. I feel if for some subsystem, we have a way to reset a\nsingle counter like for slru or slots, one would prefer to use the\nsame function to reset all stats of that subsytem. Now, for WAL,\nbgwriter, etc., we don't want to reset any specific counter, so doing\nit via a shared function is okay but not for others.\n\n[1] - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-STATS-FUNCTIONS\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Mar 2022 16:08:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-25 11:32:24 -0800, Andres Freund wrote:\n> On 2022-02-25 16:25:01 -0300, Euler Taveira wrote:\n> > On Fri, Feb 25, 2022, at 11:52 AM, Greg Stark wrote:\n> > > On Tue, 25 Jan 2022 at 01:32, Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > I was looking the shared memory stats patch again.\n> > > \n> > > Can you point me to this thread? I looked for it but couldn't find it.\n> \n> > https://postgr.es/m/20180629.173418.190173462.horiguchi.kyotaro@lab.ntt.co.jp\n> \n> I'll post a rebased version as soon as this is resolved... I have a local one,\n> but it just works by nuking a bunch of tests / #ifdefing out code related to\n> this.\n\nNow that the pg_stat_subscription_workers changes have been committed, I've\nposted a rebased version to the above thread.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 2 Mar 2022 18:21:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Design of pg_stat_subscription_workers vs pgstats"
}
] |
[
{
"msg_contents": "Hi,\n\nCommit 6e0cb3dec1 allowed postgres_fdw.application_name to include escape sequences %a (application name), %d (database name), %u (user name) and %p (pid). In addition to them, I'd like to support the escape sequence (e.g., %C) for cluster name there. This escape sequence is helpful to investigate where each remote transactions came from. Thought?\n\nPatch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 25 Jan 2022 16:02:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "At Tue, 25 Jan 2022 16:02:39 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Hi,\n> \n> Commit 6e0cb3dec1 allowed postgres_fdw.application_name to include\n> escape sequences %a (application name), %d (database name), %u (user\n> name) and %p (pid). In addition to them, I'd like to support the\n> escape sequence (e.g., %C) for cluster name there. This escape\n> sequence is helpful to investigate where each remote transactions came\n> from. Thought?\n> \n> Patch attached.\n\nI don't object to adding more meaningful replacements, but more escape\nsequence makes me anxious about the increased easiness of exceeding\nthe size limit of application_name. Considering that it is used to\nidentify fdw-initinator server, we might need to add padding (or\nrather truncating) option in the escape sequence syntax, then warn\nabout truncated application_names for safety.\n\nIs the reason for 'C' in upper-case to avoid possible conflict with\n'c' of log_line_prefix? I'm not sure that preventive measure is worth\ndoing. Looking the escape-sequence spec alone, it seems to me rather\nstrange that an upper-case letter is used in spite of its lower-case\nis not used yet.\n\nOtherwise all looks fine to me except the lack of documentation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 27 Jan 2022 17:10:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "\n\nOn 2022/01/27 17:10, Kyotaro Horiguchi wrote:\n> At Tue, 25 Jan 2022 16:02:39 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Hi,\n>>\n>> Commit 6e0cb3dec1 allowed postgres_fdw.application_name to include\n>> escape sequences %a (application name), %d (database name), %u (user\n>> name) and %p (pid). In addition to them, I'd like to support the\n>> escape sequence (e.g., %C) for cluster name there. This escape\n>> sequence is helpful to investigate where each remote transactions came\n>> from. Thought?\n>>\n>> Patch attached.\n> \n> I don't object to adding more meaningful replacements, but more escape\n> sequence makes me anxious about the increased easiness of exceeding\n> the size limit of application_name.\n\nIf this is really an issue, it might be time to reconsider the size limit of application_name. If it's considered too short, the patch that enlarges it should be proposed separately.\n\n> Considering that it is used to\n> identify fdw-initinator server, we might need to add padding (or\n> rather truncating) option in the escape sequence syntax, then warn\n> about truncated application_names for safety.\n\nI failed to understand this. Could you tell me why we might need to add padding option here?\n\n> Is the reason for 'C' in upper-case to avoid possible conflict with\n> 'c' of log_line_prefix?\n\nYes.\n\n> I'm not sure that preventive measure is worth\n> doing. Looking the escape-sequence spec alone, it seems to me rather\n> strange that an upper-case letter is used in spite of its lower-case\n> is not used yet.\n\nI have no strong opinion about using %C. If there is better character for the escape sequence, I'm happy to use it. So what character is more proper? %c?\n\n> Otherwise all looks fine to me except the lack of documentation.\n\nThe patch updated postgres-fdw.sgml, but you imply there are other documents that the patch should update? 
Could you tell me where the patch should update?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 27 Jan 2022 19:26:39 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "Hi,\n\n\nThank you for developing this feature.\nI think adding escape sequence for cluster_name is useful too.\n\n> Is the reason for 'C' in upper-case to avoid possible conflict with\n> 'c' of log_line_prefix? I'm not sure that preventive measure is worth\n> doing. Looking the escape-sequence spec alone, it seems to me rather\n> strange that an upper-case letter is used in spite of its lower-case\n> is not used yet.\n\nI think %c of log_line_prefix (Session ID) is also useful for postgres_fdw.application_name.\nTherefore, how about adding both %c (Session ID) and %C (cluster_name)?\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Fri, 28 Jan 2022 05:07:28 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "On Thu, Jan 27, 2022 at 3:10 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Is the reason for 'C' in upper-case to avoid possible conflict with\n> 'c' of log_line_prefix? I'm not sure that preventive measure is worth\n> doing. Looking the escape-sequence spec alone, it seems to me rather\n> strange that an upper-case letter is used in spite of its lower-case\n> is not used yet.\n\nIt's good to be consistent, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Jan 2022 09:10:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "On 2022/01/28 14:07, r.takahashi_2@fujitsu.com wrote:\n> I think %c of log_line_prefix (Session ID) is also useful for postgres_fdw.application_name.\n> Therefore, how about adding both %c (Session ID) and %C (cluster_name)?\n\n+1\n\nAttached is the updated version of the patch. It adds those escape sequences %c and %C.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 7 Feb 2022 23:03:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "Hi,\n\n\nThank you for updating the patch.\nI agree with the documentation and program.\n\nHow about adding the test for %c (Session ID)?\n(Adding the test for %C (cluster_name) seems difficult.)\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Wed, 9 Feb 2022 00:19:03 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "Sorry for missing this.\n\nAt Thu, 27 Jan 2022 19:26:39 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> On 2022/01/27 17:10, Kyotaro Horiguchi wrote:\n> > I don't object to adding more meaningful replacements, but more escape\n> > sequence makes me anxious about the increased easiness of exceeding\n> > the size limit of application_name.\n> \n> If this is really an issue, it might be time to reconsider the size\n> limit of application_name. If it's considered too short, the patch\n> that enlarges it should be proposed separately.\n\nThat makes sense.\n\n> > Considering that it is used to\n> > identify fdw-initinator server, we might need to add padding (or\n> > rather truncating) option in the escape sequence syntax, then warn\n> > about truncated application_names for safety.\n> \n> I failed to understand this. Could you tell me why we might need to\n> add padding option here?\n\nMy point was \"truncating\" option, which limits the length of the\nreplacement string. But expanding the application_name limit is more\nsensible.\n\n> > Is the reason for 'C' in upper-case to avoid possible conflict with\n> > 'c' of log_line_prefix?\n> \n> Yes.\n> \n> > I'm not sure that preventive measure is worth\n> > doing. Looking the escape-sequence spec alone, it seems to me rather\n> > strange that an upper-case letter is used in spite of its lower-case\n> > is not used yet.\n> \n> I have no strong opinion about using %C. If there is better character\n> for the escape sequence, I'm happy to use it. So what character is\n> more proper? %c?\n\nI think so.\n\n> > Otherwise all looks fine to me except the lack of documentation.\n> \n> The patch updated postgres-fdw.sgml, but you imply there are other\n> documents that the patch should update? Could you tell me where the\n> patch should update?\n\nMmm. I should have missed that part.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 09 Feb 2022 16:55:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "On 2022/02/09 9:19, r.takahashi_2@fujitsu.com wrote:\n> Hi,\n> \n> \n> Thank you for updating the patch.\n> I agree with the documentation and program.\n> \n> How about adding the test for %c (Session ID)?\n> (Adding the test for %C (cluster_name) seems difficult.)\n\nOk, I added the tests for %c and %C escape sequences.\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 10 Feb 2022 23:42:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "Hi Fujii san,\n\n\nThank you for updating the patch.\nI have no additional comments.\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Mon, 14 Feb 2022 23:52:58 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
},
{
"msg_contents": "\n\nOn 2022/02/15 8:52, r.takahashi_2@fujitsu.com wrote:\n> Hi Fujii san,\n> \n> \n> Thank you for updating the patch.\n> I have no additional comments.\n\nThanks for the review! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 18 Feb 2022 11:40:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Support escape sequence for cluster_name in\n postgres_fdw.application_name"
}
] |