[ { "msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n    if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count        \n  OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );   \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how 
set this prefetch attribute in following lines. Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n    {\n        myNumColumns = PQnfields(mySqlResultsPG);\n        myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n        myCurrentRowNum = 0 ;\n    }\n\n\n \nRegards\nTarkeshwar", "msg_date": "Thu, 17 Oct 2019 11:16:29 +0000", "msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>", "msg_from_op": true, "msg_subject": "Can you please tell us how set this prefetch attribute in following\n lines." }, { "msg_contents": "On Thu, 2019-10-17 at 11:16 +0000, M Tarkeshwar Rao wrote:\r\n> [EXTERNAL SOURCE]\r\n> \r\n> \r\n> \r\n> Hi all,\r\n> \r\n> How to fetch certain number of tuples from a postgres table.\r\n> \r\n> Same I am doing in oracle using following lines by setting prefetch attribute.\r\n> \r\n> For oracle\r\n> // Prepare query\r\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\r\n> // Get statement type\r\n> OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\r\n> // Set prefetch count \r\n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \r\n> // Execute query\r\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\r\n> \r\n> \r\n> For Postgres\r\n> \r\n> Can you please tell us how set this prefetch attribute in following lines. 
Is PQexec returns all the rows from the table?\r\n> \r\n> mySqlResultsPG = PQexec(connection, aSqlStatement);\r\n> if((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\r\n> if ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\r\n> {\r\n> myNumColumns = PQnfields(mySqlResultsPG);\r\n> myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\r\n> myCurrentRowNum = 0 ;\r\n> }\r\n> \r\n> \r\n> Regards\r\n> Tarkeshwar\r\n> \r\n\r\ndeclare a cursor and fetch\r\n\r\nhttps://books.google.com/books?id=Nc5ZT2X5mOcC&pg=PA405&lpg=PA405&dq=pqexec+fetch&source=bl&ots=8P8w5JemcL&sig=ACfU3U0POGGSP0tYTrs5oxykJdOeffaspA&hl=en&sa=X&ved=2ahUKEwjevbmA2KPlAhXukOAKHaBIBcoQ6AEwCnoECDEQAQ#v=onepage&q=pqexec%20fetch&f=false\r\n\r\n\r\n", "msg_date": "Thu, 17 Oct 2019 16:18:42 +0000", "msg_from": "Reid Thompson <Reid.Thompson@omnicell.com>", "msg_from_op": false, "msg_subject": "Re: Can you please tell us how set this prefetch attribute in\n following lines." }, { "msg_contents": "On Thu, 2019-10-17 at 11:16 +0000, M Tarkeshwar Rao wrote:\n> How to fetch certain number of tuples from a postgres table.\n> \n> Same I am doing in oracle using following lines by setting prefetch attribute.\n> \n> For oracle\n> // Prepare query\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n> // Get statement type\n> OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n> // Set prefetch count \n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \n> // Execute query\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n> \n> For Postgres\n> \n> Can you please tell us how set this prefetch attribute in following lines. 
Is PQexec returns all the rows from the table?\n> \n> mySqlResultsPG = PQexec(connection, aSqlStatement);\n> \n> if((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\n> if ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n> {\n> myNumColumns = PQnfields(mySqlResultsPG);\n> myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n> myCurrentRowNum = 0 ;\n> }\n\nThe C API doesn't offer anything like Oracle prefetch to force prefetching of a certain\nnumber of result rows.\n\nIn the PostgreSQL code you show above, the whole result set will be fetched in one go\nand cached in client RAM, so in a way you have \"prefetch all\".\n\nThe alternative thet the C API gives you is PQsetSingleRowMode(), which, when called,\nwill return the result rows one by one, as they arrive from the server.\nThat disables prefetching.\n\nIf you want to prefetch only a certain number of rows, you can use the DECLARE and\nFETCH SQL statements to create a cursor in SQL and fetch it in batches.\n\nThis workaround has the down side that the current query shown in \"pg_stat_activity\"\nor \"pg_stat_statements\" is always something like \"FETCH 32\", and you are left to guess\nwhich statement actually caused the problem.\n\n\nIf you are willing to bypass the C API and directly speak the network protocol with\nthe server, you can do better. 
This is documented in\nhttps://www.postgresql.org/docs/current/protocol.html\n\nThe \"Execute\" ('E') message allows you to send an integer with the maximum number of\nrows to return (0 means everything), so that does exactly what you want.\n\nThe backend will send a \"PortalSuspended\" ('s') to indicate that there is more to come,\nand you keep sending \"Execute\" until you get a \"CommandComplete\" ('C').\n\nI you feel hacky you could write C API support for that...\n\n\nIf you use that or a cursor, PostgreSQL will know that you are executing a cursor\nand will plan its queries differently: it will assume that only \"cursor_tuple_fraction\"\n(default 0.1) of your result set is actually fetched and prefer fast startup plans.\nIf you don't want that, because you are fetching batches as fast as you can without\nlengthy intermediate client processing, you might want to set the parameter to 1.0.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Thu, 17 Oct 2019 19:05:57 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Can you please tell us how set this prefetch attribute in\n following lines." }, { "msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. 
Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n    if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count        \n  OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );   \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how set this prefetch attribute in following lines. 
Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n    {\n        myNumColumns = PQnfields(mySqlResultsPG);\n        myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n        myCurrentRowNum = 0 ;\n    }\n \nRegards\nTarkeshwar", "msg_date": "Fri, 18 Oct 2019 03:43:38 +0000", "msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>", "msg_from_op": true, "msg_subject": "Can you please tell us how set this prefetch attribute in following\n lines." }, { "msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. 
Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n    if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count        \n  OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );   \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how set this prefetch attribute in following lines. 
Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n    {\n        myNumColumns = PQnfields(mySqlResultsPG);\n        myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n        myCurrentRowNum = 0 ;\n    }\n \nRegards\nTarkeshwar", "msg_date": "Fri, 18 Oct 2019 03:47:13 +0000", "msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>", "msg_from_op": true, "msg_subject": "RE: Can you please tell us how set this prefetch attribute in\n following lines." }, { "msg_contents": "Hi all,\n\n\n\nHow to fetch certain number of tuples from a postgres table.\n\n\n\nSame I am doing in oracle using following lines by setting prefetch attribute.\n\n\n\nFor oracle\n// Prepare query\n if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count\n OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n\n\nFor Postgres\n\n\n\nCan you please tell us how set this prefetch attribute in following lines. 
Is PQexec returns all the rows from the table?\n\n\nmySqlResultsPG = PQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n {\n myNumColumns = PQnfields(mySqlResultsPG);\n myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n myCurrentRowNum = 0 ;\n }\n\n\n\nRegards\n\nTarkeshwar\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nHow to fetch certain number of tuples from a postgres table.\n\n \nSame I am doing in oracle\nusing following lines by setting \nprefetch attribute.\n \nFor oracle\n\n// Prepare query\n    if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n// Get statement type\n OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n// Set prefetch count        \n  OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );   \n// Execute query\nstatus = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n \n \nFor Postgres\n\n \nCan you please tell us how set this prefetch attribute in following lines. 
Is\nPQexec\nreturns all the rows from the table?\n \nmySqlResultsPG =\nPQexec(connection, aSqlStatement);\nif((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || (PQstatus(connection) != CONNECTION_OK)){}\nif ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\n    {\n        myNumColumns = PQnfields(mySqlResultsPG);\n        myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\n        myCurrentRowNum = 0 ;\n    }\n \nRegards\nTarkeshwar", "msg_date": "Fri, 18 Oct 2019 03:47:49 +0000", "msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>", "msg_from_op": true, "msg_subject": "Can you please tell us how set this prefetch attribute in following\n lines." }, { "msg_contents": "On Fri, Oct 18, 2019 at 03:47:49AM +0000, M Tarkeshwar Rao wrote:\n> How to fetch certain number of tuples from a postgres table.\n> \n> Same I am doing in oracle using following lines by setting prefetch attribute.\n> \n> For oracle\n> // Prepare query\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement,\n> // Get statement type\n> OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\n> // Set prefetch count\n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError );\n> // Execute query\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, iters, 0, NULL, NULL, OCI_DEFAULT );\n> \n> For Postgres\n> Can you please tell us how set this prefetch attribute in following lines. Is PQexec returns all the rows from the table?\n\nYes, PQexec reads everything at once into a buffer on the library.\nhttps://www.postgresql.org/docs/current/libpq-exec.html\n\nI think you want this:\nhttps://www.postgresql.org/docs/current/libpq-async.html\n|Another frequently-desired feature that can be obtained with PQsendQuery and PQgetResult is retrieving large query results a row at a time. 
This is discussed in Section 33.5.\nhttps://www.postgresql.org/docs/current/libpq-single-row-mode.html\n\nNote this does not naively send \"get one row\" requests to the server on each\ncall. Rather, I believe it behaves at a protocol layer exactly the same as\nPQexec(), but each library call returns only a single row. When it runs out of\nrows, it requests from the server another packet full of rows, which are saved\nfor future iterations.\n\nThe effect is constant memory use for arbitrarily large result set with same\nnumber of network roundtrips as PQexec(). You'd do something like:\n\nPQsendQuery(conn)\nPQsetSingleRowMode(conn)\nwhile(res = PQgetResult(conn)) {\n\t...\n\tPQclear(res)\n}\n\nJustin\n\n\n", "msg_date": "Fri, 18 Oct 2019 11:15:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Can you please tell us how set this prefetch attribute in\n following lines." }, { "msg_contents": "Thanks Thompson. Your inputs are very valuable and we successfully implemented it and results are very good. \r\n\r\nBut I am getting following error message. 
Can you please suggest why this is coming and what is the remedy for this.\r\n\r\nError Details\r\n-----------------\r\nFailed to execute the sql command close: \r\nmycursor_4047439616_1571970686004430275FATAL: terminating connection due to conflict with recovery\r\nDETAIL: User query might have needed to see row versions that must be removed.\r\nHINT: In a moment you should be able to reconnect to the database and repeat your command.\r\n\r\nRegards\r\nTarkeshwar\r\n\r\n-----Original Message-----\r\nFrom: Reid Thompson <Reid.Thompson@omnicell.com> \r\nSent: Thursday, October 17, 2019 9:49 PM\r\nTo: pgsql-general@lists.postgresql.org\r\nCc: Reid Thompson <Reid.Thompson@omnicell.com>\r\nSubject: Re: Can you please tell us how set this prefetch attribute in following lines.\r\n\r\nOn Thu, 2019-10-17 at 11:16 +0000, M Tarkeshwar Rao wrote:\r\n> [EXTERNAL SOURCE]\r\n> \r\n> \r\n> \r\n> Hi all,\r\n> \r\n> How to fetch certain number of tuples from a postgres table.\r\n> \r\n> Same I am doing in oracle using following lines by setting prefetch attribute.\r\n> \r\n> For oracle\r\n> // Prepare query\r\n> if( OCIStmtPrepare( myOciStatement, myOciError, (text \r\n> *)aSqlStatement, // Get statement type OCIAttrGet( (void \r\n> *)myOciStatement, OCI_HTYPE_STMT, &statement_type, 0, OCI_ATTR_STMT_TYPE, myOciError );\r\n> // Set prefetch count \r\n> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &prefetch, 0, OCI_ATTR_PREFETCH_ROWS, myOciError ); \r\n> // Execute query\r\n> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError, \r\n> iters, 0, NULL, NULL, OCI_DEFAULT );\r\n> \r\n> \r\n> For Postgres\r\n> \r\n> Can you please tell us how set this prefetch attribute in following lines. 
Is PQexec returns all the rows from the table?\r\n> \r\n> mySqlResultsPG = PQexec(connection, aSqlStatement);\r\n> if((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || \r\n> (PQstatus(connection) != CONNECTION_OK)){} if ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))\r\n> {\r\n> myNumColumns = PQnfields(mySqlResultsPG);\r\n> myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);\r\n> myCurrentRowNum = 0 ;\r\n> }\r\n> \r\n> \r\n> Regards\r\n> Tarkeshwar\r\n> \r\n\r\ndeclare a cursor and fetch\r\n\r\nhttps://protect2.fireeye.com/v1/url?k=d75a6ab6-8b8e60bf-d75a2a2d-86740465fc08-fa8f74c15b35a3fd&q=1&e=7b7df498-f187-408a-a07c-07b1c5f6f868&u=https%3A%2F%2Fbooks.google.com%2Fbooks%3Fid%3DNc5ZT2X5mOcC%26pg%3DPA405%26lpg%3DPA405%26dq%3Dpqexec%2Bfetch%26source%3Dbl%26ots%3D8P8w5JemcL%26sig%3DACfU3U0POGGSP0tYTrs5oxykJdOeffaspA%26hl%3Den%26sa%3DX%26ved%3D2ahUKEwjevbmA2KPlAhXukOAKHaBIBcoQ6AEwCnoECDEQAQ%23v%3Donepage%26q%3Dpqexec%2520fetch%26f%3Dfalse\r\n\r\n\r\n", "msg_date": "Wed, 30 Oct 2019 16:47:27 +0000", "msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>", "msg_from_op": true, "msg_subject": "RE: Can you please tell us how set this prefetch attribute in\n following lines." } ]
[ { "msg_contents": "I'm not sure if this can be considered a bug or not, but it is perhaps\nunexpected. I found that when using a view that is simply select * from\ntable, then doing INSERT ... ON CONFLICT ON CONSTRAINT constraint_name on\nthat view, it does not find the constraint and errors out. But it does\nfind the constraint if one lists the columns instead.\n\nI did not find any mention of this specifically in the docs, or any\ndiscussion on this topic after a brief search, and I have already asked my\nstakeholder to change to using the column list as better practice anyway.\nBut in any case, I wanted to know if this is a known issue or not.\n\nThanks!\nJeremy\n\nI'm not sure if this can be considered a bug or not, but it is perhaps unexpected.  I found that when using a view that is simply select * from table, then doing INSERT ... ON CONFLICT ON CONSTRAINT constraint_name on that view, it does not find the constraint and errors out.  But it does find the constraint if one lists the columns instead.I did not find any mention of this specifically in the docs, or any discussion on this topic after a brief search, and I have already asked my stakeholder to change to using the column list as better practice anyway.  But in any case, I wanted to know if this is a known issue or not.Thanks!Jeremy", "msg_date": "Thu, 17 Oct 2019 12:49:38 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "UPSERT on view does not find constraint by name" }, { "msg_contents": "Jeremy Finzel <finzelj@gmail.com> writes:\n> I'm not sure if this can be considered a bug or not, but it is perhaps\n> unexpected. I found that when using a view that is simply select * from\n> table, then doing INSERT ... ON CONFLICT ON CONSTRAINT constraint_name on\n> that view, it does not find the constraint and errors out. But it does\n> find the constraint if one lists the columns instead.\n\nI'm confused by this report. 
The view wouldn't have any constraints,\nand experimenting shows that the parser won't let you name a\nconstraint of the underlying table here. So would you provide a\nconcrete example of what you're talking about?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Oct 2019 10:42:25 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UPSERT on view does not find constraint by name" }, { "msg_contents": "On Fri, Oct 18, 2019 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeremy Finzel <finzelj@gmail.com> writes:\n> > I'm not sure if this can be considered a bug or not, but it is perhaps\n> > unexpected. I found that when using a view that is simply select * from\n> > table, then doing INSERT ... ON CONFLICT ON CONSTRAINT constraint_name on\n> > that view, it does not find the constraint and errors out. But it does\n> > find the constraint if one lists the columns instead.\n>\n> I'm confused by this report. The view wouldn't have any constraints,\n> and experimenting shows that the parser won't let you name a\n> constraint of the underlying table here. So would you provide a\n> concrete example of what you're talking about?\n>\n> regards, tom lane\n>\n\nApologies for the lack of clarity. 
Here is a simple example of what I mean:\n\ntest=# CREATE TEMP TABLE foo (id int primary key);\nCREATE TABLE\ntest=# \\d foo\n Table \"pg_temp_4.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n id | integer | | not null |\nIndexes:\n \"foo_pkey\" PRIMARY KEY, btree (id)\n\ntest=# CREATE VIEW bar AS SELECT * FROM foo;\nNOTICE: view \"bar\" will be a temporary view\nCREATE VIEW\ntest=# INSERT INTO foo (id)\ntest-# VALUES (1)\ntest-# ON CONFLICT ON CONSTRAINT foo_pkey\ntest-# DO NOTHING;\nINSERT 0 1\ntest=# INSERT INTO foo (id)\nVALUES (1)\nON CONFLICT ON CONSTRAINT foo_pkey\nDO NOTHING;\nINSERT 0 0\ntest=# INSERT INTO foo (id)\nVALUES (1)\nON CONFLICT ON CONSTRAINT foo_pkey\nDO NOTHING;\nINSERT 0 0\ntest=# INSERT INTO bar (id)\nVALUES (1)\nON CONFLICT ON CONSTRAINT foo_pkey\nDO NOTHING;\nERROR: constraint \"foo_pkey\" for table \"bar\" does not exist\ntest=# INSERT INTO bar (id)\nVALUES (1)\nON CONFLICT (id)\nDO NOTHING;\nINSERT 0 0\n\n\n\nOf interest are the last 2 statements above. ON CONFLICT on the constraint\nname does not work, but it does work by field name. I'm not saying it\n*should* work both ways, but I'm more wondering if this is\nknown/expected/desired behavior.\n\nThe point of interest for us is that we frequently preserve a table's\n\"public API\" by instead swapping out a table for a view as above, in order\nfor instance to rebuild a table behind the scenes without breaking table\nusage. Above case is a rare example where that doesn't work, and which in\nany case I advise (as does the docs) that they do not use on conflict on\nconstraint, but rather to list the field names instead.\n\nThanks,\nJeremy\n\nOn Fri, Oct 18, 2019 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Jeremy Finzel <finzelj@gmail.com> writes:\n> I'm not sure if this can be considered a bug or not, but it is perhaps\n> unexpected.  I found that when using a view that is simply select * from\n> table, then doing INSERT ... 
ON CONFLICT ON CONSTRAINT constraint_name on\n> that view, it does not find the constraint and errors out.  But it does\n> find the constraint if one lists the columns instead.\n\nI'm confused by this report.  The view wouldn't have any constraints,\nand experimenting shows that the parser won't let you name a\nconstraint of the underlying table here.  So would you provide a\nconcrete example of what you're talking about?\n\n                        regards, tom laneApologies for the lack of clarity.  Here is a simple example of what I mean:test=# CREATE TEMP TABLE foo (id int primary key);CREATE TABLEtest=# \\d foo               Table \"pg_temp_4.foo\" Column |  Type   | Collation | Nullable | Default--------+---------+-----------+----------+--------- id     | integer |           | not null |Indexes:    \"foo_pkey\" PRIMARY KEY, btree (id)test=# CREATE VIEW bar AS SELECT * FROM foo;NOTICE:  view \"bar\" will be a temporary viewCREATE VIEWtest=# INSERT INTO foo (id)test-# VALUES (1)test-# ON CONFLICT ON CONSTRAINT foo_pkeytest-# DO NOTHING;INSERT 0 1test=# INSERT INTO foo (id)VALUES (1)ON CONFLICT ON CONSTRAINT foo_pkeyDO NOTHING;INSERT 0 0test=# INSERT INTO foo (id)VALUES (1)ON CONFLICT ON CONSTRAINT foo_pkeyDO NOTHING;INSERT 0 0test=# INSERT INTO bar (id)VALUES (1)ON CONFLICT ON CONSTRAINT foo_pkeyDO NOTHING;ERROR:  constraint \"foo_pkey\" for table \"bar\" does not existtest=# INSERT INTO bar (id)VALUES (1)ON CONFLICT (id)DO NOTHING;INSERT 0 0Of interest are the last 2 statements above.  ON CONFLICT on the constraint name does not work, but it does work by field name.  I'm not saying it *should* work both ways, but I'm more wondering if this is known/expected/desired behavior.The point of interest for us is that we frequently preserve a table's \"public API\" by instead swapping out a table for a view as above, in order for instance to rebuild a table behind the scenes without breaking table usage.  
Above case is a rare example where that doesn't work, and which in any case I advise (as does the docs) that they do not use on conflict on constraint, but rather to list the field names instead.Thanks,Jeremy", "msg_date": "Fri, 18 Oct 2019 07:59:04 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "Re: UPSERT on view does not find constraint by name" }, { "msg_contents": "Jeremy Finzel <finzelj@gmail.com> writes:\n> test=# CREATE TEMP TABLE foo (id int primary key);\n> CREATE TABLE\n> test=# CREATE VIEW bar AS SELECT * FROM foo;\n> NOTICE: view \"bar\" will be a temporary view\n> CREATE VIEW\n> ...\n> test=# INSERT INTO bar (id)\n> VALUES (1)\n> ON CONFLICT ON CONSTRAINT foo_pkey\n> DO NOTHING;\n> ERROR: constraint \"foo_pkey\" for table \"bar\" does not exist\n> test=# INSERT INTO bar (id)\n> VALUES (1)\n> ON CONFLICT (id)\n> DO NOTHING;\n> INSERT 0 0\n\n> Of interest are the last 2 statements above. ON CONFLICT on the constraint\n> name does not work, but it does work by field name. I'm not saying it\n> *should* work both ways, but I'm more wondering if this is\n> known/expected/desired behavior.\n\nThe first case looks perfectly normal to me: there is no \"foo_pkey\"\nconstraint associated with the \"bar\" view. It is interesting that\nthe second case drills down to find there's an underlying constraint,\nbut that seems like a bit of a hack :-(.\n\nPoking at it a little more closely, it seems like the first case\ninvolves a parse-time constraint lookup, while the second case\npostpones the lookup to plan time, and so the second case works\nbecause the view has already been expanded into a direct reference\nto the underlying table. Maybe it wasn't good to do those cases\ndifferently. 
I can't get too excited about it though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Oct 2019 15:22:00 +0200", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UPSERT on view does not find constraint by name" } ]
[ { "msg_contents": "Hello,\n\nJehan-Guillaume (in Cc) reported me today a problem with logical\nreplication, where in case of network issue the walsender is correctly\nterminating at the given wal_sender_timeout but the logical worker\nkept waiting indefinitely.\n\nThe issue is apparently a simple thinko, the timestamp of the last\nreceived activity being unconditionally set at the beginning of the\nmain processing loop, making any reasonable timeout setting\nineffective. Trivial patch to fix the problem attached.", "msg_date": "Thu, 17 Oct 2019 20:00:15 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Non working timeout detection in logical worker" }, { "msg_contents": "On Thu, Oct 17, 2019 at 08:00:15PM +0200, Julien Rouhaud wrote:\n> Jehan-Guillaume (in Cc) reported me today a problem with logical\n> replication, where in case of network issue the walsender is correctly\n> terminating at the given wal_sender_timeout but the logical worker\n> kept waiting indefinitely.\n> \n> The issue is apparently a simple thinko, the timestamp of the last\n> received activity being unconditionally set at the beginning of the\n> main processing loop, making any reasonable timeout setting\n> ineffective. Trivial patch to fix the problem attached.\n\nRight, good catch. That's indeed incorrect. The current code would\njust keep resetting the timeout if walrcv_receive() returns 0 roughly\nonce per NAPTIME_PER_CYCLE. 
The ping sent to the server once reaching\nhalf of wal_receiver_timeout was also broken because of that.\n\nIn short, applied and back-patched down to 10.\n--\nMichael", "msg_date": "Fri, 18 Oct 2019 14:32:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Non working timeout detection in logical worker" }, { "msg_contents": "On Fri, Oct 18, 2019 at 7:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 17, 2019 at 08:00:15PM +0200, Julien Rouhaud wrote:\n> > Jehan-Guillaume (in Cc) reported me today a problem with logical\n> > replication, where in case of network issue the walsender is correctly\n> > terminating at the given wal_sender_timeout but the logical worker\n> > kept waiting indefinitely.\n> >\n> > The issue is apparently a simple thinko, the timestamp of the last\n> > received activity being unconditionally set at the beginning of the\n> > main processing loop, making any reasonable timeout setting\n> > ineffective. Trivial patch to fix the problem attached.\n>\n> Right, good catch. That's indeed incorrect. The current code would\n> just keep resetting the timeout if walrcv_receive() returns 0 roughly\n> once per NAPTIME_PER_CYCLE. 
The ping sent to the server once reaching\n> half of wal_receiver_timeout was also broken because of that.\n>\n> In short, applied and back-patched down to 10.\n\nThanks Michael!\n\n\n", "msg_date": "Fri, 18 Oct 2019 07:47:13 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Non working timeout detection in logical worker" }, { "msg_contents": "On Fri, 18 Oct 2019 07:47:13 +0200\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Fri, Oct 18, 2019 at 7:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Oct 17, 2019 at 08:00:15PM +0200, Julien Rouhaud wrote: \n> > > Jehan-Guillaume (in Cc) reported me today a problem with logical\n> > > replication, where in case of network issue the walsender is correctly\n> > > terminating at the given wal_sender_timeout but the logical worker\n> > > kept waiting indefinitely.\n> > >\n> > > The issue is apparently a simple thinko, the timestamp of the last\n> > > received activity being unconditionally set at the beginning of the\n> > > main processing loop, making any reasonable timeout setting\n> > > ineffective. Trivial patch to fix the problem attached. \n> >\n> > Right, good catch. That's indeed incorrect. The current code would\n> > just keep resetting the timeout if walrcv_receive() returns 0 roughly\n> > once per NAPTIME_PER_CYCLE. The ping sent to the server once reaching\n> > half of wal_receiver_timeout was also broken because of that.\n> >\n> > In short, applied and back-patched down to 10. \n> \n> Thanks Michael!\n\nThank you both guys!\n\n\n", "msg_date": "Fri, 18 Oct 2019 14:47:13 +0200", "msg_from": "\"Jehan-Guillaume (ioguix) de Rorthais\" <ioguix@free.fr>", "msg_from_op": false, "msg_subject": "Re: Non working timeout detection in logical worker" } ]
[ { "msg_contents": "Greetings,\n\nlibpq since PostgreSQL-12 has stricter checks for integer values in\nconnection parameters. They were introduced by commit\nhttps://github.com/postgres/postgres/commit/e7a2217978d9cbb2149bfcb4ef1e45716cfcbefb\n.\n\nHowever in case of \"connect_timeout\" such an invalid integer value leads\nto a connection status other than CONNECTION_OK or CONNECTION_BAD. The\nwrong parameter is therefore not properly reported to user space. This\npatch fixes this by explicit setting CONNECTION_BAD.\n\nThe issue was raised on ruby-pg: https://github.com/ged/ruby-pg/issues/302\n\nIt originally came up at Heroku:\nhttps://github.com/heroku/stack-images/issues/147\n\n-- \n\nKind Regards,\nLars Kanis", "msg_date": "Thu, 17 Oct 2019 20:04:19 +0200", "msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>", "msg_from_op": true, "msg_subject": "libpq: Fix wrong connection status on invalid \"connect_timeout\"" }, { "msg_contents": "I verified that all other integer parameters properly set CONNECTION_BAD\nin case of invalid values. These are:\n\n* port\n* keepalives_idle\n* keepalives_interval\n* keepalives_count\n* tcp_user_timeout\n\nThat's why I changed connectDBComplete() only, instead of setting the\nstatus directly in parse_int_param().\n\n--\n\nKind Regards,\nLars Kanis\n\n\nAm 17.10.19 um 20:04 schrieb Lars Kanis:\n> Greetings,\n>\n> libpq since PostgreSQL-12 has stricter checks for integer values in\n> connection parameters. They were introduced by commit\n> https://github.com/postgres/postgres/commit/e7a2217978d9cbb2149bfcb4ef1e45716cfcbefb\n> .\n>\n> However in case of \"connect_timeout\" such an invalid integer value leads\n> to a connection status other than CONNECTION_OK or CONNECTION_BAD. The\n> wrong parameter is therefore not properly reported to user space. 
This\n> patch fixes this by explicit setting CONNECTION_BAD.\n>\n> The issue was raised on ruby-pg: https://github.com/ged/ruby-pg/issues/302\n>\n> It originally came up at Heroku:\n> https://github.com/heroku/stack-images/issues/147\n>\n-- \n--\nKind Regards,\nLars Kanis\n\n\n\n\n", "msg_date": "Thu, 17 Oct 2019 22:10:17 +0200", "msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>", "msg_from_op": true, "msg_subject": "Re: libpq: Fix wrong connection status on invalid \"connect_timeout\"" }, { "msg_contents": "On Thu, Oct 17, 2019 at 10:10:17PM +0200, Lars Kanis wrote:\n> That's why I changed connectDBComplete() only, instead of setting the\n> status directly in parse_int_param().\n\nYes, you shouldn't do that as the keepalive parameters and\ntcp_user_timeout have some specific handling when it comes to defaults\ndepending on the platform and we have some retry logic when specifying\nmultiple hosts.\n\nNow, there is actually more to it than it looks at first glance. Your\npatch is pointing out at a failure within the regression tests of the\nECPG driver, as any parameters part of a connection string may have\ntrailing spaces which are considered as incorrect by the patch,\ncausing the connection to fail.\n\nIn short, on HEAD this succeeds but would fail with your patch:\n$ psql 'postgresql:///postgres?host=/tmp&connect_timeout=14 &port=5432'\npsql: error: could not connect to server: invalid integer value \"14 \"\nfor connection option \"connect_timeout\"\n\nParameter names are more restrictive, as URLs don't allow leading or\ntrailing spaces for them. On HEAD, we allow leading spaces for\ninteger parameters as the parsing uses strtol(3), but not for the\ntrailing spaces, which is a bit crazy and I think that we had better\nnot break that if the parameter value correctly defines a proper\ninteger. So attached is a patch to skip trailing whitespaces as well,\nwhich also fixes the issue with ECPG. I have refactored the parsing\nlogic a bit while on it. 
The comment at the top of parse_int_param()\nneeds to be reworked a bit more.\n\nWe could add some TAP tests for that, but I don't see a good area to\ncheck after connection parameters. We have tests for multi-host\nstrings in 001_stream_rep.pl but that already feels misplaced as those\ntests are for recovery. Perhaps we could add directly regression\ntests for libpq. I'll start a new thread about that once we are done\nhere, the topic is larger.\n\n(Note to self: Ed Morley needs to be credited for the report as well.)\n--\nMichael", "msg_date": "Fri, 18 Oct 2019 12:06:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix wrong connection status on invalid \"connect_timeout\"" }, { "msg_contents": "Am 18.10.19 um 05:06 schrieb Michael Paquier:\n\n> So attached is a patch to skip trailing whitespaces as well,\n> which also fixes the issue with ECPG. I have refactored the parsing\n> logic a bit while on it. The comment at the top of parse_int_param()\n> needs to be reworked a bit more.\n\nI tested this and it looks good to me. Maybe you could omit some\nredundant 'end' checks, as in the attached patch. Or was your intention\nto verify non-NULL 'end'?\n\n\n> Perhaps we could add directly regression\n> tests for libpq. I'll start a new thread about that once we are done\n> here, the topic is larger.\n\nWe have around 650 tests on ruby-pg to ensure everything runs as\nexpected and I always wondered how the API of libpq is being verified.\n\n\n--\nKind Regards,\nLars Kanis", "msg_date": "Fri, 18 Oct 2019 14:01:23 +0200", "msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>", "msg_from_op": true, "msg_subject": "Re: libpq: Fix wrong connection status on invalid \"connect_timeout\"" }, { "msg_contents": "On Fri, Oct 18, 2019 at 02:01:23PM +0200, Lars Kanis wrote:\n> Am 18.10.19 um 05:06 schrieb Michael Paquier:\n>> So attached is a patch to skip trailing whitespaces as well,\n>> which also fixes the issue with ECPG. I have refactored the parsing\n>> logic a bit while on it. The comment at the top of parse_int_param()\n>> needs to be reworked a bit more.\n> \n> I tested this and it looks good to me. Maybe you could omit some\n> redundant 'end' checks, as in the attached patch. Or was your intention\n> to verify non-NULL 'end'?\n\nYes. Here are the connection patterns I have tested. These now pass:\n'postgresql:///postgres?host=/tmp&port=5432 &user=postgres'\n'postgresql:///postgres?host=/tmp&port= 5432&user=postgres'\nAnd these fail (overflow on third one):\n'postgresql:///postgres?host=/tmp&port=5432 s &user=postgres'\n'postgresql:///postgres?host=/tmp&port= s 5432&user=postgres'\n'postgresql:///postgres?host=/tmp&port= 5000000000&user=postgres'\n\nBefore the patch any trailing characters caused a failures even if\nthere were just whitespaces as trailing characters (first case\nlisted).\n\n>> Perhaps we could add directly regression\n>> tests for libpq. I'll start a new thread about that once we are done\n>> here, the topic is larger.\n> \n> We have around 650 tests on ruby-pg to ensure everything runs as\n> expected and I always wondered how the API of libpq is being verified.\n\nFor advanced test scenarios like connection handling, we use perl's\nTAP tests. 
The situation regarding libpq-related testing is a bit\nmessy though. We have some tests in src/test/recovery/ for a couple\nof things, and we should have more things to stress anything related\nto the protocol (say message injection, etc.).\n\nI'll try to start a new thread about that with a patch adding some\nbasics for discussion.\n\nI have applied the parsing fix and your fix as two separate commits as\nthese are at the end two separate bugs, then back-patched down to v12.\nEd has been credited for the report, and I have marked the author as\nyou, Lars. Thanks!\n--\nMichael", "msg_date": "Mon, 21 Oct 2019 11:40:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: libpq: Fix wrong connection status on invalid \"connect_timeout\"" } ]
[ { "msg_contents": "While reviewing the partitionwise-join patch, I noticed $Subject,ie,\nthis in create_list_bounds():\n\n /*\n * Never put a null into the values array, flag instead for\n * the code further down below where we construct the actual\n * relcache struct.\n */\n if (null_index != -1)\n elog(ERROR, \"found null more than once\");\n null_index = i;\n\n\"the code further down below where we construct the actual relcache\nstruct\" isn't in the same file anymore by refactoring by commit\nb52b7dc25. How about modifying it like the attached?\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 18 Oct 2019 16:25:03 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Obsolete comment in partbounds.c" }, { "msg_contents": "On 2019-Oct-18, Etsuro Fujita wrote:\n\n> While reviewing the partitionwise-join patch, I noticed $Subject,ie,\n> this in create_list_bounds():\n> \n> /*\n> * Never put a null into the values array, flag instead for\n> * the code further down below where we construct the actual\n> * relcache struct.\n> */\n> if (null_index != -1)\n> elog(ERROR, \"found null more than once\");\n> null_index = i;\n> \n> \"the code further down below where we construct the actual relcache\n> struct\" isn't in the same file anymore by refactoring by commit\n> b52b7dc25. How about modifying it like the attached?\n\nYeah, agreed. 
Instead of \"the null comes from\" I would use \"the\npartition that stores nulls\".\n\nWhile reviewing your patch I noticed a few places where we use an odd\npattern in switches, which can be simplified as shown here.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 18 Oct 2019 06:56:36 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in partbounds.c" }, { "msg_contents": "Hi Alvaro,\n\nOn Fri, Oct 18, 2019 at 6:56 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Oct-18, Etsuro Fujita wrote:\n> > While reviewing the partitionwise-join patch, I noticed $Subject,ie,\n> > this in create_list_bounds():\n> >\n> > /*\n> > * Never put a null into the values array, flag instead for\n> > * the code further down below where we construct the actual\n> > * relcache struct.\n> > */\n> > if (null_index != -1)\n> > elog(ERROR, \"found null more than once\");\n> > null_index = i;\n> >\n> > \"the code further down below where we construct the actual relcache\n> > struct\" isn't in the same file anymore by refactoring by commit\n> > b52b7dc25. How about modifying it like the attached?\n>\n> Yeah, agreed. Instead of \"the null comes from\" I would use \"the\n> partition that stores nulls\".\n\nI think your wording is better than mine. 
Thank you for reviewing!\n\n> While reviewing your patch I noticed a few places where we use an odd\n> pattern in switches, which can be simplified as shown here.\n\n case PARTITION_STRATEGY_LIST:\n- num_indexes = bound->ndatums;\n+ return bound->ndatums;\n break;\n\nWhy not remove the break statement?\n\nOther than that the patch looks good to me.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 19 Oct 2019 17:56:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Obsolete comment in partbounds.c" }, { "msg_contents": "On Sat, Oct 19, 2019 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Oct 18, 2019 at 6:56 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > On 2019-Oct-18, Etsuro Fujita wrote:\n> > > While reviewing the partitionwise-join patch, I noticed $Subject,ie,\n> > > this in create_list_bounds():\n> > >\n> > > /*\n> > > * Never put a null into the values array, flag instead for\n> > > * the code further down below where we construct the actual\n> > > * relcache struct.\n> > > */\n> > > if (null_index != -1)\n> > > elog(ERROR, \"found null more than once\");\n> > > null_index = i;\n> > >\n> > > \"the code further down below where we construct the actual relcache\n> > > struct\" isn't in the same file anymore by refactoring by commit\n> > > b52b7dc25. How about modifying it like the attached?\n> >\n> > Yeah, agreed. Instead of \"the null comes from\" I would use \"the\n> > partition that stores nulls\".\n>\n> I think your wording is better than mine. 
Thank you for reviewing!\n\nI applied the patch down to PG12.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 21 Oct 2019 17:44:25 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Obsolete comment in partbounds.c" }, { "msg_contents": "On Mon, Oct 21, 2019 at 5:44 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Sat, Oct 19, 2019 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, Oct 18, 2019 at 6:56 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > On 2019-Oct-18, Etsuro Fujita wrote:\n> > > > While reviewing the partitionwise-join patch, I noticed $Subject,ie,\n> > > > this in create_list_bounds():\n> > > >\n> > > > /*\n> > > > * Never put a null into the values array, flag instead for\n> > > > * the code further down below where we construct the actual\n> > > > * relcache struct.\n> > > > */\n> > > > if (null_index != -1)\n> > > > elog(ERROR, \"found null more than once\");\n> > > > null_index = i;\n> > > >\n> > > > \"the code further down below where we construct the actual relcache\n> > > > struct\" isn't in the same file anymore by refactoring by commit\n> > > > b52b7dc25. How about modifying it like the attached?\n> > >\n> > > Yeah, agreed. Instead of \"the null comes from\" I would use \"the\n> > > partition that stores nulls\".\n> >\n> > I think your wording is better than mine. Thank you for reviewing!\n>\n> I applied the patch down to PG12.\n\nThank you Fujita-san and Alvaro.\n\nRegards,\nAmit\n\n\n", "msg_date": "Wed, 23 Oct 2019 12:08:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in partbounds.c" }, { "msg_contents": "On 2019-Oct-19, Etsuro Fujita wrote:\n\n> On Fri, Oct 18, 2019 at 6:56 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > Yeah, agreed. 
Instead of \"the null comes from\" I would use \"the\n> > partition that stores nulls\".\n> \n> I think your wording is better than mine. Thank you for reviewing!\n\nThanks for getting this done.\n\n> > While reviewing your patch I noticed a few places where we use an odd\n> > pattern in switches, which can be simplified as shown here.\n> \n> case PARTITION_STRATEGY_LIST:\n> - num_indexes = bound->ndatums;\n> + return bound->ndatums;\n> break;\n> \n> Why not remove the break statement?\n\nYou're right, I should have done that. However, I backed out of doing\nthis change after all; it seems a pretty minor stylistic adjustment of\nlittle value.\n\nThanks for reviewing all the same,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 4 Nov 2019 13:58:20 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in partbounds.c" } ]
[ { "msg_contents": "Attached is a patch to implement change badges in our documentation.\n\nWhat's a change badge? It's my term for a visual cue in the documentation\nused to indicate that the nearby section of documentation is new in this\nversion or otherwise changed from the previous version.\n\nOne example of change badges being used is in the DocBook documentation\nreference:\nhttps://tdg.docbook.org/tdg/4.5/ref-elements.html#common.attributes\n\nDocbook used graphical badges, which seemed to be a bad idea. Instead, I\nwent with a decorated text span like one finds in gmail labels or Reddit\n\"flair\".\n\nThe badges are implemented via using the \"revision\" attribute available on\nall docbook tags. All one needs to do to indicate a change is to change one\ntag, and add a revision attribute. For example:\n\n<varlistentry revision=\"new in 13\">\n\nwill add a small green text box with the tex \"new in 13\" immediately\npreceding the rendered elements. I have attached a screenshot\n(badges_in_acronyms.png) of an example of this from my browser viewing\nchanges to the acronyms.html file. This obviously lacks the polish of\nviewing the page on a full website, but it does give you an idea of the\nflexibility of the change badge, and where badge placement is (and is not)\na good idea.\n\nWhat are the benefits of using this?\n\nI think the benefits are as follows:\n\n1. It shows a casual user what pieces are new on that page (new functions,\nnew keywords, new command options, etc).\n\n2. It also works in the negative: a user can quickly skim a page, and\nlacking any badges, feel confident that everything there works in the way\nthat it did in version N-1.\n\n3. It also acts as a subtle cue for the user to click on the previous\nversion to see what it used to look like, confident that there *will* be a\ndifference on the previous version.\n\n\nHow would we implement this?\n\n1. All new documentation pages would get a \"NEW\" badge in their title.\n\n2. 
New function definitions, new command options, etc would get a \"NEW\"\nbadge as visually close to the change as is practical.\n\n3. Changes to existing functions, options, etc. would get a badge of\n\"UPDATED\"\n\n4. At major release time, we could do one of two things:\n\n4a. We could keep the NEW/UPDATED badges in the fixed release version, and\nthen completely remove them from the master, because for version N+1, they\nwon't be new anymore. This can be accomplished with an XSL transform\nlooking for any tag with the \"revision\" attribute\n\n4b. We could code in the version number at release time, and leave it in\nplace. So in version 14 you could find both \"v13\" and \"v14\" badges, and in\nversion 15 you could find badges for 15, 14, and 13. At some point (say\nv17), we start retiring the v13 badges, and in v18 we'd retire the v14\nbadges, and so on, to keep the clutter to a minimum.\n\nBack to the patch:\nI implemented this only for html output, and the colors I chose are very\noff-brand for postgres, so that will have to change. There's probably some\nspacing/padding issues I haven't thought of. Please try it out, make some\nmodifications to existing document pages to see how badges would work in\nthose contexts.", "msg_date": "Fri, 18 Oct 2019 07:54:18 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "Add Change Badges to documentation" }, { "msg_contents": "On Fri, Oct 18, 2019 at 07:54:18AM -0400, Corey Huinker wrote:\n>Attached is a patch to implement change badges in our documentation.\n>\n>What's a change badge? 
It's my term for a visual cue in the documentation\n>used to indicate that the nearby section of documentation is new in this\n>version or otherwise changed from the previous version.\n>\n>One example of change badges being used is in the DocBook documentation\n>reference:\n>https://tdg.docbook.org/tdg/4.5/ref-elements.html#common.attributes\n>\n>Docbook used graphical badges, which seemed to be a bad idea. Instead, I\n>went with a decorated text span like one finds in gmail labels or Reddit\n>\"flair\".\n>\n\nLooks useful. I sometimes need to look at a command in version X and see\nwhat changed since version Y. Currently I do that by opening both pages\nand visually comparing them, so those badges make it easier.\n\n>The badges are implemented via using the \"revision\" attribute available on\n>all docbook tags. All one needs to do to indicate a change is to change one\n>tag, and add a revision attribute. For example:\n>\n><varlistentry revision=\"new in 13\">\n>\n>will add a small green text box with the tex \"new in 13\" immediately\n>preceding the rendered elements. I have attached a screenshot\n>(badges_in_acronyms.png) of an example of this from my browser viewing\n>changes to the acronyms.html file. This obviously lacks the polish of\n>viewing the page on a full website, but it does give you an idea of the\n>flexibility of the change badge, and where badge placement is (and is not)\n>a good idea.\n>\n>What are the benefits of using this?\n>\n>I think the benefits are as follows:\n>\n>1. It shows a casual user what pieces are new on that page (new functions,\n>new keywords, new command options, etc).\n>\n\nYep.\n\n>2. It also works in the negative: a user can quickly skim a page, and\n>lacking any badges, feel confident that everything there works in the way\n>that it did in version N-1.\n>\n\nNot sure about this. 
It'd require marking all changes with the badge,\nbut we'll presumably mark only the large-ish changes, and it's unclear\nwhere the threshold is.\n\nIt also does not work when removing a block of text (e.g. when removing\nsome limitation), although it's true we often add a new para too.\n\n>3. It also acts as a subtle cue for the user to click on the previous\n>version to see what it used to look like, confident that there *will* be a\n>difference on the previous version.\n>\n>\n>How would we implement this?\n>\n>1. All new documentation pages would get a \"NEW\" badge in their title.\n>\n>2. New function definitions, new command options, etc would get a \"NEW\"\n>badge as visually close to the change as is practical.\n>\n>3. Changes to existing functions, options, etc. would get a badge of\n>\"UPDATED\"\n>\n>4. At major release time, we could do one of two things:\n>\n>4a. We could keep the NEW/UPDATED badges in the fixed release version, and\n>then completely remove them from the master, because for version N+1, they\n>won't be new anymore. This can be accomplished with an XSL transform\n>looking for any tag with the \"revision\" attribute\n>\n>4b. We could code in the version number at release time, and leave it in\n>place. So in version 14 you could find both \"v13\" and \"v14\" badges, and in\n>version 15 you could find badges for 15, 14, and 13. At some point (say\n>v17), we start retiring the v13 badges, and in v18 we'd retire the v14\n>badges, and so on, to keep the clutter to a minimum.\n>\n\nPresumably we could keep the SGML source and only decide which badges to\nignore during build of the docs. That would however require somewhat\nmore structured approach - now it's a single attribute with free text,\nwhich does not really allow easy filtering. With separate attributes for\nnew/removed bits, e.g.\n\n <para new_in_revision=\"11\">\n\nand\n\n <para removed_in_revision=\"13\">\n\nthe filtering would be much easier. 
But my experience with SGML is\nrather limited, so maybe I'm wrong.\n\n>Back to the patch:\n>I implemented this only for html output, and the colors I chose are very\n>off-brand for postgres, so that will have to change. There's probably some\n>spacing/padding issues I haven't thought of. Please try it out, make some\n>modifications to existing document pages to see how badges would work in\n>those contexts.\n\nHaven't looked yet, but I agree the colors might need a change - that's\na rather minor detail, though.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 18 Oct 2019 15:44:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add Change Badges to documentation" }, { "msg_contents": "\nHello Corey,\n\n> Attached is a patch to implement change badges in our documentation.\n\nMore precisely, it is a POC to show that the infra works. It adds 3 badges \non various entries.\n\nPatch applies cleanly, compiles, and indeed (too) green boxes show up. \nGood.\n\nMaybe it would be better with badges at the end of lines, otherwise it \ninterferes with the proper alignment of text. Also, ISTM that the shorter \nthe badge contents the better, so \"v13\" is better than \"new in version \n13\".\n\nMaybe it would be nice to have a on/off CSS/JS controled feature, so that \nthey can be hidden easily?\n\nI'm wondering about the maintainability of the feature if badges need to \nbe updated, but if this is only \"v13\" to say that a feature appears in \nv13, probably it is okay, there is no need to update.\n\nHowever, if a feature is changed, should we start accumulating badges?\n\nUpdating the documentation would be a great pain. Maybe it could be partly \nautomated.\n\n\n> 1. It shows a casual user what pieces are new on that page (new functions,\n> new keywords, new command options, etc).\n\nOk.\n\n> 2. 
It also works in the negative: a user can quickly skim a page, and\n> lacking any badges, feel confident that everything there works in the way\n> that it did in version N-1.\n\nPossibly. If the maintainer thought about it.\n\n> 3. It also acts as a subtle cue for the user to click on the previous\n> version to see what it used to look like, confident that there *will* be a\n> difference on the previous version.\n\nWhich suggests links to do that?\n\n> 1. All new documentation pages would get a \"NEW\" badge in their title.\n\nHmmm, I do not think that we want to add and remove NEW badges on every \nversion, that would be too troublesome. ISTM that maybe we can add \"v13\" \nand have some JS/CSS which says that it is new when looking at v13.\n\n> 2. New function definitions, new command options, etc would get a \"NEW\"\n> badge as visually close to the change as is practical.\n>\n> 3. Changes to existing functions, options, etc. would get a badge of\n> \"UPDATED\"\n\nIdem, maintainability? Unless this is automated.\n\n> 4. At major release time, we could do one of two things:\n>\n> 4a. We could keep the NEW/UPDATED badges in the fixed release version, and\n> then completely remove them from the master, because for version N+1, they\n> won't be new anymore. This can be accomplished with an XSL transform\n> looking for any tag with the \"revision\" attribute\n\nHmmm.\n\n> 4b. We could code in the version number at release time, and leave it in\n> place. So in version 14 you could find both \"v13\" and \"v14\" badges, and in\n> version 15 you could find badges for 15, 14, and 13. At some point (say\n> v17), we start retiring the v13 badges, and in v18 we'd retire the v14\n> badges, and so on, to keep the clutter to a minimum.\n\nHmmm.\n\n> Back to the patch:\n> I implemented this only for html output, and the colors I chose are very\n> off-brand for postgres, so that will have to change. There's probably some\n> spacing/padding issues I haven't thought of. 
Please try it out, make some\n> modifications to existing document pages to see how badges would work in\n> those contexts.\n>\n\n-- \nFabien Coelho - CRI, MINES ParisTech\n\n\n", "msg_date": "Thu, 7 Nov 2019 12:08:43 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Add Change Badges to documentation" }, { "msg_contents": "On Thu, Nov 07, 2019 at 12:08:43PM +0100, Fabien COELHO wrote:\n> More precisely, it is a POC to show that the infra works. It adds 3 badges\n> on various entries.\n\nIf the final patch could at least finish with one applied, that would\nbe nice as a base example. There are no objections for this patch,\nbut no updates have been provided, so I have switched the entry as\nreturned with feedback.\n--\nMichael", "msg_date": "Mon, 25 Nov 2019 16:58:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add Change Badges to documentation" } ]
[ { "msg_contents": "I've been struggling with how we're going to upgrade launchpad.net to\nPostgreSQL 12 or newer (we're currently on 10). We're one of those\napplications that deliberately uses CTEs as optimization fences in a few\ndifficult places. The provision of the MATERIALIZED keyword in 12 is\ngreat, but the fact that it doesn't exist in earlier versions is\nawkward. We certainly don't want to upgrade our application code at the\nsame time as upgrading the database, and dealing with performance\ndegradation between the database upgrade and the application upgrade\ndoesn't seem great either (not to mention that it would be hard to\ncoordinate). That leaves us with hacking our query compiler to add the\nMATERIALIZED keyword only if it's running on 12 or newer, which would be\npossible but is pretty cumbersome.\n\nHowever, an alternative would be to backport the new syntax to some\nearlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\nsynonymous with \"WITH ... AS\" in versions prior to 12; there's no need\nto support \"NOT MATERIALIZED\" since that's explicitly requesting the new\nquery-folding feature that only exists in 12. Would something like the\nattached patch against REL_11_STABLE be acceptable? I'd like to\nbackpatch it at least as far as PostgreSQL 10.\n\nThis compiles and passes regression tests.\n\nThanks,\n\n-- \nColin Watson [cjwatson@canonical.com]", "msg_date": "Fri, 18 Oct 2019 14:21:30 +0100", "msg_from": "Colin Watson <cjwatson@canonical.com>", "msg_from_op": true, "msg_subject": "Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n> However, an alternative would be to backport the new syntax to some\n> earlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\n> synonymous with \"WITH ... 
AS\" in versions prior to 12; there's no need\n> to support \"NOT MATERIALIZED\" since that's explicitly requesting the new\n> query-folding feature that only exists in 12. Would something like the\n> attached patch against REL_11_STABLE be acceptable? I'd like to\n> backpatch it at least as far as PostgreSQL 10.\n\nI am afraid that new features don't gain a backpatch. This is a\nproject policy. Back-branches should just include bug fixes.\n--\nMichael", "msg_date": "Sat, 19 Oct 2019 11:34:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": ">>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n\n > On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n >> However, an alternative would be to backport the new syntax to some\n >> earlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\n >> synonymous with \"WITH ... AS\" in versions prior to 12; there's no\n >> need to support \"NOT MATERIALIZED\" since that's explicitly\n >> requesting the new query-folding feature that only exists in 12.\n >> Would something like the attached patch against REL_11_STABLE be\n >> acceptable? I'd like to backpatch it at least as far as PostgreSQL\n >> 10.\n\n Michael> I am afraid that new features don't gain a backpatch. This is\n Michael> a project policy. Back-branches should just include bug fixes.\n\nI do think an argument can be made for making an exception in this\nparticular case. This wouldn't be backpatching a feature, just accepting\nand ignoring some of the new syntax to make upgrading easier.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Sat, 19 Oct 2019 05:01:04 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" 
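[Editor's note: for readers skimming the thread, this is the construct at issue. The table and column names below are invented for illustration; the version behaviour in the comments is as described in the messages above.]

```sql
-- PostgreSQL 12: MATERIALIZED pins the old "optimization fence"
-- behaviour, forcing the CTE to be evaluated once on its own.
-- On 11 and earlier the keyword is a syntax error -- the upgrade
-- problem this thread is about.  The proposed backport would simply
-- accept AS MATERIALIZED there and treat it like plain AS.
WITH heavy AS MATERIALIZED (
    SELECT account_id, sum(amount) AS total
    FROM ledger
    GROUP BY account_id
)
SELECT account_id, total
FROM heavy
WHERE total > 1000;
```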
}, { "msg_contents": "On Sat, Oct 19, 2019 at 05:01:04AM +0100, Andrew Gierth wrote:\n> >>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n> > On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n> >> However, an alternative would be to backport the new syntax to some\n> >> earlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\n> >> synonymous with \"WITH ... AS\" in versions prior to 12; there's no\n> >> need to support \"NOT MATERIALIZED\" since that's explicitly\n> >> requesting the new query-folding feature that only exists in 12.\n> >> Would something like the attached patch against REL_11_STABLE be\n> >> acceptable? I'd like to backpatch it at least as far as PostgreSQL\n> >> 10.\n> \n> Michael> I am afraid that new features don't gain a backpatch. This is\n> Michael> a project policy. Back-branches should just include bug fixes.\n> \n> I do think an argument can be made for making an exception in this\n> particular case. This wouldn't be backpatching a feature, just accepting\n> and ignoring some of the new syntax to make upgrading easier.\n\nRight, this is my position too. I'm explicitly not asking for\nbackpatching of the CTE-inlining feature, just trying to cope with the\nfact that we now have to spell some particular queries differently to\nretain the performance characteristics we need for them.\n\nI suppose an alternative would be to add a configuration option to 12\nthat allows disabling inlining of CTEs cluster-wide: we could then\nupgrade to 12 with inlining disabled, add MATERIALIZED to the relevant\nqueries, and then re-enable inlining. 
But I like that less because it\nwould end up leaving cruft around in PostgreSQL's configuration code\nsomewhat indefinitely for the sake of an edge case in upgrading to a\nparticular version.\n\n-- \nColin Watson [cjwatson@canonical.com]\n\n\n", "msg_date": "Sat, 19 Oct 2019 10:22:39 +0100", "msg_from": "Colin Watson <cjwatson@canonical.com>", "msg_from_op": true, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "Hi, \n\nOn October 19, 2019 6:01:04 AM GMT+02:00, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n>>>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n>\n> > On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n> >> However, an alternative would be to backport the new syntax to some\n> >> earlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\n> >> synonymous with \"WITH ... AS\" in versions prior to 12; there's no\n> >> need to support \"NOT MATERIALIZED\" since that's explicitly\n> >> requesting the new query-folding feature that only exists in 12.\n> >> Would something like the attached patch against REL_11_STABLE be\n> >> acceptable? I'd like to backpatch it at least as far as PostgreSQL\n> >> 10.\n>\n> Michael> I am afraid that new features don't gain a backpatch. This is\n>Michael> a project policy. Back-branches should just include bug fixes.\n>\n>I do think an argument can be made for making an exception in this\n>particular case. This wouldn't be backpatching a feature, just\n>accepting\n>and ignoring some of the new syntax to make upgrading easier.\n\n+1\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sat, 19 Oct 2019 11:56:56 +0200", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" 
}, { "msg_contents": "On Sat, Oct 19, 2019 at 11:56:56AM +0200, Andres Freund wrote:\n>Hi,\n>\n>On October 19, 2019 6:01:04 AM GMT+02:00, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:\n>>>>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n>>\n>> > On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n>> >> However, an alternative would be to backport the new syntax to some\n>> >> earlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\n>> >> synonymous with \"WITH ... AS\" in versions prior to 12; there's no\n>> >> need to support \"NOT MATERIALIZED\" since that's explicitly\n>> >> requesting the new query-folding feature that only exists in 12.\n>> >> Would something like the attached patch against REL_11_STABLE be\n>> >> acceptable? I'd like to backpatch it at least as far as PostgreSQL\n>> >> 10.\n>>\n>> Michael> I am afraid that new features don't gain a backpatch. This is\n>>Michael> a project policy. Back-branches should just include bug fixes.\n>>\n>>I do think an argument can be made for making an exception in this\n>>particular case. This wouldn't be backpatching a feature, just\n>>accepting\n>>and ignoring some of the new syntax to make upgrading easier.\n>\n>+1\n>\n\n+0.5\n\nIn general, I'm not opposed to accepting and ignoring the MATERIALIZED\nsyntax (assuming we'd only accept AS MATERIALIZED, but not the negative\nvariant).\n\nFWIW I'm not sure the \"we don't want to upgrade application code at the\nsame time as the database\" is really tenable. I don't think we really\npromise that anywhere, and adding the AS MATERIALIZED seems quite\nmechanical. I think we could find cases where we caused worse breaks\nbetween major versions.\n\nOne disadvantage is that this will increase confusion for users, who'll\nget used to the behavior on 12, and then they'll get confused on older\nreleases (e.g. 
if you don't specify AS MATERIALIZED you'd expect the CTE\nto get inlined, but that won't happen).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Oct 2019 12:48:17 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "On Sat, Oct 19, 2019 at 10:22:39AM +0100, Colin Watson wrote:\n>On Sat, Oct 19, 2019 at 05:01:04AM +0100, Andrew Gierth wrote:\n>> >>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n>> > On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n>> >> However, an alternative would be to backport the new syntax to some\n>> >> earlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\n>> >> synonymous with \"WITH ... AS\" in versions prior to 12; there's no\n>> >> need to support \"NOT MATERIALIZED\" since that's explicitly\n>> >> requesting the new query-folding feature that only exists in 12.\n>> >> Would something like the attached patch against REL_11_STABLE be\n>> >> acceptable? I'd like to backpatch it at least as far as PostgreSQL\n>> >> 10.\n>>\n>> Michael> I am afraid that new features don't gain a backpatch. This is\n>> Michael> a project policy. Back-branches should just include bug fixes.\n>>\n>> I do think an argument can be made for making an exception in this\n>> particular case. This wouldn't be backpatching a feature, just accepting\n>> and ignoring some of the new syntax to make upgrading easier.\n>\n>Right, this is my position too. 
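[Editor's note: a concrete sketch of the behaviour gap Tomas describes; identifiers are invented for the example.]

```sql
-- On 12, a side-effect-free, non-recursive CTE referenced exactly once
-- is inlined by default, so the outer WHERE clause can be pushed down
-- into it:
WITH w AS (
    SELECT id, payload FROM events
)
SELECT payload FROM w WHERE id = 42;

-- On 11 and earlier the same query always materializes w first.  A user
-- accustomed to 12 who expects inlining here will be surprised on the
-- older release -- the confusion risk if the syntax (but not the
-- folding) is backported.  Only 12 understands the explicit forms:
--   WITH w AS MATERIALIZED (...)      -- force the fence
--   WITH w AS NOT MATERIALIZED (...)  -- force inlining
```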
I'm explicitly not asking for\n>backpatching of the CTE-inlining feature, just trying to cope with the\n>fact that we now have to spell some particular queries differently to\n>retain the performance characteristics we need for them.\n>\n>I suppose an alternative would be to add a configuration option to 12\n>that allows disabling inlining of CTEs cluster-wide: we could then\n>upgrade to 12 with inlining disabled, add MATERIALIZED to the relevant\n>queries, and then re-enable inlining. But I like that less because it\n>would end up leaving cruft around in PostgreSQL's configuration code\n>somewhat indefinitely for the sake of an edge case in upgrading to a\n>particular version.\n\nI think having a GUC option was proposed and discussed while developping\nthe feature, and it was rejected - one of the reasons being experience\nwith similar GUCs in the past, which essentially just allowed users to\npostpone the fix indefinitely, and increased our maintenance burden.\n\nI wonder if an extension could do something like that, though. It can\ninstall a hook after parse analysis, so I guess it could walk the CTEs\nand mark them as materialized.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Oct 2019 12:52:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "\nOn 10/19/19 6:48 AM, Tomas Vondra wrote:\n> On Sat, Oct 19, 2019 at 11:56:56AM +0200, Andres Freund wrote:\n>> Hi,\n>>\n>> On October 19, 2019 6:01:04 AM GMT+02:00, Andrew Gierth\n>> <andrew@tao11.riddles.org.uk> wrote:\n>>>>>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n>>>\n>>> > On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n>>> >> However, an alternative would be to backport the new syntax to some\n>>> >> earlier versions. \"WITH ... 
AS MATERIALIZED\" can easily just be\n>>> >> synonymous with \"WITH ... AS\" in versions prior to 12; there's no\n>>> >> need to support \"NOT MATERIALIZED\" since that's explicitly\n>>> >> requesting the new query-folding feature that only exists in 12.\n>>> >> Would something like the attached patch against REL_11_STABLE be\n>>> >> acceptable? I'd like to backpatch it at least as far as PostgreSQL\n>>> >> 10.\n>>>\n>>> Michael> I am afraid that new features don't gain a backpatch. This is\n>>> Michael> a project policy. Back-branches should just include bug fixes.\n>>>\n>>> I do think an argument can be made for making an exception in this\n>>> particular case. This wouldn't be backpatching a feature, just\n>>> accepting\n>>> and ignoring some of the new syntax to make upgrading easier.\n>>\n>> +1\n>>\n>\n> +0.5\n>\n> In general, I'm not opposed to accepting and ignoring the MATERIALIZED\n> syntax (assuming we'd only accept AS MATERIALIZED, but not the negative\n> variant).\n>\n> FWIW I'm not sure the \"we don't want to upgrade application code at the\n> same time as the database\" is really tenable. \n\n\n\nI'm -1 for exactly this reason.\n\n\nIn any case, if you insist on using the same code with pre-12 and\npost-12 releases, this should be achievable (at least in most cases) by\nusing the \"offset 0\" trick, shouldn't it?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 19 Oct 2019 10:52:49 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" 
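[Editor's note: the "offset 0" trick Andrew mentions relies on the planner not flattening a subquery that carries a LIMIT or OFFSET clause, so it acts as a fence on all supported versions without any new syntax. A sketch with invented table names:]

```sql
-- Portable optimization fence: OFFSET 0 keeps the subquery from being
-- pulled up into the outer query, much as AS MATERIALIZED does for a
-- CTE on PostgreSQL 12.
SELECT o.order_id, f.note
FROM orders o
JOIN (
    SELECT order_id, note
    FROM annotations
    WHERE note LIKE '%urgent%'
    OFFSET 0   -- the fence: blocks subquery flattening and qual pushdown
) f ON f.order_id = o.order_id;
```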
}, { "msg_contents": "On Sat, 19 Oct 2019 at 10:53, Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nwrote:\n\n>\n> > In general, I'm not opposed to accepting and ignoring the MATERIALIZED\n> > syntax (assuming we'd only accept AS MATERIALIZED, but not the negative\n> > variant).\n> >\n> > FWIW I'm not sure the \"we don't want to upgrade application code at the\n> > same time as the database\" is really tenable.\n>\n> I'm -1 for exactly this reason.\n>\n> In any case, if you insist on using the same code with pre-12 and\n> post-12 releases, this should be achievable (at least in most cases) by\n> using the \"offset 0\" trick, shouldn't it?\n>\n\nThat embeds a temporary hack in the application code indefinitely.\n\nIf only we had Guido's (Python) time machine. We could go back and start\naccepting \"AS MATERIALIZED\" as noise words starting from version 7 or\nsomething.\n\n", "msg_date": "Sat, 19 Oct 2019 11:10:43 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" 
}, { "msg_contents": "\nOn 10/19/19 11:10 AM, Isaac Morland wrote:\n> On Sat, 19 Oct 2019 at 10:53, Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n>\n>\n> > In general, I'm not opposed to accepting and ignoring the\n> MATERIALIZED\n> > syntax (assuming we'd only accept AS MATERIALIZED, but not the\n> negative\n> > variant).\n> >\n> > FWIW I'm not sure the \"we don't want to upgrade application code\n> at the\n> > same time as the database\" is really tenable.\n>\n> I'm -1 for exactly this reason.\n>\n> In any case, if you insist on using the same code with pre-12 and\n> post-12 releases, this should be achievable (at least in most\n> cases) by\n> using the \"offset 0\" trick, shouldn't it?\n>\n>\n> That embeds a temporary hack in the application code indefinitely.\n>\n> If only we had Guido's (Python) time machine. We could go back and\n> start accepting \"AS MATERIALIZED\" as noise words starting from version\n> 7 or something.\n\n\n\nlet me know when that's materialized :-)\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 19 Oct 2019 11:30:49 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "Greetings,\n\n* Isaac Morland (isaac.morland@gmail.com) wrote:\n> That embeds a temporary hack in the application code indefinitely.\n\n... one could argue the same about having to say AS MATERIALIZED.\n\nThanks,\n\nStephen", "msg_date": "Sat, 19 Oct 2019 13:36:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" 
}, { "msg_contents": "On Sat, 19 Oct 2019 at 13:36, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Isaac Morland (isaac.morland@gmail.com) wrote:\n> > That embeds a temporary hack in the application code indefinitely.\n>\n> ... one could argue the same about having to say AS MATERIALIZED.\n>\n\nI think OFFSET 0 is a hack - the fact that it forces an optimization fence\nfeels like an oddity. By contrast, saying AS MATERIALIZED means materialize\nthe CTE. I suppose you could argue that the need to be able to request that\nis a temporary hack until query optimization improves further, but I don't\nthink that's realistic. For the foreseeable future we will need to be able\nto tell the query planner that it is wrong. I mean, in principle the DB\nshould figure out for itself which (non-constraint) indexes are needed. But\nI don't see any proposals to attempt to implement that.\n\nSide note: I am frequently disappointed by the query planner. I have had\nmany situations in which a nice simple strategy of looking up some tiny\nnumber of records in an index and then following more indices to get joined\nrecords would have worked, but instead it did a linear scan through the\nwrong starting table. So I'm very glad the AS MATERIALIZED now exists for\nwhen it's needed. On the other hand, I recognize that the reason I'm\ndisappointed is because my expectations are so high: often I've written a\nquery that joins several views together, meaning that under the covers it's\nreally joining maybe 20 tables, and it comes back with the answer\ninstantly. So in effect the query planner is just good enough to make me\nexpect it to be even better than it is.\n\nOn Sat, 19 Oct 2019 at 13:36, Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Isaac Morland (isaac.morland@gmail.com) wrote:\n> That embeds a temporary hack in the application code indefinitely.\n\n... one could argue the same about having to say AS MATERIALIZED. 
I think OFFSET 0 is a hack - the fact that it forces an optimization fence feels like an oddity. By contrast, saying AS MATERIALIZED means materialize the CTE. I suppose you could argue that the need to be able to request that is a temporary hack until query optimization improves further, but I don't think that's realistic. For the foreseeable future we will need to be able to tell the query planner that it is wrong. I mean, in principle the DB should figure out for itself which (non-constraint) indexes are needed. But I don't see any proposals to attempt to implement that.Side note: I am frequently disappointed by the query planner. I have had many situations in which a nice simple strategy of looking up some tiny number of records in an index and then following more indices to get joined records would have worked, but instead it did a linear scan through the wrong starting table. So I'm very glad the AS MATERIALIZED now exists for when it's needed. On the other hand, I recognize that the reason I'm disappointed is because my expectations are so high: often I've written a query that joins several views together, meaning that under the covers it's really joining maybe 20 tables, and it comes back with the answer instantly. So in effect the query planner is just good enough to make me expect it to be even better than it is.", "msg_date": "Sat, 19 Oct 2019 14:35:42 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> +0.5\n\n> In general, I'm not opposed to accepting and ignoring the MATERIALIZED\n> syntax (assuming we'd only accept AS MATERIALIZED, but not the negative\n> variant).\n\nFWIW, I'm +0.1 or thereabouts. I'd vote -1 if the patch required\nintroducing a new lexer keyword (even an unreserved one); but it\ndoesn't. 
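[Editor's note: one way to see the difference Isaac describes is to compare the two spellings with EXPLAIN on a version 12 server; the schema is invented for illustration.]

```sql
-- Default form: 12 may inline the CTE, letting an index on
-- events(user_id) drive the scan.
EXPLAIN
WITH recent AS (
    SELECT * FROM events
    WHERE created_at > now() - interval '1 day'
)
SELECT * FROM recent WHERE user_id = 42;

-- Fenced form: the CTE is computed in full first, and the user_id
-- filter is applied only to its result -- the pre-12 behaviour.
EXPLAIN
WITH recent AS MATERIALIZED (
    SELECT * FROM events
    WHERE created_at > now() - interval '1 day'
)
SELECT * FROM recent WHERE user_id = 42;
```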
So it's hard to argue that there's much downside.\n\n(If we do this, I wonder if we should make the back branches parse\nNOT MATERIALIZED as well, and then throw a \"not implemented\" error\nrather than the unhelpful syntax error you'd get today.)\n\n(Also, if we do this, I think we should patch all supported branches.\nThe OP's proposal to patch back to 10 has no foundation that I can see.)\n\n> FWIW I'm not sure the \"we don't want to upgrade application code at the\n> same time as the database\" is really tenable. I don't think we really\n> promise that anywhere, and adding the AS MATERIALIZED seems quite\n> mechanical. I think we could find cases where we caused worse breaks\n> between major versions.\n\nThat's certainly true, which is why I'm only lukewarm about the proposal.\n\n> One disadvantage is that this will increase confusion for users, who'll\n> get used to the behavior on 12, and then they'll get confused on older\n> releases (e.g. if you don't specify AS MATERIALIZED you'd expect the CTE\n> to get inlined, but that won't happen).\n\nI'm less concerned about that aspect than about the aspect of (for\ninstance) 11.6 and up allowing a syntax that 11.0-11.5 don't. People\nare likely to write code relying on this and then be surprised when\nit doesn't work on a slightly older server. Still, is that so much\ndifferent from cases where we fix a bug that prevented some statement\nfrom working?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Oct 2019 15:55:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "+1 for the configuration option. Otherwise, migration is a nightmare -- so\nmany CTEs were written specifically to use the \"optimization fence\"\nbehavior. 
The lack of such configuration options is now a \"migration fence\".\n\nOn Sat, Oct 19, 2019 at 2:49 AM Colin Watson <cjwatson@canonical.com> wrote:\n\n> On Sat, Oct 19, 2019 at 05:01:04AM +0100, Andrew Gierth wrote:\n> > >>>>> \"Michael\" == Michael Paquier <michael@paquier.xyz> writes:\n> > > On Fri, Oct 18, 2019 at 02:21:30PM +0100, Colin Watson wrote:\n> > >> However, an alternative would be to backport the new syntax to some\n> > >> earlier versions. \"WITH ... AS MATERIALIZED\" can easily just be\n> > >> synonymous with \"WITH ... AS\" in versions prior to 12; there's no\n> > >> need to support \"NOT MATERIALIZED\" since that's explicitly\n> > >> requesting the new query-folding feature that only exists in 12.\n> > >> Would something like the attached patch against REL_11_STABLE be\n> > >> acceptable? I'd like to backpatch it at least as far as PostgreSQL\n> > >> 10.\n> >\n> > Michael> I am afraid that new features don't gain a backpatch. This is\n> > Michael> a project policy. Back-branches should just include bug fixes.\n> >\n> > I do think an argument can be made for making an exception in this\n> > particular case. This wouldn't be backpatching a feature, just accepting\n> > and ignoring some of the new syntax to make upgrading easier.\n>\n> Right, this is my position too. I'm explicitly not asking for\n> backpatching of the CTE-inlining feature, just trying to cope with the\n> fact that we now have to spell some particular queries differently to\n> retain the performance characteristics we need for them.\n>\n> I suppose an alternative would be to add a configuration option to 12\n> that allows disabling inlining of CTEs cluster-wide: we could then\n> upgrade to 12 with inlining disabled, add MATERIALIZED to the relevant\n> queries, and then re-enable inlining. 
But I like that less because it\n> would end up leaving cruft around in PostgreSQL's configuration code\n> somewhat indefinitely for the sake of an edge case in upgrading to a\n> particular version.\n>\n> --\n> Colin Watson [cjwatson@canonical.com]\n>\n>\n>\n\n", "msg_date": "Sat, 19 Oct 2019 13:46:11 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "On Sat, Oct 19, 2019 at 8:11 AM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> That embeds a temporary hack in the application code indefinitely.\n>\n\nOr postpone the migration indefinitely. I saw so many times how migration\nin large companies was postponed because of similar \"small\" issues -- when\nthe code base is large, it's easier for managers to say something like \"no,\nwe will better live without cool new features for a couple of more years\nthan put our systems at risk due to lack of testing\".\n\nNobody invented an excellent way to test production workloads in\nnon-production environments yet. I know it very well because I'm also\nworking in this direction for a couple of years. 
So this feature (a great\none) seems to me as a major roadblock for DBAs and developers who would\nlike to migrate to 12 and have better performance and all the new features.\nIronically, including this one for the new or the updated code!\n\nIf you need to patch all your code adding \"AS MATERIALIZED\" (and you will\nneed it if you want to minimize risks of performance degradation after the\nupgrade), then it's also a temporary hack. But much, much more expensive in\nimplementation in large projects, and sometimes even not possible.\n\nI do think that the lack of this configuration option will prevent some\nprojects from migration for a long time.\n\nCorrect me if I'm wrong. Maybe somebody already thought about migration\noptions here and have good answers? What is the best way to upgrade if you\nhave dozens of multi-terabyte databases, a lot of legacy code and workloads\nwhere 1 minute of downtime or even performance degradation would cost a lot\nof money so it is not acceptable? What would be the good answers here?\n\n", "msg_date": "Sat, 19 Oct 2019 14:04:43 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "On Sat, Oct 19, 2019 at 10:53 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> > FWIW I'm not sure the \"we don't want to upgrade application code at the\n> > same time as the database\" is really tenable.\n>\n> I'm -1 for exactly this reason.\n\n-1 from me, too, also for this reason. I bet if we started looking\nwe'd find many changes every year that we could justify partially or\ncompletely back-porting on similar grounds, and if we start doing\nthat, we'll certainly screw it up sometimes, turning what should have\nbeen a smooth minor release upgrade process into one that breaks. And\nwe'll still not satisfy the people who don't want to upgrade the\napplication and the database at the same time, because there will\nalways be changes where nothing like this is remotely reasonable.\n\nAlso, we'll then have a lot more behavior differences between minor\nreleases, which sounds like a bad thing to me. 
In particular, nobody\nwill be happy if a pg_dump taken on version X.Y fails to restore on\nversion X.Z. But even apart from that, it just doesn't sound like a\ngood idea to have the user-facing behavior vary significantly across\nminor releases...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 21 Oct 2019 13:19:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "On Sat, Oct 19, 2019 at 02:35:42PM -0400, Isaac Morland wrote:\n> On Sat, 19 Oct 2019 at 13:36, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Isaac Morland (isaac.morland@gmail.com) wrote:\n> > That embeds a temporary hack in the application code indefinitely.\n> \n> ... one could argue the same about having to say AS MATERIALIZED.\n> \n>\n> I think OFFSET 0 is a hack - the fact that it forces an optimization fence\n> feels like an oddity. By contrast, saying AS MATERIALIZED means materialize the\n> CTE. I suppose you could argue that the need to be able to request that is a\n> temporary hack until query optimization improves further, but I don't think\n> that's realistic. For the foreseeable future we will need to be able to tell\n> the query planner that it is wrong. I mean, in principle the DB should figure\n> out for itself which (non-constraint) indexes are needed. But I don't see any\n> proposals to attempt to implement that.\n> \n> Side note: I am frequently disappointed by the query planner. I have had many\n> situations in which a nice simple strategy of looking up some tiny number of\n> records in an index and then following more indices to get joined records would\n> have worked, but instead it did a linear scan through the wrong starting table.\n> So I'm very glad the AS MATERIALIZED now exists for when it's needed. 
On the\n> other hand, I recognize that the reason I'm disappointed is because my\n> expectations are so high: often I've written a query that joins several views\n> together, meaning that under the covers it's really joining maybe 20 tables,\n> and it comes back with the answer instantly. So in effect the query planner is\n> just good enough to make me expect it to be even better than it is.\n\nWell, since geqo_threshold = 12 is the default, for a 20-table join, you\nare using genetic query optimization (GEQO) in PG 12 without\nMATERIALIZED:\n\n\thttps://www.postgresql.org/docs/12/geqo.html\n\nand GEQO assumes it would take too long to fully test all optimization\npossibilities, so it randomly checks just some of them. Therefore, it\nis no surprise you are disappointed in its output.\n\nIn a way, when you are using materialized CTEs, you are doing the\noptimization yourself, in your SQL code, and then the table join count\ndrops low enough that GEQO is not used and Postgres fully tests all\noptimization possibilities. This is behavior I had never considered ---\nthe idea that the user is partly replacing the optimizer, and that using\nmaterialized CTEs prevents the problems that require the use of GEQO.\n\nI guess my big take-away is that not only can MATERIALIZE change the\nquality of query plans but it can also improve the quality of query\nplans if it prevents GEQO from being used.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 4 Nov 2019 18:13:13 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" }, { "msg_contents": "On Sat, Oct 19, 2019 at 11:52 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I wonder if an extension could do something like that, though. 
It can\n> install a hook after parse analysis, so I guess it could walk the CTEs\n> and mark them as materialized.\n\nI wonder if the existing pg_hint_plan extension could be extended to\ndo that using something like /*+ MATERIALIZE */. That'd presumably be\nignored when not installed/not understood etc. I've never used\npg_hint_plan myself and don't know how or how well it works, but it\nlook like it supports Oracle-style hints hidden in comments like /*+\nHashJoin(a b) */ rather than SQL Server-style hints that are in the\nSQL grammar itself like SELECT ... FROM a HASH JOIN b.\n\n\n", "msg_date": "Tue, 5 Nov 2019 13:20:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Backport \"WITH ... AS MATERIALIZED\" syntax to <12?" } ]
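The two per-query fences discussed in this thread can be sketched side by side. This is an illustrative sketch, not taken from any message above; some_table, some_key and some_val are placeholder names:

```sql
-- PostgreSQL 12+: ask for the old always-materialize behaviour explicitly.
WITH w AS MATERIALIZED (
    SELECT some_key, sum(some_val) AS total
    FROM some_table
    GROUP BY some_key
)
SELECT * FROM w WHERE some_key = 42;

-- PostgreSQL 11 and earlier: every CTE is already an optimization fence,
-- and the conventional (undocumented) fence for a plain subquery is OFFSET 0:
SELECT *
FROM (SELECT some_key, sum(some_val) AS total
      FROM some_table
      GROUP BY some_key
      OFFSET 0) w
WHERE some_key = 42;
```

As the thread concludes, AS [NOT] MATERIALIZED itself is not available before 12; pre-12 servers reject the syntax outright.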
[ { "msg_contents": "Hi Hackers,\n\nWhile optimising our application I stumbled over the join quals used very often in our application.\nIn general this concerns datasets which are subdivided into chunks, like years, seasons (here half a year), multiple tenants in OLTP systems, etc.\nIn these cases many tables are joined only to data of the same chunk, always identified by a separating column in the table (here: xx_season).\n\n(I tested it on PG 12.0 on Windows 64 bit, but got similar results on older stable releases and other OS)\n\nHere is the test case, also in the attached file:\n(I chose to join 2 tables with 2 seasons (2 and 3) of about 1 million records for every season. I put some randomness in the table creation to simulate the normal situation in OLTP systems)\n\n----------------------------------- Source start\n\ndrop table if exists tmaster;\n\ncreate table tmaster (\nid_t1 integer,\nt1_season integer,\nt1_id_t2 integer,\nt1_value integer,\nt1_cdescr varchar,\nprimary key (id_t1)\n);\n\n--\n\n\nselect setseed (0.34512);\n\ninsert into tmaster\nselect\n inum\n,iseason\n,row_number () over () as irow\n,irandom\n,'TXT: '||irandom::varchar\nfrom (\nselect \n inum::integer\n,((inum>>20)+2)::integer as iseason\n,inum::integer + (500000*random())::integer as irandom\nfrom generate_series (1,(1<<21)) as inum\norder by irandom\n)qg\n;\n\nalter table tmaster add constraint uk_master_season_id unique (t1_season,id_t1);\n\n\n\ndrop table if exists tfact;\n\ncreate table tfact (\nid_t2 integer,\nt2_season integer,\nt2_value integer,\nt2_cdescr varchar,\nprimary key (id_t2)\n);\n\n--\n\n\nselect setseed (-0.76543);\n\ninsert into tfact\nselect\n qg.*\n,'FKT: '||irandom::varchar\nfrom (\nselect \n inum::integer\n,((inum>>20)+2)::integer as iseason\n,inum::integer + (500000*random())::integer as irandom\nfrom generate_series (1,(1<<21)) as inum\norder by irandom\n)qg\n;\n\nalter table tfact add constraint uk_fact_season_id unique 
(t2_season,id_t2);\n\n-----------------\n\n-- slower:\n\nexplain (analyze, verbose, costs, settings, buffers)\nselect *\nfrom tmaster\nleft join tfact on id_t2=t1_id_t2 and t2_season=t1_season\nwhere t1_season=3\n;\n\n-- faster by setting a constant in left join on condition:\n\nexplain (analyze, verbose, costs, settings, buffers)\nselect *\nfrom tmaster\nleft join tfact on id_t2=t1_id_t2 and t2_season=3 --t1_season\nwhere t1_season=3\n;\n\n----------------------------------- Source end\n\nThe results for the first query:\n\nexplain (analyze, verbose, costs, settings, buffers)\nselect *\nfrom tmaster\nleft join tfact on id_t2=t1_id_t2 and t2_season=t1_season\nwhere t1_season=3\n;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=53436.01..111610.15 rows=1046129 width=52) (actual time=822.784..2476.573 rows=1048576 loops=1)\n Output: tmaster.id_t1, tmaster.t1_season, tmaster.t1_id_t2, tmaster.t1_value, tmaster.t1_cdescr, tfact.id_t2, tfact.t2_season, tfact.t2_value, tfact.t2_cdescr\n Inner Unique: true\n Hash Cond: ((tmaster.t1_season = tfact.t2_season) AND (tmaster.t1_id_t2 = tfact.id_t2))\n Buffers: shared hit=2102193, temp read=10442 written=10442\n -> Index Scan using uk_master_season_id on public.tmaster (cost=0.43..32263.38 rows=1046129 width=28) (actual time=0.008..565.222 rows=1048576 loops=1)\n Output: tmaster.id_t1, tmaster.t1_season, tmaster.t1_id_t2, tmaster.t1_value, tmaster.t1_cdescr\n Index Cond: (tmaster.t1_season = 3)\n Buffers: shared hit=1051086\n -> Hash (cost=31668.49..31668.49 rows=1043473 width=24) (actual time=820.960..820.961 rows=1048576 loops=1)\n Output: tfact.id_t2, tfact.t2_season, tfact.t2_value, tfact.t2_cdescr\n Buckets: 524288 (originally 524288) Batches: 4 (originally 2) Memory Usage: 28673kB\n Buffers: shared hit=1051107, temp written=4316\n -> Index Scan using uk_fact_season_id on 
public.tfact (cost=0.43..31668.49 rows=1043473 width=24) (actual time=0.024..598.648 rows=1048576 loops=1)\n Output: tfact.id_t2, tfact.t2_season, tfact.t2_value, tfact.t2_cdescr\n Index Cond: (tfact.t2_season = 3)\n Buffers: shared hit=1051107\n Settings: effective_cache_size = '8GB', random_page_cost = '1', temp_buffers = '32MB', work_mem = '32MB'\n Planning Time: 0.627 ms\n Execution Time: 2502.702 ms\n(20 rows)\n\nand for the second one:\n\nexplain (analyze, verbose, costs, settings, buffers)\nselect *\nfrom tmaster\nleft join tfact on id_t2=t1_id_t2 and t2_season=3 --t1_season\nwhere t1_season=3\n;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=50827.33..106255.38 rows=1046129 width=52) (actual time=758.086..2313.175 rows=1048576 loops=1)\n Output: tmaster.id_t1, tmaster.t1_season, tmaster.t1_id_t2, tmaster.t1_value, tmaster.t1_cdescr, tfact.id_t2, tfact.t2_season, tfact.t2_value, tfact.t2_cdescr\n Inner Unique: true\n Hash Cond: (tmaster.t1_id_t2 = tfact.id_t2)\n Buffers: shared hit=2102193, temp read=9024 written=9024\n -> Index Scan using uk_master_season_id on public.tmaster (cost=0.43..32263.38 rows=1046129 width=28) (actual time=0.009..549.793 rows=1048576 loops=1)\n Output: tmaster.id_t1, tmaster.t1_season, tmaster.t1_id_t2, tmaster.t1_value, tmaster.t1_cdescr\n Index Cond: (tmaster.t1_season = 3)\n Buffers: shared hit=1051086\n -> Hash (cost=31668.49..31668.49 rows=1043473 width=24) (actual time=756.125..756.125 rows=1048576 loops=1)\n Output: tfact.id_t2, tfact.t2_season, tfact.t2_value, tfact.t2_cdescr\n Buckets: 524288 Batches: 4 Memory Usage: 18711kB\n Buffers: shared hit=1051107, temp written=4317\n -> Index Scan using uk_fact_season_id on public.tfact (cost=0.43..31668.49 rows=1043473 width=24) (actual time=0.025..584.652 rows=1048576 loops=1)\n Output: tfact.id_t2, tfact.t2_season, 
tfact.t2_value, tfact.t2_cdescr\n                 Index Cond: (tfact.t2_season = 3)\n                 Buffers: shared hit=1051107\n Settings: effective_cache_size = '8GB', random_page_cost = '1', temp_buffers = '32MB', work_mem = '32MB'\n Planning Time: 0.290 ms\n Execution Time: 2339.651 ms\n(20 rows)\n\nBy replacing the =t1_season with =3 the query took about 160 ms less, or about 7 percent.\n\nBoth queries are logically equivalent. The planner correctly identifies the Index Cond: (tfact.t2_season = 3) for selecting from the index uk_fact_season_id.\nBut in the slower query the outer hash condition still hashes with the columns t1_season and t2_season as in\nHash Cond: ((tmaster.t1_season = tfact.t2_season) AND (tmaster.t1_id_t2 = tfact.id_t2)).\nThis can only be detected with explain analyze verbose, when the hash conditions are shown.\n\nThe first query notation, with and t2_season=t1_season, is much more natural, as it requires only one numerical constant to get good query speed (often many fact tables are joined).\n\nThe inclusion of the xx_season quals reduces the processed dataset and helps also when the season columns are used for list partitioning of all the involved tables.\nWhen omitting it, the whole fact table will be joined.\n\nTo me it seems that the \"constantness\" is not propagated to all equivalence columns and not considered in hash joining.\n\nUnfortunately I am not in the position to write a patch, so I would appreciate any help to get this optimization realized.\n\nMuch thanks in advance\n\nHans Buschmann", "msg_date": "Fri, 18 Oct 2019 15:40:34 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": true, "msg_subject": "Missing constant propagation in planner on hash quals causes join\n slowdown" }, { "msg_contents": "On Fri, Oct 18, 2019 at 03:40:34PM +0000, Hans Buschmann wrote:\n>\n> ...\n>\n>Both queries are logically equivalent. 
The planner correctly identifies\n>the Index Cond: (tfact.t2_season = 3) for selecting from the index\n>uk_fact_season_id.\n>\n\nAre those queries actually equivalent? I've been repeatedly bitten by\nnullability in left join queries, when playing with optimizations like\nthis, so maybe this is one of such cases?\n\nThis seems to be happening because distribute_qual_to_rels() does this:\n\n ...\n else if (bms_overlap(relids, outerjoin_nonnullable))\n {\n /*\n * The qual is attached to an outer join and mentions (some of the)\n * rels on the nonnullable side, so it's not degenerate.\n *\n * We can't use such a clause to deduce equivalence (the left and\n * right sides might be unequal above the join because one of them has\n * gone to NULL) ... but we might be able to use it for more limited\n * deductions, if it is mergejoinable. So consider adding it to the\n * lists of set-aside outer-join clauses.\n */\n is_pushed_down = false;\n ...\n }\n ...\n\nand the clause does indeed reference the nullable side of the join,\npreventing us from marking the clause as pushed-down.\n\nI haven't managed to construct a query that would break this, though.\nI.e. a case where the two queries would give different results. So maybe\nthose queries actually are redundant. 
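One quick way to hunt for a counterexample on the test data from the original post is to diff the two result sets. This is only a sketch; an empty result here shows equivalence on this particular data set, not in general:

```sql
-- Any row coming out of this EXCEPT ALL would be a counterexample
-- to the claimed equivalence of the two join forms:
SELECT * FROM tmaster
LEFT JOIN tfact ON id_t2 = t1_id_t2 AND t2_season = t1_season
WHERE t1_season = 3
EXCEPT ALL
SELECT * FROM tmaster
LEFT JOIN tfact ON id_t2 = t1_id_t2 AND t2_season = 3
WHERE t1_season = 3;
```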
Or maybe the example would need to\nbe more complicated (requiring more joins, or something like that).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 Nov 2019 14:43:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing constant propagation in planner on hash quals causes\n join slowdown" }, { "msg_contents": "\nThanks for looking at it.\n\nI think these two queries are equivalent, as shown by the explain.\n\nIn both cases the index scan only selects tuples with xx_season=3 as shown in both explains:\n\n Index Cond: (tmaster.t1_season = 3)\n Index Cond: (tfact.t2_season = 3)\nSo no tuple can have a null value for xx_season.\n\nMy point is the construction of the hash table, which includes the t2_season even if it is constant and not null. From explain:\n\nwith overhead:\n Hash Cond: ((tmaster.t1_season = tfact.t2_season) AND (tmaster.t1_id_t2 = tfact.id_t2))\n\noptimized:\n Hash Cond: (tmaster.t1_id_t2 = tfact.id_t2)\n\nThe planner correctly sets the index conditions (knows that the xx_season columns are constant), but fails to apply this constantness to the hash conditions by discarding a constant column in a hash table.\n\nIn my real application most of the xx_season columns are declared not null, but this should not change the outcome.\n\nThe performance difference is slightly lower when the created tables are previously analyzed (which I forgot to do).\n\nBut the percentage gain is much higher considering only the construction of the hash table, the only part of the query execution altered by this optimization.\n\nIn my opinion this scenario could be quite common in multi-tenant cases, in logging, time based data sets etc.\n\nI tried to look at the pg source code but could not yet find the place where the hash conditions are selected and potentially tested.\n\nWhen optimizing the constants 
away there may be a special case where all hash conditions are constants, so a hash table does not have to be built at all (or at least one hash condition has to be preserved). \n\n\nHans Buschmann\n\n\n\n", "msg_date": "Sat, 9 Nov 2019 15:40:03 +0000", "msg_from": "Hans Buschmann <buschmann@nidsa.net>", "msg_from_op": true, "msg_subject": "AW: Missing constant propagation in planner on hash quals causes join\n slowdown" }, { "msg_contents": "Hans Buschmann <buschmann@nidsa.net> writes:\n> The planner correctly sets the index conditions (knows that the xx_season columns are constant), but fails to apply this constantness to the hash conditions by discarding a constant column in a hash table.\n\nYeah. The reason for this behavior is that, after\nreconsider_outer_join_clauses has decided that it can derive\ntfact.t2_season = 3 from the given conditions, it still \"throws back\"\nthe original join condition tmaster.t1_season = tfact.t2_season into\nthe set of ordinary join clauses, so that that condition will still get\napplied at the time the join is computed. The comment about it claims\n\n * If we don't find any match for a set-aside outer join clause, we must\n * throw it back into the regular joinclause processing by passing it to\n * distribute_restrictinfo_to_rels(). If we do generate a derived clause,\n * however, the outer-join clause is redundant. We still throw it back,\n * because otherwise the join will be seen as a clauseless join and avoided\n * during join order searching; but we mark it as redundant to keep from\n * messing up the joinrel's size estimate.\n\nHowever, this seems to be a lie, or at least not the whole truth. 
If you\ntry diking out the throw-back logic, which is simple enough to do, you'll\nimmediately find that some test cases in join.sql give the wrong answers.\nThe reason is that once we've derived tfact.t2_season = 3 and asserted\nthat that's an equivalence condition, the eclass logic believes that\ntmaster.t1_season = tfact.t2_season must hold everywhere (that's more or\nless the definition of an eclass). But *it doesn't hold* above the left\njoin, because tfact.t2_season could be null instead. In particular this\ncan break any higher joins involving tfact.t2_season. By treating\ntmaster.t1_season = tfact.t2_season as an ordinary join clause we force\nthe necessary tests to be made anyway, independently of the eclass logic.\n(There's no bug in Hans' example because there's only one join; the\nproblem is not really with this particular clause, but with other columns\nthat might also be thought equal to tfact.t2_season. It's those upper\njoin clauses that can't safely be thrown away.)\n\nI've had a bee in my bonnet for a long time about redesigning all this\nto be less klugy. Fundamentally what we lack is an honest representation\nthat a given value might be NULL instead of the original value from its\ntable; this results in assorted compromises both here and elsewhere in\nthe planner. The rough sketch that's been lurking in my hindbrain is\n\n1) Early in the planner where we flatten join alias variables, do so\nonly when they actually are formally equivalent to the input variables,\nie only for the non-nullable side of any outer join. 
This gives us\nthe desired representation distinction between original and\npossibly-nulled values: the former are base-table Vars, the latter\nare join Vars.\n\n2) tmaster.t1_season = tfact.t2_season could be treated as an honest\nequivalence condition *between those variables*, but it would not\nimply anything about the join output variable \"j.t2_season\".\nI think reconsider_outer_join_clauses goes away entirely.\n\n3) Places where we're trying to estimate variable values would have\nto be taught to look through unflattened join alias variables to see\nwhat they're based on. For extra credit they could assign some\nhigher-than-normal probability that the value is null (though that\ncould be done later, since there's certainly nothing accounting\nfor that today).\n\n4) The final flattening of these alias variables would probably not\nhappen till setrefs.c.\n\n5) I have not thought through what this implies for full joins.\nThe weird COALESCE business might go away entirely, or it might\nbe something that setrefs.c still has to insert.\n\n6) There are lots of other, related kluges that could stand to be revisited\nat the same time. The business with \"outer-join delayed\" clauses is\na big example. Maybe that all just magically goes away once we have\na unique understanding of what value a Var represents, but I've not\nthought hard about it. We'd certainly need some new understanding of\nhow to schedule outer-join clauses, since their Var membership would\nno longer correspond directly to sets of baserels. There might be\nconnections to the \"PlaceHolderVar\" mess as well.\n\nThis is a pretty major change and will doubtless break stuff throughout\nthe planner. I'm also not clear on the planner performance implications;\nboth point (3) and the need for two rounds of alias flattening seem like\nthey'd slow things down. 
But maybe we could buy some speed back by\neliminating kluges, and there is a hope that we'd end up with better\nplans in a useful number of cases.\n\nAnyway, the large amount of work involved and the rather small benefits\nwe'd probably get have discouraged me from working on this. But maybe\nsomeday it'll get to the top of the queue. Basically this is all\ntechnical debt left over from the way we bolted outer joins onto the\noriginal planner design, so we really oughta fix it sometime.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Dec 2019 18:34:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Missing constant propagation in planner on hash quals causes\n join slowdown" } ]
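The nullability hazard described above can be made concrete with the tables from the original test case. This is a constructed sketch, not an example from the thread: below the left join the planner may derive t2_season = 3, but above the join t2_season can be NULL instead, so the derived equality does not hold everywhere:

```sql
-- NULL-extended rows: tmaster rows with no matching tfact row.
-- Here t1_season = 3 but t2_season IS NULL, which is why the original
-- join clause cannot be treated as a global equivalence.
SELECT t1_season, t2_season
FROM tmaster
LEFT JOIN tfact ON id_t2 = t1_id_t2 AND t2_season = t1_season
WHERE t1_season = 3
  AND id_t2 IS NULL;
```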
[ { "msg_contents": "Hello,\n\nI am one of the primary maintainers of Pleroma, a federated social\nnetworking application written in Elixir, which uses PostgreSQL in\nways that may be considered outside the typical usage scenarios for\nPostgreSQL.\n\nNamely, we leverage JSONB heavily as a backing store for JSON-LD\ndocuments[1]. We also use JSONB in combination with Ecto's \"embedded\nstructs\" to store things like user preferences.\n\nThe fact that we can use JSONB to achieve our design goals is a\ntestament to the flexibility PostgreSQL has.\n\nHowever, in the process of doing so, we have discovered a serious flaw\nin the way jsonb_set() functions, but upon reading through this\nmailing list, we have discovered that this flaw appears to be an\nintentional design.[2]\n\nA few times now, we have written migrations that do things like copy\nkeys in a JSONB object to a new key, to rename them. These migrations\nlook like so:\n\n update users set info=jsonb_set(info, '{bar}', info->'foo');\n\nTypically, this works nicely, except for cases where evaluating\ninfo->'foo' results in an SQL null being returned. When that happens,\njsonb_set() returns an SQL null, which then results in data loss.[3]\n\nThis is not acceptable. PostgreSQL is a database that is renowned for\ndata integrity, but here it is wiping out data when it encounters a\nfailure case. 
The way jsonb_set() should fail in this case is to\nsimply return the original input: it should NEVER return SQL null.\n\nBut hey, we've been burned by this so many times now that we'd like to\ndonate a useful function to the commons, consider it a mollyguard for\nthe real jsonb_set() function.\n\n create or replace function safe_jsonb_set(target jsonb, path\ntext[], new_value jsonb, create_missing boolean default true) returns\njsonb as $$\n declare\n result jsonb;\n begin\n result := jsonb_set(target, path, coalesce(new_value,\n'null'::jsonb), create_missing);\n if result is NULL then\n return target;\n else\n return result;\n end if;\n end;\n $$ language plpgsql;\n\nThis safe_jsonb_set() wrapper should not be necessary. PostgreSQL's\nown jsonb_set() should have this safety feature built in. Without it,\nusing jsonb_set() is like playing russian roulette with your data,\nwhich is not a reasonable expectation for a database renowned for its\ncommitment to data integrity.\n\nPlease fix this bug so that we do not have to hack around this bug.\nIt has probably ruined countless people's days so far. I don't want\nto hear about how the function is strict, I'm aware it is strict, and\nthat strictness is harmful. Please fix the function so that it is\nactually safe to use.\n\n[1]: JSON-LD stands for JSON Linked Data. 
Pleroma has an \"internal\nrepresentation\" that shares similar qualities to JSON-LD, so I use\nJSON-LD here as a simplification.\n\n[2]: https://www.postgresql.org/message-id/flat/qfkua9$2q0e$1@blaine.gmane.org\n\n[3]: https://git.pleroma.social/pleroma/pleroma/issues/1324 is an\nexample of data loss induced by this issue.\n\nAriadne\n\n\n", "msg_date": "Fri, 18 Oct 2019 12:37:24 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\n\nOn Fri, Oct 18, 2019, at 12:37, Ariadne Conill wrote:\n> Hello,\n> \n> I am one of the primary maintainers of Pleroma, a federated social\n> networking application written in Elixir, which uses PostgreSQL in\n> ways that may be considered outside the typical usage scenarios for\n> PostgreSQL.\n> \n> Namely, we leverage JSONB heavily as a backing store for JSON-LD\n> documents[1]. We also use JSONB in combination with Ecto's \"embedded\n> structs\" to store things like user preferences.\n> \n> The fact that we can use JSONB to achieve our design goals is a\n> testament to the flexibility PostgreSQL has.\n> \n> However, in the process of doing so, we have discovered a serious flaw\n> in the way jsonb_set() functions, but upon reading through this\n> mailing list, we have discovered that this flaw appears to be an\n> intentional design.[2]\n> \n> A few times now, we have written migrations that do things like copy\n> keys in a JSONB object to a new key, to rename them. These migrations\n> look like so:\n> \n> update users set info=jsonb_set(info, '{bar}', info->'foo');\n> \n> Typically, this works nicely, except for cases where evaluating\n> info->'foo' results in an SQL null being returned. When that happens,\n> jsonb_set() returns an SQL null, which then results in data loss.[3]\n> \n> This is not acceptable. 
PostgreSQL is a database that is renowned for\n> data integrity, but here it is wiping out data when it encounters a\n> failure case. The way jsonb_set() should fail in this case is to\n> simply return the original input: it should NEVER return SQL null.\n> \n> But hey, we've been burned by this so many times now that we'd like to\n> donate a useful function to the commons, consider it a mollyguard for\n> the real jsonb_set() function.\n> \n> create or replace function safe_jsonb_set(target jsonb, path\n> text[], new_value jsonb, create_missing boolean default true) returns\n> jsonb as $$\n> declare\n> result jsonb;\n> begin\n> result := jsonb_set(target, path, coalesce(new_value,\n> 'null'::jsonb), create_missing);\n> if result is NULL then\n> return target;\n> else\n> return result;\n> end if;\n> end;\n> $$ language plpgsql;\n> \n> This safe_jsonb_set() wrapper should not be necessary. PostgreSQL's\n> own jsonb_set() should have this safety feature built in. Without it,\n> using jsonb_set() is like playing russian roulette with your data,\n> which is not a reasonable expectation for a database renowned for its\n> commitment to data integrity.\n> \n> Please fix this bug so that we do not have to hack around this bug.\n> It has probably ruined countless people's days so far. I don't want\n> to hear about how the function is strict, I'm aware it is strict, and\n> that strictness is harmful. Please fix the function so that it is\n> actually safe to use.\n> \n> [1]: JSON-LD stands for JSON Linked Data. 
Pleroma has an \"internal\n> representation\" that shares similar qualities to JSON-LD, so I use\n> JSON-LD here as a simplification.\n> \n> [2]: https://www.postgresql.org/message-id/flat/qfkua9$2q0e$1@blaine.gmane.org\n> \n> [3]: https://git.pleroma.social/pleroma/pleroma/issues/1324 is an\n> example of data loss induced by this issue.\n> \n> Ariadne\n>\n\nThis should be directed towards the hackers list, too.\n\nWhat will it take to change the semantics of jsonb_set()? MySQL implements safe behavior here. It's a real shame Postgres does not. I'll offer a $200 bounty to whoever fixes it. I'm sure it's destroyed more than $200 worth of data and people's time by now, but it's something.\n\n\nKind regards,\n\n\n\n-- \n Mark Felder\n ports-secteam & portmgr alumni\n feld@FreeBSD.org\n\n\n", "msg_date": "Fri, 18 Oct 2019 14:10:51 -0500", "msg_from": "\"Mark Felder\" <feld@FreeBSD.org>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "## Ariadne Conill (ariadne@dereferenced.org):\n\n> update users set info=jsonb_set(info, '{bar}', info->'foo');\n> \n> Typically, this works nicely, except for cases where evaluating\n> info->'foo' results in an SQL null being returned. 
When that happens,\n> jsonb_set() returns an SQL null, which then results in data loss.[3]\n\nSo why don't you use the facilities of SQL to make sure to only\ntouch the rows which match the prerequisites?\n\n  UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n    WHERE info->'foo' IS NOT NULL;\n\nNo special wrappers required.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Fri, 18 Oct 2019 23:50:18 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder <cmt@burggraben.net>\nwrote:\n\n> ## Ariadne Conill (ariadne@dereferenced.org):\n>\n> >    update users set info=jsonb_set(info, '{bar}', info->'foo');\n> >\n> > Typically, this works nicely, except for cases where evaluating\n> > info->'foo' results in an SQL null being returned.  When that happens,\n> > jsonb_set() returns an SQL null, which then results in data loss.[3]\n>\n> So why don't you use the facilities of SQL to make sure to only\n> touch the rows which match the prerequisites?\n>\n> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n> WHERE info->'foo' IS NOT NULL;\n>\n>\nThere are many ways to add code to queries to make working with this\nfunction safer - though using them presupposes one remembers at the time of\nwriting the query that there is danger and caveats in using this function.\nI agree that we should have (and now) provided sane defined behavior when\none of the inputs to the function is null instead blowing off the issue and\ndefining the function as being strict.  Whether that is \"ignore and return\nthe original object\" or \"add the key with a json null scalar value\" is\ndebatable but either is considerably more useful than returning SQL NULL.\n\nDavid J.\n
", "msg_date": "Fri, 18 Oct 2019 15:00:50 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Fri, Oct 18, 2019 at 4:50 PM Christoph Moench-Tegeder\n<cmt@burggraben.net> wrote:\n>\n> ## Ariadne Conill (ariadne@dereferenced.org):\n>\n> > update users set info=jsonb_set(info, '{bar}', info->'foo');\n> >\n> > Typically, this works nicely, except for cases where evaluating\n> > info->'foo' results in an SQL null being returned. When that happens,\n> > jsonb_set() returns an SQL null, which then results in data loss.[3]\n>\n> So why don't you use the facilities of SQL to make sure to only\n> touch the rows which match the prerequisites?\n>\n> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n> WHERE info->'foo' IS NOT NULL;\n\nWhy don't we fix the database engine to not eat data when the\njsonb_set() operation fails? Telling people to work around design\nflaws in the software is what I would expect of MySQL, not a database\nknown for its data integrity.\n\nObviously, it is possible to adjust the UPDATE statement to only match\ncertain pre-conditions, *if you know those pre-conditions may be a\nproblem*. What happens with us, and with other people who have hit\nthis bug with jsonb_set() is that they hit issues that were not\npreviously known about, and that's when jsonb_set() eats your data.\n\nI would also like to point out that the MySQL equivalent, json_set()\nwhen presented with a similar failure simply returns the unmodified\ninput. It is not unreasonable to do the same in PostgreSQL.\nPersonally, as a developer, I expect PostgreSQL to be on their game\nbetter than MySQL.\n\n> No special wrappers required.\n\nA special wrapper is needed because jsonb_set() does broken things\nwhen invoked in situations that do not match the preconceptions of\nthose situations. 
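For anyone who wants to reproduce this, the failure mode is a one-liner against a stock PostgreSQL (the document literal here is just an illustrative example):

```sql
-- jsonb_set() is declared STRICT, so an SQL NULL new_value nulls out
-- the ENTIRE result, not just the targeted key:
SELECT jsonb_set('{"foo": 1}'::jsonb, '{foo}', NULL);
-- returns SQL NULL, so UPDATE ... SET info = jsonb_set(info, ...) wipes the column
```
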
We will have to ship this wrapper for several years\nbecause of the current behaviour of the jsonb_set() function.\n\nAriadne\n\n\n", "msg_date": "Fri, 18 Oct 2019 17:05:02 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Fri, Oct 18, 2019 at 5:01 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder <cmt@burggraben.net> wrote:\n>>\n>> ## Ariadne Conill (ariadne@dereferenced.org):\n>>\n>> > update users set info=jsonb_set(info, '{bar}', info->'foo');\n>> >\n>> > Typically, this works nicely, except for cases where evaluating\n>> > info->'foo' results in an SQL null being returned. When that happens,\n>> > jsonb_set() returns an SQL null, which then results in data loss.[3]\n>>\n>> So why don't you use the facilities of SQL to make sure to only\n>> touch the rows which match the prerequisites?\n>>\n>> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n>> WHERE info->'foo' IS NOT NULL;\n>>\n>\n> There are many ways to add code to queries to make working with this function safer - though using them presupposes one remembers at the time of writing the query that there is danger and caveats in using this function. I agree that we should have (and now) provided sane defined behavior when one of the inputs to the function is null instead blowing off the issue and defining the function as being strict. Whether that is \"ignore and return the original object\" or \"add the key with a json null scalar value\" is debatable but either is considerably more useful than returning SQL NULL.\n\nA great example of how we got burned by this last year: Pleroma\nmaintains pre-computed counters in JSONB for various types of\nactivities (posts, followers, followings). Last year, another counter\nwas added, with a migration. 
But some people did not run the\nmigration, because they are users, and that's what users do. This\nresulted in Pleroma blanking out the `info` structure for users as\nthey performed new activities that incremented that counter. At that\ntime, Pleroma maintained various things like private keys used to sign\nthings in that JSONB column (we no longer do this because of being\nburned by this several times now), which broke federation temporarily\nfor the affected accounts with other servers for up to a week as those\nservers had to learn new public keys for those accounts (since the\noriginal private keys were lost).\n\nI believe that anything that can be catastrophically broken by users\nnot following upgrade instructions precisely is a serious problem, and\ncan lead to serious problems. I am sure that this is not the only\nproject using JSONB which has had users destroy their own data in\nsuch a completely preventable fashion.\n\nAriadne\n\n\n", "msg_date": "Fri, 18 Oct 2019 17:11:51 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "## Ariadne Conill (ariadne@dereferenced.org):\n\n> Why don't we fix the database engine to not eat data when the\n> jsonb_set() operation fails?\n\nIt didn't fail, it worked like SQL (you've been doing SQL for too\nlong when you get used to the NULL propagation, but that's still\nwhat SQL does - check \"+\" for example).\nAnd changing a function will cause fun for everyone who relies on\nthe current behaviour - so at least it shouldn't be done on a whim\n(some might argue that a whim was what got us into this situation\nin the first place).\n\nContinuing along that thought, I'd even argue that you are\nwriting code which relies on properties of the data which you never\nguaranteed. 
There is a use case for data types and constraints.\nNot that I'm arguing for maximum surprise in programming, but\nI'm a little puzzled when people eschew the built-in tools and\nstart implementing them again side-by-side with what's already\nthere.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Sat, 19 Oct 2019 00:57:52 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/18/19 3:11 PM, Ariadne Conill wrote:\n> Hello,\n> \n> On Fri, Oct 18, 2019 at 5:01 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>>\n>> On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder <cmt@burggraben.net> wrote:\n>>>\n>>> ## Ariadne Conill (ariadne@dereferenced.org):\n>>>\n>>>> update users set info=jsonb_set(info, '{bar}', info->'foo');\n>>>>\n>>>> Typically, this works nicely, except for cases where evaluating\n>>>> info->'foo' results in an SQL null being returned. When that happens,\n>>>> jsonb_set() returns an SQL null, which then results in data loss.[3]\n>>>\n>>> So why don't you use the facilities of SQL to make sure to only\n>>> touch the rows which match the prerequisites?\n>>>\n>>> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n>>>     WHERE info->'foo' IS NOT NULL;\n>>>\n>>\n>> There are many ways to add code to queries to make working with this function safer - though using them presupposes one remembers at the time of writing the query that there is danger and caveats in using this function. I agree that we should have (and now) provided sane defined behavior when one of the inputs to the function is null instead blowing off the issue and defining the function as being strict. 
Whether that is \"ignore and return the original object\" or \"add the key with a json null scalar value\" is debatable but either is considerably more useful than returning SQL NULL.\n> \n> A great example of how we got burned by this last year: Pleroma\n> maintains pre-computed counters in JSONB for various types of\n> activities (posts, followers, followings). Last year, another counter\n> was added, with a migration. But some people did not run the\n> migration, because they are users, and that's what users do. This\n\nSo you are more forgiving of your misstep, allowing users to run \noutdated code, then of running afoul of Postgres documented behavior:\n\nhttps://www.postgresql.org/docs/11/functions-json.html\n\" The field/element/path extraction operators return NULL, rather than \nfailing, if the JSON input does not have the right structure to match \nthe request; for example if no such element exists\"\n\nJust trying to figure why one is worse then the other.\n\n> resulted in Pleroma blanking out the `info` structure for users as\n> they performed new activities that incremented that counter. At that\n> time, Pleroma maintained various things like private keys used to sign\n> things in that JSONB column (we no longer do this because of being\n> burned by this several times now), which broke federation temporarily\n> for the affected accounts with other servers for up to a week as those\n> servers had to learn new public keys for those accounts (since the\n> original private keys were lost).\n> \n> I believe that anything that can be catastrophically broken by users\n> not following upgrade instructions precisely is a serious problem, and\n> can lead to serious problems. 
I am sure that this is not the only\n> project using JSONB which have had users destroy their own data in\n> such a completely preventable fashion.\n> \n> Ariadne\n> \n> \n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 16:01:43 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n>\n> On 10/18/19 3:11 PM, Ariadne Conill wrote:\n> > Hello,\n> >\n> > On Fri, Oct 18, 2019 at 5:01 PM David G. Johnston\n> > <david.g.johnston@gmail.com> wrote:\n> >>\n> >> On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder <cmt@burggraben.net> wrote:\n> >>>\n> >>> ## Ariadne Conill (ariadne@dereferenced.org):\n> >>>\n> >>>> update users set info=jsonb_set(info, '{bar}', info->'foo');\n> >>>>\n> >>>> Typically, this works nicely, except for cases where evaluating\n> >>>> info->'foo' results in an SQL null being returned. When that happens,\n> >>>> jsonb_set() returns an SQL null, which then results in data loss.[3]\n> >>>\n> >>> So why don't you use the facilities of SQL to make sure to only\n> >>> touch the rows which match the prerequisites?\n> >>>\n> >>> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n> >>> WHERE info->'foo' IS NOT NULL;\n> >>>\n> >>\n> >> There are many ways to add code to queries to make working with this function safer - though using them presupposes one remembers at the time of writing the query that there is danger and caveats in using this function. I agree that we should have (and now) provided sane defined behavior when one of the inputs to the function is null instead blowing off the issue and defining the function as being strict. 
Whether that is \"ignore and return the original object\" or \"add the key with a json null scalar value\" is debatable but either is considerably more useful than returning SQL NULL.\n> >\n> > A great example of how we got burned by this last year: Pleroma\n> > maintains pre-computed counters in JSONB for various types of\n> > activities (posts, followers, followings). Last year, another counter\n> > was added, with a migration. But some people did not run the\n> > migration, because they are users, and that's what users do. This\n>\n> So you are more forgiving of your misstep, allowing users to run\n> outdated code, then of running afoul of Postgres documented behavior:\n\nI'm not forgiving of either.\n\n> https://www.postgresql.org/docs/11/functions-json.html\n> \" The field/element/path extraction operators return NULL, rather than\n> failing, if the JSON input does not have the right structure to match\n> the request; for example if no such element exists\"\n\nIt is known that the extraction operators return NULL. The problem\nhere is jsonb_set() returning NULL when it encounters SQL NULL.\n\n> Just trying to figure why one is worse then the other.\n\nAny time a user loses data, it is worse. The preference for not\nhaving data loss is why Pleroma uses PostgreSQL as it's database of\nchoice, as PostgreSQL has traditionally valued durability. If we\nshould not use PostgreSQL, just say so.\n\nAriadne\n\n>\n> > resulted in Pleroma blanking out the `info` structure for users as\n> > they performed new activities that incremented that counter. 
At that\n> > time, Pleroma maintained various things like private keys used to sign\n> > things in that JSONB column (we no longer do this because of being\n> > burned by this several times now), which broke federation temporarily\n> > for the affected accounts with other servers for up to a week as those\n> > servers had to learn new public keys for those accounts (since the\n> > original private keys were lost).\n> >\n> > I believe that anything that can be catastrophically broken by users\n> > not following upgrade instructions precisely is a serious problem, and\n> > can lead to serious problems. I am sure that this is not the only\n> > project using JSONB which have had users destroy their own data in\n> > such a completely preventable fashion.\n> >\n> > Ariadne\n> >\n> >\n> >\n>\n>\n> --\n> Adrian Klaver\n> adrian.klaver@aklaver.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 18:31:56 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Fri, Oct 18, 2019 at 5:57 PM Christoph Moench-Tegeder\n<cmt@burggraben.net> wrote:\n>\n> ## Ariadne Conill (ariadne@dereferenced.org):\n>\n> > Why don't we fix the database engine to not eat data when the\n> > jsonb_set() operation fails?\n>\n> It didn't fail, it worked like SQL (you've been doing SQL for too\n> long when you get used to the NULL propagation, but that's still\n> what SQL does - check \"+\" for example).\n> And changing a function will cause fun for everyone who relies on\n> the current behaviour - so at least it shouldn't be done on a whim\n> (some might argue that a whim was what got us into this situation\n> in the first place).\n\nNULL propagation makes sense in the context of traditional SQL. What\nusers expect from the JSONB support is for it to behave as JSON\nmanipulation behaves everywhere else. 
It makes sense that 2 + NULL\nreturns NULL -- it's easily understood as a type mismatch. It does\nnot make sense that jsonb_set('{}'::jsonb, '{foo}', NULL) returns NULL\nbecause a *value* was SQL NULL. In this case, it should, at the\nleast, automatically coalesce to 'null'::jsonb.\n\n> Continuing along that thought, I'd even argue that your are\n> writing code which relies on properties of the data which you never\n> guaranteed. There is a use case for data types and constraints.\n\nThere is a use case, but this frequently comes up as a question people\nask. At some point, you have to start pondering whether the behaviour\ndoes not make logical sense in the context that people frame the JSONB\ntype and it's associated manipulation functions. It is not *obvious*\nthat jsonb_set() will trash your data, but that is what it is capable\nof doing. In a database that is advertised as being durable and not\ntrashing data, even.\n\n> Not that I'm arguing for maximum surprise in programming, but\n> I'm a little puzzled when people eschew thew built-in tools and\n> start implmenting them again side-to-side with what's already\n> there.\n\nIf you read the safe_jsonb_set() function, all we do is coalesce any\nSQL NULL to 'null'::jsonb, which is what it should be doing anyway,\nand then additionally handling any *unanticipated* failure case on top\nof that. While you are arguing that we should use tools to work\naround unanticipated effects (that are not even documented -- in no\nplace in the jsonb_set() documentation does it say \"if you pass SQL\nNULL to this function as a value, it will return SQL NULL\"), I am\narguing that jsonb_set() shouldn't set people up for their data to be\ntrashed in the first place.\n\nEven MySQL gets this right. MySQL! The database that everyone knows\ntakes your data out for a night it will never forget. This argument\nis miserable. 
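For reference, the whole workaround boils down to something like this (a sketch; the exact function we ship also traps other unexpected failure cases):

```sql
-- Sketch of a NULL-safe wrapper: coalesce an SQL NULL value to a JSON
-- null scalar before handing it to the strict jsonb_set().
CREATE OR REPLACE FUNCTION safe_jsonb_set(target jsonb, path text[], new_value jsonb)
RETURNS jsonb AS $$
  SELECT jsonb_set(target, path, COALESCE(new_value, 'null'::jsonb));
$$ LANGUAGE sql IMMUTABLE;
```
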
It does not matter to me how jsonb_set() works as long\nas it does not return NULL when passed NULL as a value to set. JSONB\ncolumns should be treated as the complex types that they are: you\ndon't null out an entire hash table because someone set a key to SQL\nNULL. So, please, let us fix this.\n\nAriadne\n\n\n", "msg_date": "Fri, 18 Oct 2019 18:45:26 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Greetings,\n\n* Ariadne Conill (ariadne@dereferenced.org) wrote:\n> On Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n> > https://www.postgresql.org/docs/11/functions-json.html\n> > \" The field/element/path extraction operators return NULL, rather than\n> > failing, if the JSON input does not have the right structure to match\n> > the request; for example if no such element exists\"\n> \n> It is known that the extraction operators return NULL. The problem\n> here is jsonb_set() returning NULL when it encounters SQL NULL.\n> \n> > Just trying to figure why one is worse then the other.\n> \n> Any time a user loses data, it is worse. The preference for not\n> having data loss is why Pleroma uses PostgreSQL as it's database of\n> choice, as PostgreSQL has traditionally valued durability. 
If we\n> should not use PostgreSQL, just say so.\n\nYour contention that the documented, clear, and easily addressed\nbehavior of a particular strict function equates to \"the database system\nloses data and isn't durable\" is really hurting your arguments here, not\nhelping it.\n\nThe argument about how it's unintuitive and can cause application\ndevelopers to misuse the function (which is clearly an application bug,\nbut perhaps an understandable one if the function interface isn't\nintuitive or is confusing) is a reasonable one and might be convincing\nenough to result in a change here.\n\nI'd suggest sticking to the latter argument when making this case.\n\n> > > I believe that anything that can be catastrophically broken by users\n> > > not following upgrade instructions precisely is a serious problem, and\n> > > can lead to serious problems. I am sure that this is not the only\n> > > project using JSONB which have had users destroy their own data in\n> > > such a completely preventable fashion.\n\nLet's be clear here that the issue with the upgrade instructions was\nthat the user didn't follow your *application's* upgrade instructions,\nand your later code wasn't written to use the function, as documented,\nproperly- this isn't a case of PG destroying your data. It's fine to\ncontend that the interface sucks and that we should change it, but the\nargument that PG is eating data because the application sent a query to\nthe database telling it, based on our documentation, to eat the data,\nisn't appropriate. 
Again, let's have a reasonable discussion here about\nif it makes sense to make a change here because the interface isn't\nintuitive and doesn't match what other systems do (I'm guessing it isn't\nin the SQL standard either, so we unfortunately can't look to that for\nhelp; though I'd hardly be surprised if they supported what PG does\ntoday anyway).\n\nAs a practical response to the issue you've raised- have you considered\nusing a trigger to check the validity of the new jsonb? Or, maybe, just\nmade the jsonb column not nullable? With a trigger you could disallow\nnon-null->null transitions, for example, or if it just shouldn't ever\nbe null then making the column 'not null' would suffice.\n\nI'll echo Christoph's comments up thread too, though in my own language-\nthese are risks you've explicitly accepted by using JSONB and writing\nyour own validation and checks (or, not, apparently) rather than using\nwhat the database system provides. That doesn't mean I'm against\nmaking the change you suggest, but it certainly should become a lesson\nto anyone who is considering using primarily jsonb for their storage\nthat it's risky to do so, because you're removing the database system's\nknowledge and understanding of the data, and further you tend to end up\nnot having the necessary constraints in place to ensure that the data\ndoesn't end up being garbage- thus letting your application destroy all\nthe data easily due to an application bug.\n\nThanks,\n\nStephen", "msg_date": "Fri, 18 Oct 2019 19:51:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Greetings,\n\n* Ariadne Conill (ariadne@dereferenced.org) wrote:\n> On Fri, Oct 18, 2019 at 5:57 PM Christoph Moench-Tegeder\n> <cmt@burggraben.net> wrote:\n> > ## Ariadne Conill (ariadne@dereferenced.org):\n> > > Why don't we fix the database engine to not eat data when the\n> > > jsonb_set() 
operation fails?\n> >\n> > It didn't fail, it worked like SQL (you've been doing SQL for too\n> > long when you get used to the NULL propagation, but that's still\n> > what SQL does - check \"+\" for example).\n> > And changing a function will cause fun for everyone who relies on\n> > the current behaviour - so at least it shouldn't be done on a whim\n> > (some might argue that a whim was what got us into this situation\n> > in the first place).\n> \n> NULL propagation makes sense in the context of traditional SQL. What\n> users expect from the JSONB support is for it to behave as JSON\n> manipulation behaves everywhere else. It makes sense that 2 + NULL\n> returns NULL -- it's easily understood as a type mismatch. It does\n> not make sense that jsonb_set('{}'::jsonb, '{foo}', NULL) returns NULL\n> because a *value* was SQL NULL. In this case, it should, at the\n> least, automatically coalesce to 'null'::jsonb.\n\n2 + NULL isn't a type mismatch, just to be clear, it's \"2 + unknown =\nunknown\", which is pretty reasonable, if you accept the general notion\nof what NULL is to begin with.\n\nAnd as such, what follows with \"set this blob of stuff to include this\nunknown thing ... implies ... we don't know what the result of the set\nis and therefore it's unknown\" isn't entirely unreasonable, but I can\nagree that in this specific case, because what we're dealing with\nregarding JSONB is a complex data structure, not an individual value,\nthat it's surprising to a developer and there can be an argument made\nthere that we should consider changing it.\n\n> > Continuing along that thought, I'd even argue that your are\n> > writing code which relies on properties of the data which you never\n> > guaranteed. There is a use case for data types and constraints.\n> \n> There is a use case, but this frequently comes up as a question people\n> ask. 
At some point, you have to start pondering whether the behaviour\n> does not make logical sense in the context that people frame the JSONB\n> type and it's associated manipulation functions. It is not *obvious*\n> that jsonb_set() will trash your data, but that is what it is capable\n> of doing. In a database that is advertised as being durable and not\n> trashing data, even.\n\nHaving the result of a call to a strict function be NULL isn't\n\"trashing\" your data.\n\n> > Not that I'm arguing for maximum surprise in programming, but\n> > I'm a little puzzled when people eschew thew built-in tools and\n> > start implmenting them again side-to-side with what's already\n> > there.\n> \n> If you read the safe_jsonb_set() function, all we do is coalesce any\n> SQL NULL to 'null'::jsonb, which is what it should be doing anyway,\n\nI'm not convinced that this is at all the right answer, particularly\nsince we don't do that generally. We don't return the string 'null'\nwhen you do: NULL || 'abc', we return NULL. There might be something we\ncan do here that doesn't result in the whole jsonb document becoming\nNULL though.\n\n> and then additionally handling any *unanticipated* failure case on top\n> of that. While you are arguing that we should use tools to work\n> around unanticipated effects (that are not even documented -- in no\n> place in the jsonb_set() documentation does it say \"if you pass SQL\n> NULL to this function as a value, it will return SQL NULL\"), I am\n> arguing that jsonb_set() shouldn't set people up for their data to be\n> trashed in the first place.\n\nThe function is marked as strict, and the meaning of that is quite\nclearly defined in the documentation. 
I'm not against including a\ncomment regarding this in the documentation, to be clear, though I\nseriously doubt it would actually have changed anything in this case.\n\nThanks,\n\nStephen", "msg_date": "Fri, 18 Oct 2019 20:01:03 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/18/19 4:31 PM, Ariadne Conill wrote:\n> Hello,\n> \n> On Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n>>\n>> On 10/18/19 3:11 PM, Ariadne Conill wrote:\n>>> Hello,\n>>>\n>>> On Fri, Oct 18, 2019 at 5:01 PM David G. Johnston\n>>> <david.g.johnston@gmail.com> wrote:\n>>>>\n>>>> On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder <cmt@burggraben.net> wrote:\n>>>>>\n>>>>> ## Ariadne Conill (ariadne@dereferenced.org):\n>>>>>\n>>>>>> update users set info=jsonb_set(info, '{bar}', info->'foo');\n>>>>>>\n>>>>>> Typically, this works nicely, except for cases where evaluating\n>>>>>> info->'foo' results in an SQL null being returned. When that happens,\n>>>>>> jsonb_set() returns an SQL null, which then results in data loss.[3]\n>>>>>\n>>>>> So why don't you use the facilities of SQL to make sure to only\n>>>>> touch the rows which match the prerequisites?\n>>>>>\n>>>>> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n>>>>> WHERE info->'foo' IS NOT NULL;\n>>>>>\n>>>>\n>>>> There are many ways to add code to queries to make working with this function safer - though using them presupposes one remembers at the time of writing the query that there is danger and caveats in using this function. I agree that we should have (and now) provided sane defined behavior when one of the inputs to the function is null instead blowing off the issue and defining the function as being strict. 
Whether that is \"ignore and return the original object\" or \"add the key with a json null scalar value\" is debatable but either is considerably more useful than returning SQL NULL.\n>>>\n>>> A great example of how we got burned by this last year: Pleroma\n>>> maintains pre-computed counters in JSONB for various types of\n>>> activities (posts, followers, followings). Last year, another counter\n>>> was added, with a migration. But some people did not run the\n>>> migration, because they are users, and that's what users do. This\n>>\n>> So you are more forgiving of your misstep, allowing users to run\n>> outdated code, then of running afoul of Postgres documented behavior:\n> \n> I'm not forgiving of either.\n> \n>> https://www.postgresql.org/docs/11/functions-json.html\n>> \" The field/element/path extraction operators return NULL, rather than\n>> failing, if the JSON input does not have the right structure to match\n>> the request; for example if no such element exists\"\n> \n> It is known that the extraction operators return NULL. The problem\n> here is jsonb_set() returning NULL when it encounters SQL NULL.\n\nI'm not following. Your original case was:\n\njsonb_set(info, '{bar}', info->'foo');\n\nwhere info->'foo' is equivalent to:\n\ntest=# select '{\"f1\":1,\"f2\":null}'::jsonb ->'f3';\n ?column?\n----------\n NULL\n\nSo you know there is a possibility that a value extraction could return \nNULL and from your wrapper that COALESCE is the way to deal with this.\n\n\n> \n>> Just trying to figure why one is worse then the other.\n> \n> Any time a user loses data, it is worse. The preference for not\n> having data loss is why Pleroma uses PostgreSQL as it's database of\n> choice, as PostgreSQL has traditionally valued durability. 
If we\n> should not use PostgreSQL, just say so.\n\nThere are any number of ways you can make Postgres lose data that are \nnot related to durability e.g build the following in code:\n\nDELETE FROM some_table;\n\nand forget the WHERE.\n\n> \n> Ariadne\n> \n>>\n>>> resulted in Pleroma blanking out the `info` structure for users as\n>>> they performed new activities that incremented that counter. At that\n>>> time, Pleroma maintained various things like private keys used to sign\n>>> things in that JSONB column (we no longer do this because of being\n>>> burned by this several times now), which broke federation temporarily\n>>> for the affected accounts with other servers for up to a week as those\n>>> servers had to learn new public keys for those accounts (since the\n>>> original private keys were lost).\n>>>\n>>> I believe that anything that can be catastrophically broken by users\n>>> not following upgrade instructions precisely is a serious problem, and\n>>> can lead to serious problems. 
I am sure that this is not the only\n>>> project using JSONB which have had users destroy their own data in\n>>> such a completely preventable fashion.\n>>>\n>>> Ariadne\n>>>\n>>>\n>>>\n>>\n>>\n>> --\n>> Adrian Klaver\n>> adrian.klaver@aklaver.com\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Fri, 18 Oct 2019 17:04:21 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Fri, Oct 18, 2019 at 6:52 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Ariadne Conill (ariadne@dereferenced.org) wrote:\n> > On Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n> > > https://www.postgresql.org/docs/11/functions-json.html\n> > > \" The field/element/path extraction operators return NULL, rather than\n> > > failing, if the JSON input does not have the right structure to match\n> > > the request; for example if no such element exists\"\n> >\n> > It is known that the extraction operators return NULL. The problem\n> > here is jsonb_set() returning NULL when it encounters SQL NULL.\n> >\n> > > Just trying to figure why one is worse then the other.\n> >\n> > Any time a user loses data, it is worse. The preference for not\n> > having data loss is why Pleroma uses PostgreSQL as it's database of\n> > choice, as PostgreSQL has traditionally valued durability. 
If we\n> > should not use PostgreSQL, just say so.\n>\n> Your contention that the documented, clear, and easily addressed\n> behavior of a particular strict function equates to \"the database system\n> loses data and isn't durable\" is really hurting your arguments here, not\n> helping it.\n>\n> The argument about how it's unintuitive and can cause application\n> developers to misuse the function (which is clearly an application bug,\n> but perhaps an understandable one if the function interface isn't\n> intuitive or is confusing) is a reasonable one and might be convincing\n> enough to result in a change here.\n>\n> I'd suggest sticking to the latter argument when making this case.\n>\n> > > > I believe that anything that can be catastrophically broken by users\n> > > > not following upgrade instructions precisely is a serious problem, and\n> > > > can lead to serious problems. I am sure that this is not the only\n> > > > project using JSONB which have had users destroy their own data in\n> > > > such a completely preventable fashion.\n>\n> Let's be clear here that the issue with the upgrade instructions was\n> that the user didn't follow your *application's* upgrade instructions,\n> and your later code wasn't written to use the function, as documented,\n> properly- this isn't a case of PG destroying your data. It's fine to\n> contend that the interface sucks and that we should change it, but the\n> argument that PG is eating data because the application sent a query to\n> the database telling it, based on our documentation, to eat the data,\n> isn't appropriate. 
Again, let's have a reasonable discussion here about\n> if it makes sense to make a change here because the interface isn't\n> intuitive and doesn't match what other systems do (I'm guessing it isn't\n> in the SQL standard either, so we unfortunately can't look to that for\n> help; though I'd hardly be surprised if they supported what PG does\n> today anyway).\n\nOkay, I will admit that saying PG is eating data is perhaps\nhyperbolic, but I will also say that the behaviour of jsonb_set()\nunder this type of edge case is unintuitive and frequently results in\nunintended data loss. So, while PostgreSQL is not actually eating the\ndata, it is putting the user in a position where they may suffer data\nloss if they are not extremely careful.\n\nHere is how other implementations handle this case:\n\nMySQL/MariaDB:\n\nselect json_set('{\"a\":1,\"b\":2,\"c\":3}', '$.a', NULL) results in:\n {\"a\":null,\"b\":2,\"c\":3}\n\nMicrosoft SQL Server:\n\nselect json_modify('{\"a\":1,\"b\":2,\"c\":3}', '$.a', NULL) results in:\n {\"b\":2,\"c\":3}\n\nBoth of these outcomes make sense, given the nature of JSON objects.\nI am actually more in favor of what MSSQL does, however; I think that\nmakes the most sense of all.\n\nI did not compare to other database systems, because using them I\nfound that there is a JSON_TABLE type function and then you do stuff\nwith that to rewrite the object and dump it back out as JSON, and it's\nquite a mess. But MySQL and MSSQL have an equivalent jsonb inline\nmodification function, as seen above.\n\n> As a practical response to the issue you've raised- have you considered\n> using a trigger to check the validity of the new jsonb? Or, maybe, just\n> made the jsonb column not nullable? With a trigger you could disallow\n> non-null->null transitions, for example, or if it just shouldn't ever\n> be null then making the column 'not null' would suffice.\n\nWe have already mitigated the issue in a way we find appropriate to\ndo. 
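For reference, the mitigation boils down to a COALESCE guard around the call; a minimal sketch against the earlier example (the column and key names are illustrative, not our actual schema):

```sql
-- Coalesce the value being written to a JSON null so the strict
-- jsonb_set() never sees SQL NULL, and fall back to the original
-- document as a belt-and-braces measure.
UPDATE users
SET info = COALESCE(
        jsonb_set(info, '{bar}', COALESCE(info->'foo', 'null'::jsonb)),
        info);
```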
The suggestion of having a non-null constraint does seem useful\nas well and I will look into that.\n\n> I'll echo Christoph's comments up thread too, though in my own language-\n> these are risks you've explicitly accepted by using JSONB and writing\n> your own validation and checks (or, not, apparently) rather than using\n> what the database system provides. That doesn't mean I'm against\n> making the change you suggest, but it certainly should become a lesson\n> to anyone who is considering using primarily jsonb for their storage\n> that it's risky to do so, because you're removing the database system's\n> knowledge and understanding of the data, and further you tend to end up\n> not having the necessary constraints in place to ensure that the data\n> doesn't end up being garbage- thus letting your application destroy all\n> the data easily due to an application bug.\n\nAriadne\n\n\n", "msg_date": "Fri, 18 Oct 2019 21:14:09 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Fri, Oct 18, 2019 at 7:04 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n>\n> On 10/18/19 4:31 PM, Ariadne Conill wrote:\n> > Hello,\n> >\n> > On Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n> >>\n> >> On 10/18/19 3:11 PM, Ariadne Conill wrote:\n> >>> Hello,\n> >>>\n> >>> On Fri, Oct 18, 2019 at 5:01 PM David G. Johnston\n> >>> <david.g.johnston@gmail.com> wrote:\n> >>>>\n> >>>> On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder <cmt@burggraben.net> wrote:\n> >>>>>\n> >>>>> ## Ariadne Conill (ariadne@dereferenced.org):\n> >>>>>\n> >>>>>> update users set info=jsonb_set(info, '{bar}', info->'foo');\n> >>>>>>\n> >>>>>> Typically, this works nicely, except for cases where evaluating\n> >>>>>> info->'foo' results in an SQL null being returned. 
When that happens,\n> >>>>>> jsonb_set() returns an SQL null, which then results in data loss.[3]\n> >>>>>\n> >>>>> So why don't you use the facilities of SQL to make sure to only\n> >>>>> touch the rows which match the prerequisites?\n> >>>>>\n> >>>>> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n> >>>>> WHERE info->'foo' IS NOT NULL;\n> >>>>>\n> >>>>\n> >>>> There are many ways to add code to queries to make working with this function safer - though using them presupposes one remembers at the time of writing the query that there is danger and caveats in using this function. I agree that we should have (and now) provided sane defined behavior when one of the inputs to the function is null instead blowing off the issue and defining the function as being strict. Whether that is \"ignore and return the original object\" or \"add the key with a json null scalar value\" is debatable but either is considerably more useful than returning SQL NULL.\n> >>>\n> >>> A great example of how we got burned by this last year: Pleroma\n> >>> maintains pre-computed counters in JSONB for various types of\n> >>> activities (posts, followers, followings). Last year, another counter\n> >>> was added, with a migration. But some people did not run the\n> >>> migration, because they are users, and that's what users do. This\n> >>\n> >> So you are more forgiving of your misstep, allowing users to run\n> >> outdated code, then of running afoul of Postgres documented behavior:\n> >\n> > I'm not forgiving of either.\n> >\n> >> https://www.postgresql.org/docs/11/functions-json.html\n> >> \" The field/element/path extraction operators return NULL, rather than\n> >> failing, if the JSON input does not have the right structure to match\n> >> the request; for example if no such element exists\"\n> >\n> > It is known that the extraction operators return NULL. The problem\n> > here is jsonb_set() returning NULL when it encounters SQL NULL.\n>\n> I'm not following. 
Your original case was:\n>\n> jsonb_set(info, '{bar}', info->'foo');\n>\n> where info->'foo' is equivalent to:\n>\n> test=# select '{\"f1\":1,\"f2\":null}'::jsonb ->'f3';\n> ?column?\n> ----------\n> NULL\n>\n> So you know there is a possibility that a value extraction could return\n> NULL and from your wrapper that COALESCE is the way to deal with this.\n\nYou're not following because you don't want to follow.\n\nIt does not matter that info->'foo' is in my example. That's not what\nI am talking about.\n\nWhat I am talking about is that jsonb_set(..., ..., NULL) returns SQL NULL.\n\npostgres=# \\pset null '(null)'\nNull display is \"(null)\".\npostgres=# select jsonb_set('{\"a\":1,\"b\":2,\"c\":3}'::jsonb, '{a}', NULL);\njsonb_set\n-----------\n(null)\n(1 row)\n\nThis behaviour is basically giving an application developer a loaded\nshotgun and pointing it at their feet. It is not a good design. It\nis a design which has likely lead to many users experiencing\nunintentional data loss.\n\nAriadne\n\n\n", "msg_date": "Fri, 18 Oct 2019 21:18:41 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hi\n\n\n> What I am talking about is that jsonb_set(..., ..., NULL) returns SQL NULL.\n>\n> postgres=# \\pset null '(null)'\n> Null display is \"(null)\".\n> postgres=# select jsonb_set('{\"a\":1,\"b\":2,\"c\":3}'::jsonb, '{a}', NULL);\n> jsonb_set\n> -----------\n> (null)\n> (1 row)\n>\n> This behaviour is basically giving an application developer a loaded\n> shotgun and pointing it at their feet. It is not a good design. 
It\n> is a design which has likely lead to many users experiencing\n> unintentional data loss.\n>\n\nOn the other hand, the PostgreSQL design is one of the possible designs - it\nreturns additional information about whether the value was changed or not.\n\nUnfortunately, it is very unlikely that the design of this function will be\nchanged - it is simply not a bug (although I fully agree that it behaves\ndifferently than other databases and that for some usages it is not\npractical). There will probably be some applications that need a NULL result\nin situations when the value was not changed or when the input value does\nnot have the expected format. The design used in Postgres allows later\ncustomization - you can implement the behaviour that you want very simply\nwith COALESCE (sure, you have to know what you are doing). If Postgres\nimplemented the design used by MySQL, then there would be no possibility to\nreact to the situation when the update is not processed.\n\nIt is not hard to implement a second function with a different name that has\nthe behaviour that you need and expect - it is just\n\nCREATE OR REPLACE FUNCTION jsonb_modify(jsonb, text[], jsonb)\nRETURNS jsonb AS $$\nSELECT jsonb_set($1, $2, COALESCE($3, 'null'::jsonb), true);\n$$ LANGUAGE sql;\n\nIt is important to understand that JSON null is not PostgreSQL NULL. In this\ncase the problem is not in the PostgreSQL design, because it is consistent\nwith everything else in PG, but in wrong expectations. Unfortunately, there\nare a lot of wrong expectations, and these cannot be covered by the Postgres\ndesign, because then Postgres would be very inconsistent software. You can\nsee - my function jsonb_modify is what you expect, and it can work for you\nperfectly, but from a system perspective it is not consistent, and very\nstrongly so. Users should not have to learn where NULL behaves differently\nor where NULL is a JSON null. Built-in functions should be consistent in\nPostgres. 
It is Postgres, not other databases.\n\nPavel\n\n\n\n\n\n> Ariadne\n>\n>\n>\n", "msg_date": "Sat, 19 Oct 2019 06:17:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Friday, October 18, 2019, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n\n> Probably there will be some applications that needs NULL result in\n> situations when value was not changed or when input value has not expected\n> format. Design using in Postgres allows later customization - you can\n> implement with COALESCE very simply behave that you want (sure, you have to\n> know what you do). If Postgres implement design used by MySQL, then there\n> is not any possibility to react on situation when update is not processed.\n>\n\nA CASE expression seems like it would work well for such detection in the\nrare case it is needed. Current behavior is unsafe with minimal or no\nredeeming qualities. Change it so passing in null raises an exception and\nmake the user decide their own behavior if we don’t want to choose one for\nthem.\n\nDavid J.\n", "msg_date": "Fri, 18 Oct 2019 22:41:24 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "so 19. 10. 2019 v 7:41 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Friday, October 18, 2019, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>\n>> Probably there will be some applications that needs NULL result in\n>> situations when value was not changed or when input value has not expected\n>> format. Design using in Postgres allows later customization - you can\n>> implement with COALESCE very simply behave that you want (sure, you have to\n>> know what you do). If Postgres implement design used by MySQL, then there\n>> is not any possibility to react on situation when update is not processed.\n>>\n>\n> A CASE expression seems like it would work well for such detection in the\n> rare case it is needed. Current behavior is unsafe with minimal or no\n> redeeming qualities. Change it so passing in null raises an exception and\n> make the user decide their own behavior if we don’t want to choose one for\n> them.\n>\n\nHow can you do it? Built-in functions cannot return more than one value.\nNULL is one possible signal for emitting this information.\n\nThe NULL value can be a problem everywhere - and it is not consistent to\nraise an exception somewhere and not elsewhere.\n\nI agree that the safe way is raising an exception on NULL. 
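A sketch of such a wrapper (the function name is only illustrative, it is not a built-in):

```sql
CREATE OR REPLACE FUNCTION jsonb_set_checked(target jsonb, path text[],
                                             new_value jsonb,
                                             create_missing boolean DEFAULT true)
RETURNS jsonb AS $$
BEGIN
  -- Refuse SQL NULL outright instead of propagating it.
  IF new_value IS NULL THEN
    RAISE EXCEPTION 'new_value must not be SQL NULL';
  END IF;
  RETURN jsonb_set(target, path, new_value, create_missing);
END;
$$ LANGUAGE plpgsql;
```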
Unfortunately,\nexception handling is pretty expensive in Postgres (more in write\ntransactions), so it should be used only when it is really necessary.\n\n\n\n\n\n> David J.\n>\n>\n", "msg_date": "Sat, 19 Oct 2019 07:52:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Sat, Oct 19, 2019 at 12:52 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> so 19. 10. 2019 v 7:41 odesílatel David G. 
Johnston <david.g.johnston@gmail.com> napsal:\n>>\n>> On Friday, October 18, 2019, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>\n>>>\n>>> Probably there will be some applications that needs NULL result in situations when value was not changed or when input value has not expected format. Design using in Postgres allows later customization - you can implement with COALESCE very simply behave that you want (sure, you have to know what you do). If Postgres implement design used by MySQL, then there is not any possibility to react on situation when update is not processed.\n>>\n>>\n>> A CASE expression seems like it would work well for such detection in the rare case it is needed. Current behavior is unsafe with minimal or no redeeming qualities. Change it so passing in null raises an exception and make the user decide their own behavior if we don’t want to choose one for them.\n>\n>\n> How you can do it? Buildn functions cannot to return more than one value. The NULL is one possible signal how to emit this informations.\n>\n> The NULL value can be problem everywhere - and is not consistent to raise exception somewhere and elsewhere not.\n>\n> I agree so the safe way is raising exception on NULL. Unfortunately, exception handling is pretty expensive in Postres (more in write transactions), so it should be used only when it is really necessary.\n\nI would say that anything like\n\nupdate whatever set column=jsonb_set(column, '{foo}', NULL)\n\nshould throw an exception. 
It should do, literally, *anything* else\nbut blank that column.\n\nAriadne\n\n\n", "msg_date": "Sat, 19 Oct 2019 01:52:11 -0500", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Fri, Oct 18, 2019 at 09:14:09PM -0500, Ariadne Conill wrote:\n>Hello,\n>\n>On Fri, Oct 18, 2019 at 6:52 PM Stephen Frost <sfrost@snowman.net> wrote:\n>>\n>> Greetings,\n>>\n>> * Ariadne Conill (ariadne@dereferenced.org) wrote:\n>> > On Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n>> > > https://www.postgresql.org/docs/11/functions-json.html\n>> > > \" The field/element/path extraction operators return NULL, rather than\n>> > > failing, if the JSON input does not have the right structure to match\n>> > > the request; for example if no such element exists\"\n>> >\n>> > It is known that the extraction operators return NULL. The problem\n>> > here is jsonb_set() returning NULL when it encounters SQL NULL.\n>> >\n>> > > Just trying to figure why one is worse then the other.\n>> >\n>> > Any time a user loses data, it is worse. The preference for not\n>> > having data loss is why Pleroma uses PostgreSQL as it's database of\n>> > choice, as PostgreSQL has traditionally valued durability. 
If we\n>> > should not use PostgreSQL, just say so.\n>>\n>> Your contention that the documented, clear, and easily addressed\n>> behavior of a particular strict function equates to \"the database system\n>> loses data and isn't durable\" is really hurting your arguments here, not\n>> helping it.\n>>\n>> The argument about how it's unintuitive and can cause application\n>> developers to misuse the function (which is clearly an application bug,\n>> but perhaps an understandable one if the function interface isn't\n>> intuitive or is confusing) is a reasonable one and might be convincing\n>> enough to result in a change here.\n>>\n>> I'd suggest sticking to the latter argument when making this case.\n>>\n>> > > > I believe that anything that can be catastrophically broken by users\n>> > > > not following upgrade instructions precisely is a serious problem, and\n>> > > > can lead to serious problems. I am sure that this is not the only\n>> > > > project using JSONB which have had users destroy their own data in\n>> > > > such a completely preventable fashion.\n>>\n>> Let's be clear here that the issue with the upgrade instructions was\n>> that the user didn't follow your *application's* upgrade instructions,\n>> and your later code wasn't written to use the function, as documented,\n>> properly- this isn't a case of PG destroying your data. It's fine to\n>> contend that the interface sucks and that we should change it, but the\n>> argument that PG is eating data because the application sent a query to\n>> the database telling it, based on our documentation, to eat the data,\n>> isn't appropriate. 
Again, let's have a reasonable discussion here about\n>> if it makes sense to make a change here because the interface isn't\n>> intuitive and doesn't match what other systems do (I'm guessing it isn't\n>> in the SQL standard either, so we unfortunately can't look to that for\n>> help; though I'd hardly be surprised if they supported what PG does\n>> today anyway).\n>\n>Okay, I will admit that saying PG is eating data is perhaps\n>hyperbolic,\n\nMy experience is that using such hyperbole is pretty detrimental, even\nwhen one is trying to make a pretty sensible case. The problem is that\npeople often respond with similarly hyperbolic claims, particularly when\nyou hit a nerve. And that's exactly what happened here, because we're\n*extremely* sensitive about data corruption issues, so when you claim\nPostgreSQL is \"eating data\" people are likely to jump on you, beating\nyou with the documentation stick. It's unfortunate, but it's also\nentirely predictable.\n\n>but I will also say that the behaviour of jsonb_set()\n>under this type of edge case is unintuitive and frequently results in\n>unintended data loss. So, while PostgreSQL is not actually eating the\n>data, it is putting the user in a position where they may suffer data\n>loss if they are not extremely careful.\n>\n>Here is how other implementations handle this case:\n>\n>MySQL/MariaDB:\n>\n>select json_set('{\"a\":1,\"b\":2,\"c\":3}', '$.a', NULL) results in:\n> {\"a\":null,\"b\":2,\"c\":3}\n>\n>Microsoft SQL Server:\n>\n>select json_modify('{\"a\":1,\"b\":2,\"c\":3}', '$.a', NULL) results in:\n> {\"b\":2,\"c\":3}\n>\n>Both of these outcomes make sense, given the nature of JSON objects.\n>I am actually more in favor of what MSSQL does however, I think that\n>makes the most sense of all.\n>\n\nI do mostly agree with this. The json[b]_set behavior seems rather\nsurprising, and I think I've seen a couple of cases running into exactly\nthis issue. 
I've solved that with a simple CASE, but maybe changing the\nbehavior would be better. That's unlikely to be back-patchable, though,\nso maybe a better option is to create non-strict wrappers. But that\ndoes not work when the user is unaware of the behavior :-(\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Oct 2019 13:08:31 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "> On Sat, Oct 19, 2019 at 1:08 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> >Here is how other implementations handle this case:\n> >\n> >MySQL/MariaDB:\n> >\n> >select json_set('{\"a\":1,\"b\":2,\"c\":3}', '$.a', NULL) results in:\n> > {\"a\":null,\"b\":2,\"c\":3}\n> >\n> >Microsoft SQL Server:\n> >\n> >select json_modify('{\"a\":1,\"b\":2,\"c\":3}', '$.a', NULL) results in:\n> > {\"b\":2,\"c\":3}\n> >\n> >Both of these outcomes make sense, given the nature of JSON objects.\n> >I am actually more in favor of what MSSQL does however, I think that\n> >makes the most sense of all.\n> >\n>\n> I do mostly agree with this. The json[b]_set behavior seems rather\n> surprising, and I think I've seen a couple of cases running into exactly\n> this issue. I've solved that with a simple CASE, but maybe changing the\n> behavior would be better. That's unlikely to be back-patchable, though,\n> so maybe a better option is to create non-strict wrappers. But that\n> does not work when the user is unaware of the behavior :-(\n\nAgreed, that could be confusing. If I remember correctly, so far I've seen four\nor five such complaints in mailing lists, but of course the number of people who\ndidn't reach out to hackers is probably bigger.\n\nIf we want to change it, the question is where to stop? 
Essentially we have:\n\n    update table set data = some_func(data, some_args_with_null);\n\nwhere some_func happened to be jsonb_set, but could be any strict function.\n\nI wonder if in this case it makes sense to think about an alternative? For\nexample, there is the generic type subscripting patch, which allows updating a\njsonb in the following way:\n\n    update table set jsonb_data[key] = 'value';\n\nIt doesn't look like a function, so it's not a big deal if it handles NULL\nvalues differently. And at the same time one can argue that people who are\nnot aware of this caveat with jsonb_set and NULL values will most likely\nuse it due to a bit simpler syntax (more similar to some popular programming\nlanguages).\n\n\n", "msg_date": "Sat, 19 Oct 2019 15:32:30 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "## Ariadne Conill (ariadne@dereferenced.org):\n\n> NULL propagation makes sense in the context of traditional SQL. What\n> users expect from the JSONB support is for it to behave as JSON\n> manipulation behaves everywhere else.\n\nWell, some users expect that. Others are using this interface as it is\ndocumented and implemented right now. And that's what makes this a\nsomewhat difficult case: I wouldn't argue for one behaviour or the\nother if this was new functionality. But jsonb_set() was added in 9.5,\nand changing that behaviour now will make other people about as unhappy\nas you are right now.\nFurther, \"now\" is a rather flexible term: the function cannot be changed\n\"right now\" with the next bugfix release (may break existing applications,\ndeterring people from installing bugfixes: very bad) and there's about\nno way to get a new function into a bugfix release (catversion bump).\nThe next chance to do anything here is version 13, to be expected around\nthis time next year. 
This gives us ample time to think about a solution\nwhich is consistent and works for (almost) everyone - no need to force\na behaviour change in that function right now (and in case it comes to\nthat: which other json/jsonb-functions would be affected?).\n\nThat creates a kind of bind for your case: you cannot rely on the new\nbehaviour until the new version is in reasonably widespread use.\nDatabase servers are long-lived beasts - in the field, version 8.4\nhas finally mostly disappeared this year, but we still get some\nquestions about that version here on the lists (8.4 went EOL over\nfive years ago). At some point, you'll need to make a cut and require\nyour users to upgrade the database.\n\n> At some point, you have to start pondering whether the behaviour\n> does not make logical sense in the context that people frame the JSONB\n> type and it's associated manipulation functions.\n\nBut it does make sense from a SQL point of view - and this is a SQL\ndatabase. JSON is not SQL (the sheer amount of \"Note\" in between the\nJSON functions and operators documentation is proof of that) and not\nASN.1; \"people expect\" depends a lot on what kind of people you ask. \nNone of these expectations is \"right\" or \"wrong\" in an absolute manner.\nCode has to be \"absolute\" in order to be deterministic, and it should\ndo so in a way that is unsurprising to the least amount of users: I'm\nwilling to concede that jsonb_set() fails this test, but I'm still not\nconvinced that your approach is much better just because it fits your\nspecific use case.\n\n> It is not *obvious*\n> that jsonb_set() will trash your data, but that is what it is capable\n> of doing.\n\nIt didn't. The data still fit the constraints you put on it: none,\nunfortunately. 
Which leads me to the advice for the time being (until\nwe have this sorted out in one way or another, possibly the next\nmajor release): at least put a NOT NULL on columns which must be not\nNULL - that alone would have gone a long way to prevent the issues\nyou've unfortunately had. You could even put CHECK constraints on\nyour JSONB (like \"CHECK (j->'info' IS NOT NULL)\") to make sure it\nstays well-formed. As a SQL person, I'd even argue that you shouldn't\nuse JSON columns for key data - there is a certain mismatch between\nSQL and JSON, which will get you now and then, and once you've\nimplemented all the checks to be safe, you've built a type system\nwhen the database would have given you one for free. (And running\nUPDATEs inside your JSONB fields is not as efficient as on simple\ncolumns.)\nAnd finally, you might put some version information in your\ndatabase schema, so the application can check if all the necessary\ndata migrations have been run.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Sat, 19 Oct 2019 15:59:00 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Greetings,\n\n* Dmitry Dolgov (9erthalion6@gmail.com) wrote:\n> If we want to change it, the question is where to stop? Essentially we have:\n> \n>     update table set data = some_func(data, some_args_with_null);\n> \n> where some_func happened to be jsonb_set, but could be any strict function.\n\nI don't think it makes any sense to try and extrapolate this out to\nother strict functions. Functions should be strict when it makes sense\nfor them to be- in this case, it sounds like it doesn't really make\nsense for jsonb_set to be strict, and that's where we stop it.\n\n> I wonder if in this case it makes sense to think about an alternative? 
For\n> example, there is generic type subscripting patch, that allows to update a\n> jsonb in the following way:\n> \n> update table set jsonb_data[key] = 'value';\n> \n> It doesn't look like a function, so it's not a big deal if it will handle NULL\n> values differently. And at the same time one can argue, that people, who are\n> not aware about this caveat with jsonb_set and NULL values, will most likely\n> use it due to a bit simpler syntax (more similar to some popular programming\n> languages).\n\nThis seems like an entirely independent thing ...\n\nThanks,\n\nStephen", "msg_date": "Sat, 19 Oct 2019 11:21:26 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 10/18/19 3:10 PM, Mark Felder wrote:\n>\n> On Fri, Oct 18, 2019, at 12:37, Ariadne Conill wrote:\n>> Hello,\n>>\n>> I am one of the primary maintainers of Pleroma, a federated social\n>> networking application written in Elixir, which uses PostgreSQL in\n>> ways that may be considered outside the typical usage scenarios for\n>> PostgreSQL.\n>>\n>> Namely, we leverage JSONB heavily as a backing store for JSON-LD\n>> documents[1]. We also use JSONB in combination with Ecto's \"embedded\n>> structs\" to store things like user preferences.\n>>\n>> The fact that we can use JSONB to achieve our design goals is a\n>> testament to the flexibility PostgreSQL has.\n>>\n>> However, in the process of doing so, we have discovered a serious flaw\n>> in the way jsonb_set() functions, but upon reading through this\n>> mailing list, we have discovered that this flaw appears to be an\n>> intentional design.[2]\n>>\n>> A few times now, we have written migrations that do things like copy\n>> keys in a JSONB object to a new key, to rename them. 
These migrations\n>> look like so:\n>>\n>> update users set info=jsonb_set(info, '{bar}', info->'foo');\n>>\n>> Typically, this works nicely, except for cases where evaluating\n>> info->'foo' results in an SQL null being returned. When that happens,\n>> jsonb_set() returns an SQL null, which then results in data loss.[3]\n>>\n>> This is not acceptable. PostgreSQL is a database that is renowned for\n>> data integrity, but here it is wiping out data when it encounters a\n>> failure case. The way jsonb_set() should fail in this case is to\n>> simply return the original input: it should NEVER return SQL null.\n>>\n>> But hey, we've been burned by this so many times now that we'd like to\n>> donate a useful function to the commons, consider it a mollyguard for\n>> the real jsonb_set() function.\n>>\n>> create or replace function safe_jsonb_set(target jsonb, path\n>> text[], new_value jsonb, create_missing boolean default true) returns\n>> jsonb as $$\n>> declare\n>> result jsonb;\n>> begin\n>> result := jsonb_set(target, path, coalesce(new_value,\n>> 'null'::jsonb), create_missing);\n>> if result is NULL then\n>> return target;\n>> else\n>> return result;\n>> end if;\n>> end;\n>> $$ language plpgsql;\n>>\n>> This safe_jsonb_set() wrapper should not be necessary. PostgreSQL's\n>> own jsonb_set() should have this safety feature built in. Without it,\n>> using jsonb_set() is like playing russian roulette with your data,\n>> which is not a reasonable expectation for a database renowned for its\n>> commitment to data integrity.\n>>\n>> Please fix this bug so that we do not have to hack around this bug.\n>> It has probably ruined countless people's days so far. I don't want\n>> to hear about how the function is strict, I'm aware it is strict, and\n>> that strictness is harmful. Please fix the function so that it is\n>> actually safe to use.\n>>\n>> [1]: JSON-LD stands for JSON Linked Data. 
Pleroma has an \"internal\n>> representation\" that shares similar qualities to JSON-LD, so I use\n>> JSON-LD here as a simplification.\n>>\n>> [2]: https://www.postgresql.org/message-id/flat/qfkua9$2q0e$1@blaine.gmane.org\n>>\n>> [3]: https://git.pleroma.social/pleroma/pleroma/issues/1324 is an\n>> example of data loss induced by this issue.\n>>\n>> Ariadne\n>>\n> This should be directed towards the hackers list, too.\n>\n> What will it take to change the semantics of jsonb_set()? MySQL implements safe behavior here. It's a real shame Postgres does not. I'll offer a $200 bounty to whoever fixes it. I'm sure it's destroyed more than $200 worth of data and people's time by now, but it's something.\n>\n>\n\n\nThe hyperbole here is misplaced. There is a difference between a bug and\na POLA violation. This might be the latter, but it isn't the former. So\nplease tone it down a bit. It's not the function that's unsafe, but the\nill-informed use of it.\n\n\nWe invented jsonb_set() (credit to Dmitry Dolgov). And we've had it\nsince 9.5. That's five releases ago.  So it's a bit late to be coming to\nus telling us it's not safe (according to your preconceptions of what it\nshould be doing).\n\n\nWe could change it prospectively (i.e. from release 13 on) if we choose.\nBut absent an actual bug (i.e. acting contrary to documented behaviour)\nwe do not normally backpatch such changes, especially when there is a\nsimple workaround for the perceived problem. And it's that policy that\nis in large measure responsible for Postgres' deserved reputation for\nstability.\n\n\nIncidentally, why is your function written in plpgsql? 
Wouldn't a simple\nSQL wrapper be better?\n\n\n create or replace function safe_jsonb_set\n     (target jsonb, path text[], new_value jsonb, create_missing\n boolean default true)\n returns jsonb as\n $func$\n     select case when new_value is null then target else\n jsonb_set(target, path, new_value, create_missing) end\n $func$ language sql;\n\n\nAnd if we were to change it I'm not at all sure that we should do it the\nway that's suggested here, which strikes me as no more intuitive than\nthe current behaviour. Rather I think we should possibly fill in a json\nnull in the indicated place.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 19 Oct 2019 11:26:50 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sat, Oct 19, 2019 at 11:26:50AM -0400, Andrew Dunstan wrote:\n>\n> ...\n>\n>The hyperbole here is misplaced. There is a difference between a bug and\n>a POLA violation. This might be the latter, but it isn't the former. So\n>please tone it down a bit. It's not the function that's unsafe, but the\n>ill-informed use of it.\n>\n>\n>We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it\n>since 9.5. That's five releases ago.  So it's a bit late to be coming to\n>us telling us it's not safe (according to your preconceptions of what it\n>should be doing).\n>\n>\n>We could change it prospectively (i.e. from release 13 on) if we choose.\n>But absent an actual bug (i.e. acting contrary to documented behaviour)\n>we do not normally backpatch such changes, especially when there is a\n>simple workaround for the perceived problem. 
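A quick sanity check of the SQL wrapper quoted above, assuming it has been created as shown (a sketch; results are written as psql would print them):

```sql
-- SQL NULL as the new value: the CASE expression returns the target unchanged.
SELECT safe_jsonb_set('{"a": 1, "b": 2}'::jsonb, '{a}', NULL);
-- → {"a": 1, "b": 2}

-- A real jsonb value: behaves exactly like plain jsonb_set().
SELECT safe_jsonb_set('{"a": 1, "b": 2}'::jsonb, '{a}', '42'::jsonb);
-- → {"a": 42, "b": 2}
```

Note that a LANGUAGE sql function is not STRICT by default, which is what allows the CASE to see the NULL argument at all.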
And it's that policy that\n>is in large measure responsible for Postgres' deserved reputation for\n>stability.\n>\n\nYeah.\n\n>\n>Incidentally, why is your function written in plpgsql? Wouldn't a simple\n>SQL wrapper be better?\n>\n>\n> create or replace function safe_jsonb_set\n>     (target jsonb, path text[], new_value jsonb, create_missing\n> boolean default true)\n> returns jsonb as\n> $func$\n>     select case when new_value is null then target else\n> jsonb_set(target, path, new_value, create_missing) end\n> $func$ language sql;\n>\n>\n>And if we were to change it I'm not at all sure that we should do it the\n>way that's suggested here, which strikes me as no more intuitive than\n>the current behaviour. Rather I think we should possibly fill in a json\n>null in the indicated place.\n>\n\nNot sure, but that seems rather confusing to me, because it's mixing SQL\nNULL and JSON null, i.e. it's not clear to me why\n\n jsonb_set(..., \"...\", NULL)\n\nshould do the same thing as\n\n jsonb_set(..., \"...\", 'null'::jsonb)\n\nI'm not entirely surprised it's what MySQL does ;-) but I'd say treating\nit as a deletion of the key (just like MSSQL) is somewhat more sensible.\nBut I admit it's quite subjective.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 19 Oct 2019 18:18:51 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sat, Oct 19, 2019 at 11:21:26AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Dmitry Dolgov (9erthalion6@gmail.com) wrote:\n>> If we want to change it, the question is where to stop? 
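The SQL-NULL-versus-JSON-null distinction being discussed is easy to see directly in psql (a sketch; results written as psql would print them):

```sql
-- SQL NULL as the new value: jsonb_set() is STRICT, so the whole
-- expression evaluates to SQL NULL - in an UPDATE this wipes the column.
SELECT jsonb_set('{"a": 1, "b": 2}'::jsonb, '{a}', NULL);
-- → NULL

-- A JSON null as the new value: only the targeted key is affected.
SELECT jsonb_set('{"a": 1, "b": 2}'::jsonb, '{a}', 'null'::jsonb);
-- → {"a": null, "b": 2}
```

The two spellings look almost identical in application code, which is exactly why the strict behaviour catches people out.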
Essentially we have:\n>>\n>> update table set data = some_func(data, some_args_with_null);\n>>\n>> where some_func happened to be jsonb_set, but could be any strict function.\n>\n>I don't think it makes any sense to try and extrapolate this out to\n>other strict functions. Functions should be strict when it makes sense\n>for them to be- in this case, it sounds like it doesn't really make\n>sense for jsonb_set to be strict, and that's where we stop it.\n>\n\nYeah. I think the issue here is (partially) that other databases adopted\nsimilar functions after us, but decided to use a different behavior. It\nmight be more natural for the users, but that does not mean we should\nchange the other strict functions.\n\nPlus I'm not sure if the SQL standard says anything about strict functions\n(I found nothing, but I looked only very quickly), but I'm pretty sure\nwe can't change how basic operators behave, and we translate them to\nfunction calls (e.g. 1+2 is int4pl(1,2)).\n\n>> I wonder if in this case it makes sense to think about an alternative? For\n>> example, there is generic type subscripting patch, that allows to update a\n>> jsonb in the following way:\n>>\n>> update table set jsonb_data[key] = 'value';\n>>\n>> It doesn't look like a function, so it's not a big deal if it will handle NULL\n>> values differently. And at the same time one can argue, that people, who are\n>> not aware about this caveat with jsonb_set and NULL values, will most likely\n>> use it due to a bit simpler syntax (more similar to some popular programming\n>> languages).\n>\n>This seems like an entirely independent thing ...\n>\n\nRight. 
Useful, but entirely separate feature.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 19 Oct 2019 18:31:15 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 10/19/19 12:18 PM, Tomas Vondra wrote:\n> On Sat, Oct 19, 2019 at 11:26:50AM -0400, Andrew Dunstan wrote:\n>\n> Not sure, but that seems rather confusing to me, because it's mixing SQL\n> NULL and JSON null, i.e. it's not clear to me why\n>\n>    jsonb_set(..., \"...\", NULL)\n>\n> should do the same thing as\n>\n>    jsonb_set(..., \"...\", 'null':jsonb)\n>\n> I'm not entirely surprised it's what MySQL does ;-) but I'd say treating\n> it as a deletion of the key (just like MSSQL) is somewhat more sensible.\n> But I admit it's quite subjective.\n>\n\n\nThat's yet another variant, which just reinforces my view that there is\nno guaranteed-intuitive behaviour here.\n\n\nOTOH, to me, turning jsonb_set into jsonb_delete for some case seems ...\nodd.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 19 Oct 2019 12:32:10 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> >\n> >We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it\n> >since 9.5. That's five releases ago. 
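For reference, the MSSQL-style "treat NULL as a deletion" behaviour already exists in PostgreSQL as explicit jsonb operators, so nothing would be lost by keeping jsonb_set() away from deletion semantics (a sketch; results written as psql would print them):

```sql
-- The #- operator removes the element at the given path:
SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb #- '{a}';
-- → {"b": 2, "c": 3}

-- The - operator removes a top-level key by name:
SELECT '{"a": 1, "b": 2, "c": 3}'::jsonb - 'a';
-- → {"b": 2, "c": 3}
```

Callers who want deletion can say so explicitly, which keeps "set" and "delete" as distinct, unambiguous operations.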
So it's a bit late to be coming to\n> >us telling us it's not safe (according to your preconceptions of what it\n> >should be doing).\n> >\n>\n\nThere have been numerous complaints and questions about this behavior in\nthose five years; and none of the responses to those defenses has actually\nmade the current behavior sound beneficial but rather have simply said\n\"this is how it works, deal with it\".\n\n>\n> >We could change it prospectively (i.e. from release 13 on) if we choose.\n> >But absent an actual bug (i.e. acting contrary to documented behaviour)\n> >we do not normally backpatch such changes, especially when there is a\n> >simple workaround for the perceived problem. And it's that policy that\n> >is in large measure responsible for Postgres' deserved reputation for\n> >stability.\n> >\n>\n> Yeah.\n>\n>\nAgreed, this is v13 material if enough people come on board to support\nmaking a change.\n\n>\n> >And if we were to change it I'm not at all sure that we should do it the\n> >way that's suggested here, which strikes me as no more intuitive than\n> >the current behaviour. Rather I think we should possibly fill in a json\n> >null in the indicated place.\n> >\n>\n> Not sure, but that seems rather confusing to me, because it's mixing SQL\n> NULL and JSON null, i.e. it's not clear to me why\n>\n[...]\n\n> But I admit it's quite subjective.\n>\n\nProviding SQL NULL to this function and asking it to do something with that\nis indeed subjective - with no obvious reasonable default, and I agree that\n\"return a NULL\" while possibly consistent is probably the least useful\nbehavior that could have been chosen.  We should never have allowed an SQL\nNULL to be an acceptable argument in the first place, and can reasonably\nsafely and effectively prevent it going forward. 
Then people will have to\nexplicitly code what they want to do if their data and queries present this\ninvalid unknown data to the function.\n\nDavid J.\n", "msg_date": "Sat, 19 Oct 2019 09:32:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 10/19/19 12:32 PM, David G. Johnston wrote:\n> On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>>\n> wrote:\n>\n> >\n> >We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it\n> >since 9.5. That's five releases ago.  So it's a bit late to be\n> coming to\n> >us telling us it's not safe (according to your preconceptions of\n> what it\n> >should be doing).\n> >\n>\n>\n> There have been numerous complaints and questions about this behavior\n> in those five years; and none of the responses to those defenses has\n> actually made the current behavior sound beneficial but rather have\n> simply said \"this is how it works, deal with it\".\n\n\nI haven't seen a patch, which for most possible solutions should be\nfairly simple to code. This is open source. Code speaks louder than\ncomplaints.\n\n\n>\n> >\n> >We could change it prospectively (i.e. from release 13 on) if we\n> choose.\n> >But absent an actual bug (i.e. acting contrary to documented\n> behaviour)\n> >we do not normally backpatch such changes, especially when there is a\n> >simple workaround for the perceived problem. And it's that policy\n> that\n> >is in large measure responsible for Postgres' deserved reputation for\n> >stability.\n> >\n>\n> Yeah.\n>\n>\n> Agreed, this is v13 material if enough people come on board to support\n> making a change.\n\n\n\nWe have changed such things in the past. 
But maybe a new function might\nbe a better way to go. I haven't given it enough thought yet.\n\n\n\n>\n> >And if we were to change it I'm not at all sure that we should do\n> it the\n> >way that's suggested here, which strikes me as no more intuitive than\n> >the current behaviour. Rather I think we should possibly fill in\n> a json\n> >null in the indicated place.\n> >\n>\n> Not sure, but that seems rather confusing to me, because it's\n> mixing SQL\n> NULL and JSON null, i.e. it's not clear to me why\n>\n> [...]\n>\n> But I admit it's quite subjective.\n>\n>\n> Providing SQL NULL to this function and asking it to do something with\n> that is indeed subjective - with no obvious reasonable default, and I\n> agree that \"return a NULL\" while possible consistent is probably the\n> least useful behavior that could have been chosen.  We should never\n> have allowed an SQL NULL to be an acceptable argument in the first\n> place, and can reasonably safely and effectively prevent it going\n> forward.  Then people will have to explicitly code what they want to\n> do if their data and queries present this invalid unknown data to the\n> function.\n>\n>\n\nHow exactly do we prevent a NULL being passed as an argument? The only\nthing we could do would be to raise an exception, I think. 
That seems\nlike a fairly ugly thing to do, I'd need a heck of a lot of convincing.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 19 Oct 2019 12:47:39 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/18/19 7:18 PM, Ariadne Conill wrote:\n> Hello,\n> \n> On Fri, Oct 18, 2019 at 7:04 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n>>\n>> On 10/18/19 4:31 PM, Ariadne Conill wrote:\n>>> Hello,\n>>>\n>>> On Fri, Oct 18, 2019 at 6:01 PM Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n>>>>\n>>>> On 10/18/19 3:11 PM, Ariadne Conill wrote:\n>>>>> Hello,\n>>>>>\n>>>>> On Fri, Oct 18, 2019 at 5:01 PM David G. Johnston\n>>>>> <david.g.johnston@gmail.com> wrote:\n>>>>>>\n>>>>>> On Fri, Oct 18, 2019 at 2:50 PM Christoph Moench-Tegeder <cmt@burggraben.net> wrote:\n>>>>>>>\n>>>>>>> ## Ariadne Conill (ariadne@dereferenced.org):\n>>>>>>>\n>>>>>>>> update users set info=jsonb_set(info, '{bar}', info->'foo');\n>>>>>>>>\n>>>>>>>> Typically, this works nicely, except for cases where evaluating\n>>>>>>>> info->'foo' results in an SQL null being returned. When that happens,\n>>>>>>>> jsonb_set() returns an SQL null, which then results in data loss.[3]\n>>>>>>>\n>>>>>>> So why don't you use the facilities of SQL to make sure to only\n>>>>>>> touch the rows which match the prerequisites?\n>>>>>>>\n>>>>>>> UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')\n>>>>>>> WHERE info->'foo' IS NOT NULL;\n>>>>>>>\n>>>>>>\n>>>>>> There are many ways to add code to queries to make working with this function safer - though using them presupposes one remembers at the time of writing the query that there is danger and caveats in using this function. 
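If raising an exception turns out to be the desired behaviour, it does not require changing jsonb_set() itself; a user-level wrapper can provide it today. A sketch follows - the function name is made up, and note it is deliberately not declared STRICT, since a strict function's body would never run on NULL input:

```sql
CREATE OR REPLACE FUNCTION strict_jsonb_set(
    target         jsonb,
    path           text[],
    new_value      jsonb,
    create_missing boolean DEFAULT true)
RETURNS jsonb AS $$
BEGIN
    -- Reject the ambiguous case instead of silently returning SQL NULL.
    IF new_value IS NULL THEN
        RAISE EXCEPTION 'strict_jsonb_set: new_value must not be SQL NULL'
              USING HINT = 'Pass ''null''::jsonb to store a JSON null.';
    END IF;
    RETURN jsonb_set(target, path, new_value, create_missing);
END;
$$ LANGUAGE plpgsql;
```

A migration using this wrapper would abort with a clear error rather than blanking the column, which is the failure mode being argued for upthread.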
I agree that we should have (and now) provided sane defined behavior when one of the inputs to the function is null instead blowing off the issue and defining the function as being strict. Whether that is \"ignore and return the original object\" or \"add the key with a json null scalar value\" is debatable but either is considerably more useful than returning SQL NULL.\n>>>>>\n>>>>> A great example of how we got burned by this last year: Pleroma\n>>>>> maintains pre-computed counters in JSONB for various types of\n>>>>> activities (posts, followers, followings). Last year, another counter\n>>>>> was added, with a migration. But some people did not run the\n>>>>> migration, because they are users, and that's what users do. This\n>>>>\n>>>> So you are more forgiving of your misstep, allowing users to run\n>>>> outdated code, then of running afoul of Postgres documented behavior:\n>>>\n>>> I'm not forgiving of either.\n>>>\n>>>> https://www.postgresql.org/docs/11/functions-json.html\n>>>> \" The field/element/path extraction operators return NULL, rather than\n>>>> failing, if the JSON input does not have the right structure to match\n>>>> the request; for example if no such element exists\"\n>>>\n>>> It is known that the extraction operators return NULL. The problem\n>>> here is jsonb_set() returning NULL when it encounters SQL NULL.\n>>\n>> I'm not following. Your original case was:\n>>\n>> jsonb_set(info, '{bar}', info->'foo');\n>>\n>> where info->'foo' is equivalent to:\n>>\n>> test=# select '{\"f1\":1,\"f2\":null}'::jsonb ->'f3';\n>> ?column?\n>> ----------\n>> NULL\n>>\n>> So you know there is a possibility that a value extraction could return\n>> NULL and from your wrapper that COALESCE is the way to deal with this.\n> \n> You're not following because you don't want to follow.\n> \n> It does not matter that info->'foo' is in my example. 
That's not what\n> I am talking about.\n> \n> What I am talking about is that jsonb_set(..., ..., NULL) returns SQL NULL >\n> postgres=# \\pset null '(null)'\n> Null display is \"(null)\".\n> postgres=# select jsonb_set('{\"a\":1,\"b\":2,\"c\":3}'::jsonb, '{a}', NULL);\n> jsonb_set\n> -----------\n> (null)\n> (1 row)\n> \n> This behaviour is basically giving an application developer a loaded\n> shotgun and pointing it at their feet. It is not a good design. It\n> is a design which has likely lead to many users experiencing\n> unintentional data loss.\n\ncreate table null_test(fld_1 integer, fld_2 integer);\n\ninsert into null_test values(1, 2), (3, NULL);\n\nselect * from null_test ;\n fld_1 | fld_2\n-------+-------\n 1 | 2\n 3 | NULL\n(2 rows)\n\nupdate null_test set fld_1 = fld_1 + fld_2;\n\nselect * from null_test ;\n fld_1 | fld_2\n-------+-------\n 3 | 2\n NULL | NULL\n\nFailure to account for NULL is a generic issue. Given that this is only the \nsecond post I can find that deals with this, in going on 4 years, I am \nguessing most users have dealt with it. If you really think this rises \nto the level of a bug then I would suggest filing a report here:\n\nhttps://www.postgresql.org/account/login/?next=/account/submitbug/\n\n\n\n> \n> Ariadne\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Sat, 19 Oct 2019 11:28:47 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sat, Oct 19, 2019 at 12:47:39PM -0400, Andrew Dunstan wrote:\n>\n>On 10/19/19 12:32 PM, David G. Johnston wrote:\n>> On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>>\n>> wrote:\n>>\n>> >\n>> >We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it\n>> >since 9.5. 
That's five releases ago.  So it's a bit late to be\n>> coming to\n>> >us telling us it's not safe (according to your preconceptions of\n>> what it\n>> >should be doing).\n>> >\n>>\n>>\n>> There have been numerous complaints and questions about this behavior\n>> in those five years; and none of the responses to those defenses has\n>> actually made the current behavior sound beneficial but rather have\n>> simply said \"this is how it works, deal with it\".\n>\n>\n>I haven't seen a patch, which for most possible solutions should be\n>fairly simple to code. This is open source. Code speaks louder than\n>complaints.\n>\n\nIMHO that might be a bit too harsh - I'm not surprised no one sent a\npatch when we're repeatedly telling people \"you're holding it wrong\".\nWithout a clear consensus what the \"correct\" behavior is, I wouldn't\nsend a patch either.\n\n>\n>>\n>> >\n>> >We could change it prospectively (i.e. from release 13 on) if we\n>> choose.\n>> >But absent an actual bug (i.e. acting contrary to documented\n>> behaviour)\n>> >we do not normally backpatch such changes, especially when there is a\n>> >simple workaround for the perceived problem. And it's that policy\n>> that\n>> >is in large measure responsible for Postgres' deserved reputation for\n>> >stability.\n>> >\n>>\n>> Yeah.\n>>\n>>\n>> Agreed, this is v13 material if enough people come on board to support\n>> making a change.\n>\n>\n>\n>We have changed such things in the past. But maybe a new function might\n>be a better way to go. I haven't given it enough thought yet.\n>\n\nI think the #1 thing we should certainly do is explain the behavior\nin the docs.\n\n>\n>\n>>\n>> >And if we were to change it I'm not at all sure that we should do\n>> it the\n>> >way that's suggested here, which strikes me as no more intuitive than\n>> >the current behaviour. 
Rather I think we should possibly fill in\n>> a json\n>> >null in the indicated place.\n>> >\n>>\n>> Not sure, but that seems rather confusing to me, because it's\n>> mixing SQL\n>> NULL and JSON null, i.e. it's not clear to me why\n>>\n>> [...]\n>>\n>> But I admit it's quite subjective.\n>>\n>>\n>> Providing SQL NULL to this function and asking it to do something with\n>> that is indeed subjective - with no obvious reasonable default, and I\n>> agree that \"return a NULL\" while possible consistent is probably the\n>> least useful behavior that could have been chosen.  We should never\n>> have allowed an SQL NULL to be an acceptable argument in the first\n>> place, and can reasonably safely and effectively prevent it going\n>> forward.  Then people will have to explicitly code what they want to\n>> do if their data and queries present this invalid unknown data to the\n>> function.\n>>\n>>\n>\n>How exactly do we prevent a NULL being passed as an argument? The only\n>thing we could do would be to raise an exception, I think. That seems\n>like a fairly ugly thing to do, I'd need a h3eck of a lot of convincing.\n>\n\nI don't know, but if we don't know what the \"right\" behavior with NULL\nis, is raising an exception really that ugly?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 19 Oct 2019 21:27:23 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Sat, Oct 19, 2019, 3:27 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Sat, Oct 19, 2019 at 12:47:39PM -0400, Andrew Dunstan wrote:\n> >\n> >On 10/19/19 12:32 PM, David G. 
Johnston wrote:\n> >> On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra\n> >> <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>>\n> >> wrote:\n> >>\n> >> >\n> >> >We invented jsonb_set() (credit to Dmitry Dolgov). And we've had it\n> >> >since 9.5. That's five releases ago. So it's a bit late to be\n> >> coming to\n> >> >us telling us it's not safe (according to your preconceptions of\n> >> what it\n> >> >should be doing).\n> >> >\n> >>\n> >>\n> >> There have been numerous complaints and questions about this behavior\n> >> in those five years; and none of the responses to those defenses has\n> >> actually made the current behavior sound beneficial but rather have\n> >> simply said \"this is how it works, deal with it\".\n> >\n> >\n> >I haven't seen a patch, which for most possible solutions should be\n> >fairly simple to code. This is open source. Code speaks louder than\n> >complaints.\n> >\n>\n> IMHO that might be a bit too harsh - I'm not surprised no one sent a\n> patch when we're repeatedly telling people \"you're holding it wrong\".\n> Without a clear consensus what the \"correct\" behavior is, I wouldn't\n> send a patch either.\n>\n> >\n> >>\n> >> >\n> >> >We could change it prospectively (i.e. from release 13 on) if we\n> >> choose.\n> >> >But absent an actual bug (i.e. acting contrary to documented\n> >> behaviour)\n> >> >we do not normally backpatch such changes, especially when there\n> is a\n> >> >simple workaround for the perceived problem. And it's that policy\n> >> that\n> >> >is in large measure responsible for Postgres' deserved reputation\n> for\n> >> >stability.\n> >> >\n> >>\n> >> Yeah.\n> >>\n> >>\n> >> Agreed, this is v13 material if enough people come on board to support\n> >> making a change.\n> >\n> >\n> >\n> >We have changed such things in the past. But maybe a new function might\n> >be a better way to go. 
I haven't given it enough thought yet.\n> >\n>\n> I think the #1 thing we should certainly do is explaining the behavior\n> in the docs.\n>\n> >\n> >\n> >>\n> >> >And if we were to change it I'm not at all sure that we should do\n> >> it the\n> >> >way that's suggested here, which strikes me as no more intuitive\n> than\n> >> >the current behaviour. Rather I think we should possibly fill in\n> >> a json\n> >> >null in the indicated place.\n> >> >\n> >>\n> >> Not sure, but that seems rather confusing to me, because it's\n> >> mixing SQL\n> >> NULL and JSON null, i.e. it's not clear to me why\n> >>\n> >> [...]\n> >>\n> >> But I admit it's quite subjective.\n> >>\n> >>\n> >> Providing SQL NULL to this function and asking it to do something with\n> >> that is indeed subjective - with no obvious reasonable default, and I\n> >> agree that \"return a NULL\" while possible consistent is probably the\n> >> least useful behavior that could have been chosen. We should never\n> >> have allowed an SQL NULL to be an acceptable argument in the first\n> >> place, and can reasonably safely and effectively prevent it going\n> >> forward. Then people will have to explicitly code what they want to\n> >> do if their data and queries present this invalid unknown data to the\n> >> function.\n> >>\n> >>\n> >\n> >How exactly do we prevent a NULL being passed as an argument? The only\n> >thing we could do would be to raise an exception, I think. 
That seems\n> >like a fairly ugly thing to do, I'd need a heck of a lot of convincing.\n> >\n>\n> I don't know, but if we don't know what the \"right\" behavior with NULL\n> is, is raising an exception really that ugly?\n>\n\nRaising an exception at least would prevent people from blanking their\ncolumn out unintentionally.\n\nAnd I am willing to write a patch to do that if we have consensus on how to\nchange it.\n\nAriadne", "msg_date": "Sat, 19 Oct 2019 15:49:29 -0400", "msg_from": "Ariadne Conill <ariadne@dereferenced.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "FWIW I've been bitten by this 'feature' more than once as well, accidentally erasing a column. Now I usually write js = jsonb_set(js, coalesce(new_column, 'null'::jsonb)) to prevent erasing the whole column, and instead setting the value to a jsonb null value, but I also found the STRICT behavior very surprising at first..\n\n\n-Floris", "msg_date": "Sun, 20 Oct 2019 08:39:58 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": false, "msg_subject": "jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 19/10/2019 07:52, Ariadne Conill wrote:\n>\n> I would say that any thing like\n>\n> update whatever set column=jsonb_set(column, '{foo}', NULL)\n>\n> should throw an exception. 
It should do, literally, *anything* else\n> but blank that column.\n\nsteve=# create table foo (bar jsonb not null);\nCREATE TABLE\nsteve=# insert into foo (bar) values ('{\"a\":\"b\"}');\nINSERT 0 1\nsteve=# update foo set bar = jsonb_set(bar, '{foo}', NULL);\nERROR:  null value in column \"bar\" violates not-null constraint\nDETAIL:  Failing row contains (null).\nsteve=# update foo set bar = jsonb_set(bar, '{foo}', 'null'::jsonb);\nUPDATE 1\n\nI don't see any behaviour that's particularly surprising there? Though I \nunderstand how an app developer who's light on SQL might get it wrong - \nand I've made similar mistakes in schema upgrade scripts without \ninvolving jsonb.\n\nCheers,\n   Steve", "msg_date": "Sun, 20 Oct 2019 10:13:04 +0100", "msg_from": "Steve Atkins <steve@blighty.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 10/20/19 4:39 AM, Floris Van Nee wrote:\n>\n> FWIW I've been bitten by this 'feature' more than once as well,\n> accidentally erasing a column. Now I usually write js = jsonb_set(js,\n> coalesce(new_column, 'null'::jsonb)) to prevent erasing the whole\n> column, and instead setting the value to a jsonb null value, but I\n> also found the STRICT behavior very surprising at first..\n>\n>\n>\n\n\n\nUnderstood. I think the real question here is what it should do instead\nwhen the value is NULL. Your behaviour above is one suggestion, which I\npersonally find intuitive. Another has been to remove the associated\nkey. Another is to return the original target. And yet another is to\nraise an exception, which is easy to write but really punts the issue\nback to the application programmer who will have to decide how to ensure\nthey never pass in a NULL parameter. Possibly we could even add an extra\nparameter to specify what should be done.\n\n\nAlso, the question will arise what to do when any of the other\nparameters are NULL. 
Should we return NULL in those cases as we do now?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 20 Oct 2019 08:31:38 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sun, 20 Oct 2019 at 08:32, Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nwrote:\n\n>\n> Understood. I think the real question here is what it should do instead\n> when the value is NULL. Your behaviour above is one suggestion, which I\n> personally find intuitive. Another has been to remove the associated\n> key. Another is to return the original target. And yet another is to\n> raise an exception, which is easy to write but really punts the issue\n> back to the application programmer who will have to decide how to ensure\n> they never pass in a NULL parameter. Possibly we could even add an extra\n> parameter to specify what should be done.\n>\n\nI vote for remove the key. If we make NULL and 'null'::jsonb the same,\nwe're missing an opportunity to provide more functionality. Sometimes it's\nconvenient to be able to handle both the \"update\" and \"remove\" cases with\none function, just depending on the parameter value supplied.\n\nAlso, the question will arise what to do when any of the other\n> parameters are NULL. Should we return NULL in those cases as we do now?\n>\n\nI would argue that only if the target parameter (the actual json value) is\nNULL should the result be NULL. The function is documented as returning the\ntarget, with modifications to a small part of its structure as specified by\nthe other parameters. It is strange for the result to suddenly collapse\ndown to NULL just because another parameter is NULL. Perhaps if the path is\nNULL, that can mean \"don't update\". 
And if create_missing is NULL, that\nshould mean the same as not specifying it. I think. At a minimum, if we\ndon't change it, the documentation needs to get one of those warning boxes\nalerting people that the functions will destroy their input entirely rather\nthan slightly modifying it if any of the other parameters are NULL.\n\nMy only doubt about any of this is that by the same argument, functions\nlike replace() should not return NULL if the 2nd or 3rd parameter is NULL.\nI'm guessing replace() is specified by SQL and also unchanged in many\nversions so therefore not eligible for re-thinking but it still gives me\njust a bit of pause.", "msg_date": "Sun, 20 Oct 2019 09:42:59 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n> And yet another is to\n> raise an exception, which is easy to write but really punts the issue\n> back to the application programmer who will have to decide how to ensure\n> they never pass in a NULL parameter.\n\n\nThat's kinda the point - if they never pass NULL they won't encounter any\nproblems but as soon as the data and their application don't see eye-to-eye\nthe application developer has to decide what they want to do about it. 
We\nare in no position to decide for them and making it obvious they have a\ndecision to make and implement here doesn't seem like an improper position\nto take.\n\n\n> Possibly we could even add an extra\n> parameter to specify what should be done.\n>\n\nHas appeal.\n\n\n\n> Should we return NULL in those cases as we do now?\n>\n\nProbably the same thing - though I'd accept having the input json being\nnull result in the output json being null as well.\n\nDavid J.", "msg_date": "Sun, 20 Oct 2019 10:14:50 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 10/20/19 1:14 PM, David G. 
Johnston wrote:\n> On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n>\n> And yet another is to\n> raise an exception, which is easy to write but really punts the issue\n> back to the application programmer who will have to decide how to\n> ensure\n> they never pass in a NULL parameter.\n>\n>\n> That's kinda the point - if they never pass NULL they won't encounter\n> any problems but as soon as the data and their application don't see\n> eye-to-eye the application developer has to decide what they want to\n> do about it.  We are in no position to decide for them and making it\n> obvious they have a decision to make and implement here doesn't seem\n> like a improper position to take.\n\n\nThe app dev can avoid this problem today by making sure they don't pass\na NULL as the value. Or they can use a wrapper function which does that\nfor them. So frankly this doesn't seem like much of an advance. And, as\nhas been noted, it's not consistent with what either MySQL or MSSQL do.\nIn general I'm not that keen on raising an exception for cases like this.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 20 Oct 2019 15:48:05 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sun, Oct 20, 2019 at 03:48:05PM -0400, Andrew Dunstan wrote:\n>\n>On 10/20/19 1:14 PM, David G. 
Johnston wrote:\n>> On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan\n>> <andrew.dunstan@2ndquadrant.com\n>> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n>>\n>>     And yet another is to\n>>     raise an exception, which is easy to write but really punts the issue\n>>     back to the application programmer who will have to decide how to\n>>     ensure\n>>     they never pass in a NULL parameter.\n>>\n>>\n>> That's kinda the point - if they never pass NULL they won't encounter\n>> any problems but as soon as the data and their application don't see\n>> eye-to-eye the application developer has to decide what they want to\n>> do about it.  We are in no position to decide for them and making it\n>> obvious they have a decision to make and implement here doesn't seem\n>> like a improper position to take.\n>\n>\n>The app dev can avoid this problem today by making sure they don't pass\n>a NULL as the value. Or they can use a wrapper function which does that\n>for them. So frankly this doesn't seem like much of an advance. And, as\n>has been noted, it's not consistent with what either MySQL or MSSQL do.\n>In general I'm not that keen on raising an exception for cases like this.\n>\n\nI think the general premise of this thread is that the application\ndeveloper does not realize that may be necessary, because it's a bit\nsurprising behavior, particularly when having more experience with other\ndatabases that behave differently. It's also pretty easy to not notice\nthis issue for a long time, resulting in significant data loss.\n\nLet's say you're used to the MSSQL or MySQL behavior, you migrate your\napplication to PostgreSQL or whatever - how do you find out about this\nbehavior? Users are likely to visit\n\n https://www.postgresql.org/docs/12/functions-json.html\n\nbut that says nothing about how jsonb_set works with NULL values :-(\n\nYou're right raising an exception may not be the \"right behavior\" for\nwhatever definition of \"right\". 
But I kinda agree with David that it's\nsomewhat reasonable when we don't know what the \"universally correct\"\nthing is (or when there's no such thing). IMHO that's better than\nsilently discarding some of the data.\n\nFWIW I think the JSON/JSONB part of our code base is amazing, and the\nfact that various other databases adopted something very similar over\nthe last couple of years just confirms that. And if this is the only\nspeck of dust in the API, I think that's pretty amazing.\n\nI'm not sure how significant this issue actually is - it's true we got a\ncouple of complaints over the years (judging by a quick search for\njsonb_set and NULL in the archives), but I'm not sure that's enough to\njustify any changes in backbranches. I'd say no, but I have no idea how\nmany people are affected by this but don't know about it ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 20 Oct 2019 22:18:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "I would think though that raising an exception is better than a default\nbehavior which deletes data.\nAs an app dev I am quite used to all sorts of \"APIs\" throwing exceptions\nand have learned to deal with them.\n\nThis is my way of saying that raising an exception is an improvement over\nthe current situation. May not be the \"best\" solution but definitely an\nimprovement.\nThanks\nSteve\n\nOn Sun, Oct 20, 2019 at 12:48 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 10/20/19 1:14 PM, David G. 
Johnston wrote:\n> > On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com\n> > <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n> >\n> > And yet another is to\n> > raise an exception, which is easy to write but really punts the issue\n> > back to the application programmer who will have to decide how to\n> > ensure\n> > they never pass in a NULL parameter.\n> >\n> >\n> > That's kinda the point - if they never pass NULL they won't encounter\n> > any problems but as soon as the data and their application don't see\n> > eye-to-eye the application developer has to decide what they want to\n> > do about it. We are in no position to decide for them and making it\n> > obvious they have a decision to make and implement here doesn't seem\n> > like a improper position to take.\n>\n>\n> The app dev can avoid this problem today by making sure they don't pass\n> a NULL as the value. Or they can use a wrapper function which does that\n> for them. So frankly this doesn't seem like much of an advance. And, as\n> has been noted, it's not consistent with what either MySQL or MSSQL do.\n> In general I'm not that keen on raising an exception for cases like this.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan https://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n>\n", "msg_date": "Sun, 20 Oct 2019 13:20:23 -0700", "msg_from": "Steven Pousty <steve.pousty@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Fri, 2019-10-18 at 21:18 -0500, Ariadne Conill wrote:\n> postgres=# \\pset null '(null)'\n> Null display is \"(null)\".\n> postgres=# select jsonb_set('{\"a\":1,\"b\":2,\"c\":3}'::jsonb, '{a}', NULL);\n> jsonb_set\n> -----------\n> (null)\n> (1 row)\n> \n> This behaviour is basically giving an application developer a loaded\n> shotgun and pointing it at their feet. It is not a good design. 
It\n> is a design which has likely lead to many users experiencing\n> unintentional data loss.\n\nI understand your sentiments, even if you voiced them too drastically for\nmy taste.\n\nThe basic problem is that SQL NULL and JSON null have different semantics,\nand while it is surprising for you that a database function returns NULL\nif an argument is NULL, many database people would be surprised by the\nopposite. Please have some understanding.\n\nThat said, I think it is reasonable that a PostgreSQL JSON function\nbehaves in the way that JSON users would expect, so here is my +1 for\ninterpreting an SQL NULL as a JSON null in the above case, so that the\nresult of the above becomes\n{\"a\": null, \"b\": 2, \"c\": 3}\n\n-1 for backpatching such a change.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Sun, 20 Oct 2019 22:53:13 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": ">\n>\n> You're not following because you don't want to follow.\n>\n>\nI think that anyone with a \"commit bit\" on this project that tolerates that\nsentence is a much better human being than I ever will be.\n\nI may be the dumbest person on this list by many measures - but isn't there\nstandard options that are supposed to be the first line of defense here?\nWhy do I need a function to ensure I don't remove data by passing a NULL\nvalue to an update?\n\nWhy would any of these standard statements not solve this issue?\n\nupdate users set info=jsonb_set(info, '{bar}', info->'foo')\nwhere info->'foo' is not null\n\nupdate users set info=jsonb_set(info, '{bar}', info->'foo')\nwhere jsonb_set(info, '{bar}', info->'foo') is not null\n\nupdate users set info=coalesce(jsonb_set(info, '{bar}', info->'foo'), info)\n\nI can totally respect the person that slams into this wall the first time\nand is REALLY upset about it - but 
by their own admission this has occurred\nmultiple times in this project and they continue to not take standard\nprecautions.\n\nAgain, I applaud the patience of many people on this list. You deserve much\nmore respect than you are being shown here right now.\n\nJohn W Higgins", "msg_date": "Sun, 20 Oct 2019 14:09:36 -0700", "msg_from": "John W Higgins <wishdev@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "> That said, I think it is reasonable that a PostgreSQL JSON function\n> behaves in the way that JSON users would expect, so here is my +1 for\n> interpreting an SQL NULL as a JSON null in the above case\n\nJust to chime in as another application developer: the current\nfunctionality does seem pretty surprising and dangerous to me. 
Raising\nan exception seems pretty annoying. Setting the key's value to a JSON\nnull would be fine, but I also like the idea of removing the key\nentirely, since that gives you strictly more functionality: you can\nalways set the key to a JSON null by passing one in, if that's what\nyou want. But there are lots of other functions that convert SQL NULL\nto JSON null:\n\npostgres=# select row_to_json(row(null)), json_build_object('foo',\nnull), json_object(array['foo', null]), json_object(array['foo'],\narray[null]);\n row_to_json | json_build_object | json_object | json_object\n-------------+-------------------+----------------+----------------\n {\"f1\":null} | {\"foo\" : null} | {\"foo\" : null} | {\"foo\" : null}\n(1 row)\n\n(The jsonb variants give the same results.)\n\nI think those functions are very similar to json_set here, and I'd\nexpect json_set to do what they do (i.e. convert SQL NULL to JSON\nnull).\n\nPaul\n\n\n", "msg_date": "Sun, 20 Oct 2019 14:10:15 -0700", "msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 10/20/19 4:18 PM, Tomas Vondra wrote:\n> On Sun, Oct 20, 2019 at 03:48:05PM -0400, Andrew Dunstan wrote:\n>>\n>> On 10/20/19 1:14 PM, David G. Johnston wrote:\n>>> On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan\n>>> <andrew.dunstan@2ndquadrant.com\n>>> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n>>>\n>>>     And yet another is to\n>>>     raise an exception, which is easy to write but really punts the\n>>> issue\n>>>     back to the application programmer who will have to decide how to\n>>>     ensure\n>>>     they never pass in a NULL parameter.\n>>>\n>>>\n>>> That's kinda the point - if they never pass NULL they won't encounter\n>>> any problems but as soon as the data and their application don't see\n>>> eye-to-eye the application developer has to decide what they want to\n>>> do about it.  
We are in no position to decide for them and making it\n>>> obvious they have a decision to make and implement here doesn't seem\n>>> like a improper position to take.\n>>\n>>\n>> The app dev can avoid this problem today by making sure they don't pass\n>> a NULL as the value. Or they can use a wrapper function which does that\n>> for them. So frankly this doesn't seem like much of an advance. And, as\n>> has been noted, it's not consistent with what either MySQL or MSSQL do.\n>> In general I'm not that keen on raising an exception for cases like\n>> this.\n>>\n>\n> I think the general premise of this thread is that the application\n> developer does not realize that may be necessary, because it's a bit\n> surprising behavior, particularly when having more experience with other\n> databases that behave differently. It's also pretty easy to not notice\n> this issue for a long time, resulting in significant data loss.\n>\n> Let's say you're used to the MSSQL or MySQL behavior, you migrate your\n> application to PostgreSQL or whatever - how do you find out about this\n> behavior? Users are likely to visit\n>\n>    https://www.postgresql.org/docs/12/functions-json.html\n>\n> but that says nothing about how jsonb_set works with NULL values :-(\n\n\n\nWe should certainly fix that. I accept some responsibility for the omission.\n\n\n\n>\n> You're right raising an exception may not be the \"right behavior\" for\n> whatever definition of \"right\". But I kinda agree with David that it's\n> somewhat reasonable when we don't know what the \"universally correct\"\n> thing is (or when there's no such thing). IMHO that's better than\n> silently discarding some of the data.\n\n\nI'm not arguing against the idea of improving the situation. But I am\narguing against a minimal fix that will not provide much of value to a\ncareful app developer. i.e. I want to do more to support app devs.\nIdeally they would not need to use wrapper functions. 
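For concreteness, a wrapper of the sort being discussed might look like the sketch below. The name jsonb_set_safe and the choice to return the target unchanged on a NULL value are illustrative assumptions for the sketch, not an agreed design:

```sql
-- Hypothetical sketch only: a null-safe wrapper an application can
-- define today.  "jsonb_set_safe" is an invented name, and returning
-- the target unchanged when new_value is SQL NULL is just one of the
-- behaviours debated in this thread.
CREATE FUNCTION jsonb_set_safe(target jsonb, path text[], new_value jsonb)
RETURNS jsonb
LANGUAGE sql IMMUTABLE AS $$
    SELECT CASE
        WHEN new_value IS NULL THEN target   -- keep the column intact
        ELSE jsonb_set(target, path, new_value)
    END
$$;
```

With such a wrapper, UPDATE users SET info = jsonb_set_safe(info, '{bar}', info->'foo') leaves info unchanged when info->'foo' is NULL, instead of nulling out the whole column.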
There will be\nplenty of situations where it is mighty inconvenient to catch an\nexception thrown by jsonb_set(). And catching exceptions can be\nexpensive. You want to avoid that if possible in your\nperformance-critical plpgsql code.\n\n\n\n>\n> FWIW I think the JSON/JSONB part of our code base is amazing, and the\n> fact that various other databases adopted something very similar over\n> the last couple of years just confirms that. And if this is the only\n> speck of dust in the API, I think that's pretty amazing.\n\n\nTY. When I first saw the SQL/JSON spec I thought I should send a request\nto the SQL standards committee for a royalty payment, since it looked so\nfamiliar ;-)\n\n\n>\n> I'm not sure how significant this issue actually is - it's true we got a\n> couple of complaints over the years (judging by a quick search for\n> jsonb_set and NULL in the archives), but I'm not sure that's enough to\n> justify any changes in backbranches. I'd say no, but I have no idea how\n> many people are affected by this but don't know about it ...\n>\n>\n\nNo, no backpatching. As I said upthread, this isn't a bug, but it is\narguably a POLA violation, which is why we should do something for\nrelease 13.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 20 Oct 2019 18:51:05 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Steven Pousty wrote:\n\n> I would think though that raising an exception is better than a\n> default behavior which deletes data.\n\nI can't help but feel the need to make the point that\nthe function is not deleting anything. It is just\nreturning null. 
The deletion of data is being performed\nby an update statement that uses the function's return\nvalue to set a column value.\n\nI don't agree that raising an exception in the function\nis a good idea (perhaps unless it's valid to assume\nthat this function will only ever be used in such a\ncontext). Making the column not null (as already\nsuggested) and having the update statement itself raise\nthe exception seems more appropriate if an exception is\ndesirable. But that presumes an accurate understanding\nof the behaviour of jsonb_set.\n\nReally, I think the best fix would be in the\ndocumentation so that everyone who finds the function\nin the documentation understands its behaviour\nimmediately. I didn't even know there was such a thing\nas a strict function or what it means and the\ndocumentation for jsonb_set doesn't mention that it is\na strict function and the examples of its use don't\ndemonstrate this behaviour. I'm referring to\nhttps://www.postgresql.org/docs/9.5/functions-json.html.\n\nAll of this contributes to the astonishment encountered\nhere. Least astonishment can probably be achieved with\nadditional documentation but it has to be where the\nreader is looking when they first encounter the\nfunction in the documentation so that their\nexpectations are set correctly and set early. And\ndocumentation can be \"fixed\" sooner than postgresql 13.\n\nPerhaps an audit of the documentation for all strict\nfunctions would be a good idea to see if they need\nwork. 
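For anyone else meeting the concept for the first time, here is a minimal,
generic illustration of strict-function semantics (nothing to do with
jsonb in particular; the function and its name are made up for the demo):

```sql
-- A function declared STRICT is never executed when any argument is
-- SQL NULL; the call as a whole simply evaluates to NULL.
CREATE FUNCTION shout(t text) RETURNS text
    STRICT LANGUAGE sql AS $$ SELECT upper(t) $$;

SELECT shout('hi');   -- 'HI'
SELECT shout(NULL);   -- NULL: the function body never runs
```

jsonb_set is declared STRICT, which is exactly why a NULL third argument
makes the whole call return NULL.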
Knowing that a function won't be executed at all\nand will effectively return null when given a null\nargument might be important to know for other functions\nas well.\n\ncheers,\nraf\n\n\n\n", "msg_date": "Mon, 21 Oct 2019 10:31:15 +1100", "msg_from": "raf <raf@raf.org>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Sun, 2019-10-20 at 18:51 -0400, Andrew Dunstan wrote:\n> On 10/20/19 4:18 PM, Tomas Vondra wrote:\n> > \n> > https://www.postgresql.org/docs/12/functions-json.html\n> > \n> > but that says nothing about how jsonb_set works with NULL values :-\n> > (\n> \n> \n> We should certainly fix that. I accept some responsibility for the\n> omission.\n> \n> \n> \n\nFWIW, if you are able to update the documentation, the current JSON RFC\nis 8259.\n\nhttps://tools.ietf.org/html/rfc8259\n\n\nCheers,\nRob\n\n\n\n\n\n\n", "msg_date": "Mon, 21 Oct 2019 12:18:23 +1100", "msg_from": "rob stone <floriparob@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": ">\n>\n>> I would argue that only if the target parameter (the actual json value)\n> is NULL should the result be NULL. The function is documented as returning\n> the target, with modifications to a small part of its structure as\n> specified by the other parameters. It is strange for the result to suddenly\n> collapse down to NULL just because another parameter is NULL. Perhaps if\n> the path is NULL, that can mean \"don't update\". And if create_missing is\n> NULL, that should mean the same as not specifying it. I think. 
At a\n> minimum, if we don't change it, the documentation needs to get one of those\n> warning boxes alerting people that the functions will destroy their input\n> entirely rather than slightly modifying it if any of the other parameters\n> are NULL.\n>\n> My only doubt about any of this is that by the same argument, functions\n> like replace() should not return NULL if the 2nd or 3rd parameter is NULL.\n> I'm guessing replace() is specified by SQL and also unchanged in many\n> versions so therefore not eligible for re-thinking but it still gives me\n> just a bit of pause.\n>\n\nThat's the essential difference though, no? With jsonb, conceptually, we\nhave a nested row. That's where we get confused. We think that the\noperation should affect the element within the nested structure, not the\nstructure itself.\n\nIt would be equivalent to replace() nulling out the entire row on null.\n\nI understand the logic behind it, but I also definitely see why it's not\nintuitive.\n\nAH", "msg_date": "Sun, 20 Oct 2019 19:58:33 -0700", "msg_from": "Abelard Hoffman <abelardhoffman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:\n>\n>On 10/20/19 4:18 PM, Tomas Vondra wrote:\n>> On Sun, Oct 20, 2019 at 03:48:05PM -0400, Andrew Dunstan wrote:\n>>>\n>>> On 10/20/19 1:14 PM, David G. Johnston wrote:\n>>>> On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan\n>>>> <andrew.dunstan@2ndquadrant.com\n>>>> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n>>>>\n>>>>     And yet another is to\n>>>>     raise an exception, which is easy to write but really punts the\n>>>> issue\n>>>>     back to the application programmer who will have to decide how to\n>>>>     ensure\n>>>>     they never pass in a NULL parameter.\n>>>>\n>>>>\n>>>> That's kinda the point - if they never pass NULL they won't encounter\n>>>> any problems but as soon as the data and their application don't see\n>>>> eye-to-eye the application developer has to decide what they want to\n>>>> do about it. We are in no position to decide for them and making it\n>>>> obvious they have a decision to make and implement here doesn't seem\n>>>> like an improper position to take.\n>>>\n>>>\n>>> The app dev can avoid this problem today by making sure they don't pass\n>>> a NULL as the value. 
Or they can use a wrapper function which does that\n>>> for them. So frankly this doesn't seem like much of an advance. And, as\n>>> has been noted, it's not consistent with what either MySQL or MSSQL do.\n>>> In general I'm not that keen on raising an exception for cases like\n>>> this.\n>>>\n>>\n>> I think the general premise of this thread is that the application\n>> developer does not realize that may be necessary, because it's a bit\n>> surprising behavior, particularly when having more experience with other\n>> databases that behave differently. It's also pretty easy to not notice\n>> this issue for a long time, resulting in significant data loss.\n>>\n>> Let's say you're used to the MSSQL or MySQL behavior, you migrate your\n>> application to PostgreSQL or whatever - how do you find out about this\n>> behavior? Users are likely to visit\n>>\n>>    https://www.postgresql.org/docs/12/functions-json.html\n>>\n>> but that says nothing about how jsonb_set works with NULL values :-(\n>\n>\n>\n>We should certainly fix that. I accept some responsibility for the omission.\n>\n\n+1\n\n>\n>>\n>> You're right raising an exception may not be the \"right behavior\" for\n>> whatever definition of \"right\". But I kinda agree with David that it's\n>> somewhat reasonable when we don't know what the \"universally correct\"\n>> thing is (or when there's no such thing). IMHO that's better than\n>> silently discarding some of the data.\n>\n>\n>I'm not arguing against the idea of improving the situation. But I am\n>arguing against a minimal fix that will not provide much of value to a\n>careful app developer. i.e. I want to do more to support app devs.\n>Ideally they would not need to use wrapper functions. There will be\n>plenty of situations where it is mighty inconvenient to catch an\n>exception thrown by jsonb_set(). And catching exceptions can be\n>expensive. You want to avoid that if possible in your\n>performance-critical plpgsql code.\n>\n\nTrue. 
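The kind of wrapper mentioned above might look something like this (purely a sketch - the function name and the choice to raise an error are illustrative, not an agreed API):

```sql
CREATE FUNCTION safe_jsonb_set(target jsonb, path text[],
                               new_value jsonb,
                               create_missing boolean DEFAULT true)
RETURNS jsonb
LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    -- Refuse SQL NULL arguments explicitly instead of letting the
    -- strict jsonb_set() silently collapse the result to NULL.
    IF target IS NULL OR path IS NULL OR new_value IS NULL THEN
        RAISE EXCEPTION 'NULL argument passed to safe_jsonb_set()';
    END IF;
    RETURN jsonb_set(target, path, new_value, create_missing);
END;
$$;
```
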
And AFAIK catching exceptions is not really possible in some code,\ne.g. in stored procedures (because we can't do subtransactions, so no\nexception blocks).\n\n>\n>>\n>> FWIW I think the JSON/JSONB part of our code base is amazing, and the\n>> fact that various other databases adopted something very similar over\n>> the last couple of years just confirms that. And if this is the only\n>> speck of dust in the API, I think that's pretty amazing.\n>\n>\n>TY. When I first saw the SQL/JSON spec I thought I should send a request\n>to the SQL standards committee for a royalty payment, since it looked so\n>familiar ;-)\n>\n\n;-)\n\n>\n>>\n>> I'm not sure how significant this issue actually is - it's true we got a\n>> couple of complaints over the years (judging by a quick search for\n>> jsonb_set and NULL in the archives), but I'm not sure that's enough to\n>> justify any changes in backbranches. I'd say no, but I have no idea how\n>> many people are affected by this but don't know about it ...\n>>\n>>\n>\n>No, no backpatching. As I said upthread, this isn't a bug, but it is\n>arguably a POLA violation, which is why we should do something for\n>release 13.\n>\n\nWFM\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 21 Oct 2019 08:07:01 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 10/21/19 2:07 AM, Tomas Vondra wrote:\n> On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:\n>>\n>>> I think the general premise of this thread is that the application\n>>> developer does not realize that may be necessary, because it's a bit\n>>> surprising behavior, particularly when having more experience with\n>>> other\n>>> databases that behave differently. 
It's also pretty easy to not notice\n>>> this issue for a long time, resulting in significant data loss.\n>>>\n>>> Let's say you're used to the MSSQL or MySQL behavior, you migrate your\n>>> application to PostgreSQL or whatever - how do you find out about this\n>>> behavior? Users are likely to visit\n>>>\n>>>    https://www.postgresql.org/docs/12/functions-json.html\n>>>\n>>> but that says nothing about how jsonb_set works with NULL values :-(\n>>\n>>\n>>\n>> We should certainly fix that. I accept some responsibility for the\n>> omission.\n>>\n>\n> +1\n>\n>\n\n\nSo let's add something to the JSON funcs page  like this:\n\n\nNote: All the above functions except for json_build_object,\njson_build_array, json_to_recordset, json_populate_record, and\njson_populate_recordset and their jsonb equivalents are strict\nfunctions. That is, if any argument is NULL the function result will be\nNULL and the function won't even be called. Particular care should\ntherefore be taken to avoid passing NULL arguments to those functions\nunless a NULL result is expected. This is particularly true of the\njsonb_set and jsonb_insert functions.\n\n\n\n(We do have a heck of a lot of Note: sections on that page)\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 21 Oct 2019 09:28:06 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/20/19 11:07 PM, Tomas Vondra wrote:\n> On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:\n\n> \n> True. And AFAIK catching exceptions is not really possible in some code,\n> e.g. 
in stored procedures (because we can't do subtransactions, so no\n> exception blocks).\n> \n\nCan you explain the above to me as I thought there are exception blocks \nin stored functions and now sub-transactions in stored procedures.\n\n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 08:06:46 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sun, Oct 20, 2019 at 3:51 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n> I'm not arguing against the idea of improving the situation. But I am\n> arguing against a minimal fix that will not provide much of value to a\n> careful app developer. i.e. I want to do more to support app devs.\n> Ideally they would not need to use wrapper functions. There will be\n> plenty of situations where it is mighty inconvenient to catch an\n> exception thrown by jsonb_set(). And catching exceptions can be\n> expensive. You want to avoid that if possible in your\n> performance-critical plpgsql code.\n>\n\nAs there is pretty much nothing that can be done at runtime if this\nexception is raised actually \"catching\" it anywhere deeper than near the\ntop of the application code is largely pointless. Its more like a\nNullPointerException in Java - if the application raises it there should be\na last line of defense error handler that basically says \"you developer\nmade a mistake somewhere and needs to fix it - tell them this happened\".\n\nPerformance critical subsections (and pretty much the whole) of the\napplication can just raise the error to the caller using normal mechanisms\nfor \"SQLException\" propogation.\n\nDavid J.\n\n", "msg_date": "Mon, 21 Oct 2019 08:40:44 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Sun, Oct 20, 2019 at 4:31 PM raf <raf@raf.org> wrote:\n\n> Steven Pousty wrote:\n>\n> > I would think though that raising an exception is better than a\n> > default behavior which deletes data.\n>\n> I can't help but feel the need to make the point that\n> the function is not deleting anything. It is just\n> returning null. The deletion of data is being performed\n> by an update statement that uses the function's return\n> value to set a column value.\n>\n> I don't agree that raising an exception in the function\n> is a good idea (perhaps unless it's valid to assume\n> that this function will only ever be used in such a\n> context). 
Making the column not null (as already\n> suggested) and having the update statement itself raise\n> the exception seems more appropriate if an exception is\n> desirable. But that presumes an accurate understanding\n> of the behaviour of jsonb_set.\n>\n> Really, I think the best fix would be in the\n> documentation so that everyone who finds the function\n> in the documentation understands its behaviour\n> immediately.\n>\n>\n>\nHey Raf\n\nIn a perfect world I would agree with you. But often users do not read ALL\nthe documentation before they use the function in their code OR they are\nnot sure that the condition applies to them (until it does). Turning a\nJSON null into a SQL null  and thereby \"deleting\" the data is not the path\nof least surprises.\n\nSo while we could say reading the documentation is the proper path it is\nnot the most helpful path. I am not arguing against doc'ing the behavior no\nmatter what we decide on. What I am saying is an exception is better than\nthe current situation if we can't agree to any other solution. An exception\nis better than just doc but probably not the best solution. (and it seems\nlike most other people have said as well but the lag on a mailing list is\ngetting us overlapping).\n\nI see people saying Null pointer exceptions are not helpful. I mostly\nagree, they are not the most helpful kind of exception BUT they are better\nthan some alternatives. So I think it would be better to say NPEs are not\nas helpful as they possibly could be.\n\n", "msg_date": "Mon, 21 Oct 2019 09:39:13 -0700", "msg_from": "Steven Pousty <steve.pousty@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 21/10/2019 17:39, Steven Pousty wrote:\n>  Turning a JSON null into a SQL null  and thereby \"deleting\" the data \n> is not the path of least surprises.\n\nIn what situation does that happen? (If it's already been mentioned I \nmissed it, long thread, sorry).\n\nCheers,\n   Steve\n\n", "msg_date": "Mon, 21 Oct 2019 19:20:26 +0100", "msg_from": "Steve Atkins <steve@blighty.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Mon, Oct 21, 2019 at 08:06:46AM -0700, Adrian Klaver wrote:\n>On 10/20/19 11:07 PM, Tomas Vondra wrote:\n>>On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:\n>\n>>\n>>True. And AFAIK catching exceptions is not really possible in some code,\n>>e.g. 
in stored procedures (because we can't do subtransactions, so no\n>>exception blocks).\n>>\n>\n>Can you explain the above to me as I thought there are exception \n>blocks in stored functions and now sub-transactions in stored \n>procedures.\n>\n\nSorry for the confusion - I've not been particularly careful when\nwriting that response.\n\nLet me illustrate the issue with this example:\n\n CREATE TABLE t (a int);\n\n CREATE OR REPLACE PROCEDURE test() LANGUAGE plpgsql AS $$\n DECLARE\n msg TEXT;\n BEGIN\n -- SAVEPOINT s1;\n INSERT INTO t VALUES (1);\n -- COMMIT;\n EXCEPTION\n WHEN others THEN\n msg := SUBSTR(SQLERRM, 1, 100);\n RAISE NOTICE 'error: %', msg;\n END; $$;\n\n CALL test();\n\nIf you uncomment the SAVEPOINT, you get\n\n NOTICE: error: unsupported transaction command in PL/pgSQL\n\nbecause savepoints are not allowed in stored procedures. Fine.\n\nIf you uncomment the COMMIT, you get\n\n NOTICE: error: cannot commit while a subtransaction is active\n\nwhich happens because the EXCEPTION block creates a subtransaction, and\nwe can't commit when it's active.\n\nBut we can commit outside the exception block:\n\n CREATE OR REPLACE PROCEDURE test() LANGUAGE plpgsql AS $$\n DECLARE\n msg TEXT;\n BEGIN\n BEGIN\n INSERT INTO t VALUES (1);\n EXCEPTION\n WHEN others THEN\n msg := SUBSTR(SQLERRM, 1, 100);\n RAISE NOTICE 'error: %', msg;\n END;\n COMMIT;\n END; $$;\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 21 Oct 2019 21:50:31 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/21/19 12:50 PM, Tomas Vondra wrote:\n> On Mon, Oct 21, 2019 at 08:06:46AM -0700, Adrian Klaver wrote:\n>> On 10/20/19 11:07 PM, Tomas Vondra wrote:\n>>> On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:\n>>\n>>>\n>>> True. 
And AFAIK catching exceptions is not really possible in some code,\n>>> e.g. in stored procedures (because we can't do subtransactions, so no\n>>> exception blocks).\n>>>\n>>\n>> Can you explain the above to me as I thought there are exception \n>> blocks in stored functions and now sub-transactions in stored procedures.\n>>\n> \n> Sorry for the confusion - I've not been particularly careful when\n> writing that response.\n> \n> Let me illustrate the issue with this example:\n> \n>    CREATE TABLE t (a int);\n> \n>    CREATE OR REPLACE PROCEDURE test() LANGUAGE plpgsql AS $$\n>    DECLARE\n>       msg TEXT;\n>    BEGIN\n>      -- SAVEPOINT s1;\n>      INSERT INTO t VALUES (1);\n>      -- COMMIT;\n>    EXCEPTION\n>      WHEN others THEN\n>        msg := SUBSTR(SQLERRM, 1, 100);\n>        RAISE NOTICE 'error: %', msg;\n>    END; $$;\n> \n>    CALL test();\n> \n> If you uncomment the SAVEPOINT, you get\n> \n>    NOTICE:  error: unsupported transaction command in PL/pgSQL\n> \n> because savepoints are not allowed in stored procedures. 
Fine.\n> \n> If you uncomment the COMMIT, you get\n> \n>    NOTICE:  error: cannot commit while a subtransaction is active\n> \n> which happens because the EXCEPTION block creates a subtransaction, and\n> we can't commit when it's active.\n> \n> But we can commit outside the exception block:\n> \n>    CREATE OR REPLACE PROCEDURE test() LANGUAGE plpgsql AS $$\n>    DECLARE\n>       msg TEXT;\n>    BEGIN\n>      BEGIN\n>        INSERT INTO t VALUES (1);\n>      EXCEPTION\n>        WHEN others THEN\n>          msg := SUBSTR(SQLERRM, 1, 100);\n>          RAISE NOTICE 'error: %', msg;\n>       END;\n>       COMMIT;\n>    END; $$;\n\nYou can do something like the below though:\n\nCREATE TABLE t (a int PRIMARY KEY);\n\nCREATE OR REPLACE PROCEDURE public.test()\n LANGUAGE plpgsql\nAS $procedure$\n DECLARE\n msg TEXT;\n BEGIN\n BEGIN\n INSERT INTO t VALUES (1);\n EXCEPTION\n WHEN others THEN\n msg := SUBSTR(SQLERRM, 1, 100);\n RAISE NOTICE 'error: %', msg;\n UPDATE t set a = 2;\n END;\n COMMIT;\n END; $procedure$\n\ntest_(postgres)# CALL test();\nCALL\ntest_(postgres)# select * from t;\n a\n---\n 1\n(1 row)\n\ntest_(postgres)# CALL test();\nNOTICE: error: duplicate key value violates unique constraint \"t_pkey\"\nCALL\ntest_(postgres)# select * from t;\n a\n---\n 2\n(1 row)\n\n\n> \n> \n> regards\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 14:08:22 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Steven Pousty wrote:\n\n> On Sun, Oct 20, 2019 at 4:31 PM raf <raf@raf.org> wrote:\n> \n> > Steven Pousty wrote:\n> >\n> > > I would think though that raising an exception is better than a\n> > > default behavior which deletes data.\n> >\n> > I can't help but feel the need to make the point that\n> > the function is not deleting anything. It is just\n> > returning null. 
The deletion of data is being performed\n> > by an update statement that uses the function's return\n> > value to set a column value.\n> >\n> > I don't agree that raising an exception in the function\n> > is a good idea (perhaps unless it's valid to assume\n> > that this function will only ever be used in such a\n> > context). Making the column not null (as already\n> > suggested) and having the update statement itself raise\n> > the exception seems more appropriate if an exception is\n> > desirable. But that presumes an accurate understanding\n> > of the behaviour of jsonb_set.\n> >\n> > Really, I think the best fix would be in the\n> > documentation so that everyone who finds the function\n> > in the documentation understands its behaviour\n> > immediately.\n> >\n> Hey Raf\n> \n> In a perfect world I would agree with you. But often users do not read ALL\n> the documentation before they use the function in their code OR they are\n> not sure that the condition applies to them (until it does).\n\nI'm well aware of that, hence the statement that this\ninformation needs to appear at the place in the\ndocumentation where the user is first going to\nencounter the function (i.e. in the table where its\nexamples are). Even putting it in a note box further\ndown the page might not be enough (but hopefully it\nwill be).\n\ncheers,\nraf\n\n\n\n", "msg_date": "Tue, 22 Oct 2019 09:16:05 +1100", "msg_from": "raf <raf@raf.org>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 2019-10-20 13:20:23 -0700, Steven Pousty wrote:\n> I would think though that raising an exception is better than a default\n> behavior which deletes data.\n> As an app dev I am quite used to all sorts of \"APIs\" throwing exceptions and\n> have learned to deal with them.\n> \n> This is my way of saying that raising an exception is an improvement over the\n> current situation. 
May not be the \"best\" solution but definitely an\n> improvement.\n\nI somewhat disagree. SQL isn't in general a language which uses\nexceptions a lot. It does have the value NULL to mean \"unknown\", and\ngenerally unknown combined with something else results in an unknown\nvalue again:\n\n % psql wds\n Null display is \"(∅)\".\n Line style is unicode.\n Border style is 2.\n Unicode border line style is \"double\".\n Timing is on.\n Expanded display is used automatically.\n psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))\n Type \"help\" for help.\n\n wds=> select 4 + NULL;\n ╔══════════╗\n ║ ?column? ║\n ╟──────────╢\n ║ (∅) ║\n ╚══════════╝\n (1 row)\n\n Time: 0.924 ms\n wds=> select replace('steven', 'e', NULL);\n ╔═════════╗\n ║ replace ║\n ╟─────────╢\n ║ (∅) ║\n ╚═════════╝\n (1 row)\n\n Time: 0.918 ms\n\nThrowing an exception for a pure function seems \"un-SQLy\" to me. In\nparticular, jsonb_set does something similar for json values as replace\ndoes for strings, so it should behave similarly.\n\n hp\n\n-- \n _ | Peter J. Holzer | we build much bigger, better disasters now\n|_|_) | | because we have much more sophisticated\n| | | hjp@hjp.at | management tools.\n__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>", "msg_date": "Wed, 23 Oct 2019 00:55:13 +0200", "msg_from": "\"Peter J. 
Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 2019-10-21 09:39:13 -0700, Steven Pousty wrote:\n> Turning a JSON null into a SQL null  and thereby \"deleting\" the data\n> is not the path of least surprises.\n\nBut it doesn't do that: A JSON null is perfectly fine:\n\nwds=> select jsonb_set('{\"a\": 1, \"b\": 2}'::jsonb, '{c}', 'null'::jsonb);\n╔═════════════════════════════╗\n║ jsonb_set ║\n╟─────────────────────────────╢\n║ {\"a\": 1, \"b\": 2, \"c\": null} ║\n╚═════════════════════════════╝\n(1 row)\n\n\nIt is trying to replace a part of the JSON object with an SQL NULL (i.e.\nunknown) which returns SQL NULL:\n\nwds=> select jsonb_set('{\"a\": 1, \"b\": 2}'::jsonb, '{c}', NULL);\n╔═══════════╗\n║ jsonb_set ║\n╟───────────╢\n║ (∅) ║\n╚═══════════╝\n(1 row)\n\n hp\n\n-- \n _ | Peter J. Holzer | we build much bigger, better disasters now\n|_|_) | | because we have much more sophisticated\n| | | hjp@hjp.at | management tools.\n__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>", "msg_date": "Wed, 23 Oct 2019 01:01:46 +0200", "msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 2019-10-22 09:16:05 +1100, raf wrote:\n> Steven Pousty wrote:\n> > In a perfect world I would agree with you. But often users do not read ALL\n> > the documentation before they use the function in their code OR they are\n> > not sure that the condition applies to them (until it does).\n> \n> I'm well aware of that, hence the statement that this\n> information needs to appear at the place in the\n> documentation where the user is first going to\n> encounter the function (i.e. 
in the table where its\n> examples are).\n\nI think this is a real weakness of the tabular format used for\ndocumenting functions: While it is quite compact which is nice if you\njust want to look up a function's name or parameters, it really\ndiscourages explanations longer than a single paragraph.\n\nSection 9.9 gets around this problem by limiting the in-table\ndescription to a few words and \"see Section 9.9.x\". So you basically\nhave to read the text and not just the table. Maybe that would make\nsense for the json functions, too?\n\n hp\n\n-- \n _ | Peter J. Holzer | we build much bigger, better disasters now\n|_|_) | | because we have much more sophisticated\n| | | hjp@hjp.at | management tools.\n__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>", "msg_date": "Wed, 23 Oct 2019 01:11:24 +0200", "msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Tue, Oct 22, 2019 at 3:55 PM Peter J. Holzer <hjp-pgsql@hjp.at> wrote:\n\n> On 2019-10-20 13:20:23 -0700, Steven Pousty wrote:\n> > I would think though that raising an exception is better than a default\n> > behavior which deletes data.\n> > As an app dev I am quite used to all sorts of \"APIs\" throwing exceptions\n> and\n> > have learned to deal with them.\n> >\n> > This is my way of saying that raising an exception is an improvement\n> over the\n> > current situation. May not be the \"best\" solution but definitely an\n> > improvement.\n>\n> I somewhat disagree. SQL isn't in general a language which uses\n> exceptions a lot. It does have the value NULL to mean \"unknown\", and\n> generally unknown combined with something else results in an unknown\n> value again:\n>\n[...]\n\n>\n> Throwing an exception for a pure function seems \"un-SQLy\" to me. 
In\n> particular, jsonb_set does something similar for json values as replace\n> does for strings, so it should behave similarly.\n>\n\nNow if only the vast majority of users could have and keep this level of\nunderstanding in mind while writing complex queries so that they remember\nto always add protections to compensate for the unique design decision that\nSQL has taken here...\n\nIn this case I would favor a break from the historical to a more safe\ndesign, regardless of its novelty in the codebase, since the usage patterns\nand risks involved with typical JSON using code are considerably\ndifferent/larger than those for \"replace\".\n\nJust because its always been done one way, and we won't change existing\ncode, doesn't mean we shouldn't apply lessons learned to newer code. In\nthe case of JSON maybe its too late to worry about changing (though moving\nto exception is safe) but a policy choice now could at least pave the way\nto avoid this situation when the next new datatype is implemented. In many\nfunctions we do provoke exceptions when known invalid input is provided -\nsupplying a function with a primary/important argument being undefined\nshould fall into the same \"malformed\" category of problematic input.\n\nDavid J.\n\n", "msg_date": "Tue, 22 Oct 2019 18:06:39 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "David G. 
Johnston wrote:\n> Now if only the vast majority of users could have and keep this level of understanding\n> in mind while writing complex queries so that they remember to always add protections\n> to compensate for the unique design decision that SQL has taken here...\n\nYou can only say that if you don't understand NULL (you wouldn't be alone).\nIf I modify a JSON with an unknown value, the result is unknown.\nThis seems very intuitive to me.\n\nOne could argue that whoever uses SQL should understand SQL.\n\nBut I believe that it is reasonable to suppose that many people who\nuse JSON in the database are more savvy with JSON than with SQL\n(they might not have chosen JSON otherwise), so I agree that it makes\nsense to change this particular behavior.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 23 Oct 2019 13:42:47 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Wed, Oct 23, 2019 at 4:42 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> David G. 
Johnston wrote:\n> > Now if only the vast majority of users could have and keep this level of\n> understanding\n> > in mind while writing complex queries so that they remember to always\n> add protections\n> > to compensate for the unique design decision that SQL has taken here...\n>\n> You can only say that if you don't understand NULL (you wouldn't be alone).\n> If I modify a JSON with an unknown value, the result is unknown.\n> This seems very intuitive to me.\n>\n> One could argue that whoever uses SQL should understand SQL.\n>\n> But I believe that it is reasonable to suppose that many people who\n> use JSON in the database are more savvy with JSON than with SQL\n> (they might not have chosen JSON otherwise), so I agree that it makes\n> sense to change this particular behavior.\n>\n\nI can and do understand SQL quite well and still likely would end up being\ntripped up by this (though not surprised when it happened) because I can't\nand don't want to think about what will happen if NULL appears in every\nexpression I write when a typical SQL query can contain tens of them.  I'd\nmuch rather assume that NULL inputs aren't going to happen and have the\nsystem tell me when that assumption is wrong.  Having to change my\nexpressions to: COALESCE(original_input, function(original_input,\nsomething_that_could_be_null_in_future_but_cannot_right_now)) just adds\nundesirable mental and typing overhead.\n\nDavid J.", "msg_date": "Wed, 23 Oct 2019 09:06:33 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 2019-10-22 18:06:39 -0700, David G. Johnston wrote:\n> On Tue, Oct 22, 2019 at 3:55 PM Peter J. 
Holzer <hjp-pgsql@hjp.at> wrote:\n> On 2019-10-20 13:20:23 -0700, Steven Pousty wrote:\n> > I would think though that raising an exception is better than a\n> > default behavior which deletes data.\n> > As an app dev I am quite used to all sorts of \"APIs\" throwing\n> > exceptions and have learned to deal with them.\n> >\n> > This is my way of saying that raising an exception is an\n> > improvement over the current situation. May not be the \"best\"\n> > solution but definitely an improvement.\n> \n> I somewhat disagree. SQL isn't in general a language which uses\n> exceptions a lot. It does have the value NULL to mean \"unknown\", and\n> generally unknown combined with something else results in an unknown\n> value again:\n> \n> [...] \n> \n> \n> Throwing an exception for a pure function seems \"un-SQLy\" to me. In\n> particular, jsonb_set does something similar for json values as replace\n> does for strings, so it should behave similarly.\n> \n> \n> Now if only the vast majority of users could have and keep this level of\n> understanding in mind while writing complex queries so that they remember to\n> always add protections to compensate for the unique design decision that SQL\n> has taken here...\n\nI grant that SQL NULL takes a bit to get used to. However, it is a core\npart of the SQL language and everyone who uses SQL must understand it (I\ndon't remember when I first stumbled across \"select * from t where c =\nNULL\" returning 0 rows, but it was probably within the first few days of\nusing a database). And personally I find it much easier to deal with\nconcept which are applied consistently across the whole language than\nthose which sometimes apply and sometimes don't seemingly at random,\njust because a developer thought it would be convenient for the specific\nuse-case they had in mind.\n\n hp\n\n-- \n _ | Peter J. 
Holzer | we build much bigger, better disasters now\n|_|_) | | because we have much more sophisticated\n| | | hjp@hjp.at | management tools.\n__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>", "msg_date": "Wed, 23 Oct 2019 20:33:06 +0200", "msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/23/19 5:42 AM, Laurenz Albe wrote:\n> David G. Johnston wrote:\n>> Now if only the vast majority of users could have and keep this level of understanding\n>> in mind while writing complex queries so that they remember to always add protections\n>> to compensate for the unique design decision that SQL has taken here...\n> \n> You can only say that if you don't understand NULL (you wouldn't be alone).\n> If I modify a JSON with an unknown value, the result is unknown.\n> This seems very intuitive to me.\n\nWould you expect modifying an array value with an unknown would result\nin the entire array being unknown?\n\n> One could argue that whoever uses SQL should understand SQL.\n> \n> But I believe that it is reasonable to suppose that many people who\n> use JSON in the database are more savvy with JSON than with SQL\n> (they might not have chosen JSON otherwise), so I agree that it makes\n> sense to change this particular behavior.\n> \n> Yours,\n> Laurenz Albe\n\nThat (generally) SQL NULL results in NULL for any operation has been\nbrought up multiple times in this thread, including above, as a rationale\nfor the current jsonb behavior. I don't think it is a valid argument.\n\nWhen examples are given, they typically are with scalar values where\nsuch behavior makes sense: the resulting scalar value has to be NULL\nor non-NULL, it can't be both.\n\nIt is less sensible with compound values where the rule can apply to\nindividual scalar components. 
And indeed that is what Postgresql does\nfor another compound type:\n\n # select array_replace(array[1,2,3],2,NULL);\n array_replace\n ---------------\n {1,NULL,3}\n\nThe returned value is not NULL. Why the inconsistency between the array\ntype and json type? Are there any cases other than json where the entire\ncompound value is set to NULL as a result of one of its components being\nNULL?\n\n\n", "msg_date": "Wed, 23 Oct 2019 13:00:47 -0600", "msg_from": "Stuart McGraw <smcg4191@mtneva.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Wed, Oct 23, 2019 at 12:01 PM Stuart McGraw <smcg4191@mtneva.com> wrote:\n> Why the inconsistency between the array\n> type and json type? Are there any cases other than json where the entire\n> compound value is set to NULL as a result of one of its components being\n> NULL?\n\nThat's a great point. It does look like hstore's delete / minus\noperator behaves like that, though:\n\n=# select 'a=>1,b=>2'::hstore - null;\n ?column?\n----------\n\n(1 row)\n\n\n", "msg_date": "Wed, 23 Oct 2019 18:00:19 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nOn Wed, 2019-10-23 at 20:33 +0200, Peter J. Holzer wrote:\n> \n> I grant that SQL NULL takes a bit to get used to. However, it is a\n> core\n> part of the SQL language and everyone who uses SQL must understand it\n> (I\n> don't remember when I first stumbled across \"select * from t where c\n> =\n> NULL\" returning 0 rows, but it was probably within the first few days\n> of\n> using a database). 
And personally I find it much easier to deal with\n> concept which are applied consistently across the whole language than\n> those which sometimes apply and sometimes don't seemingly at random,\n> just because a developer thought it would be convenient for the\n> specific\n> use-case they had in mind.\n> \n> hp\n> \n\n From the JSON spec:-\n\n3. Values\n\n A JSON value MUST be an object, array, number, or string, or one of\n the following three literal names:\n\n false\n null\n true\n\n The literal names MUST be lowercase. No other literal names are\n allowed.\n\nSo, you can't set a value associated to a key to SQL NULL. If a key\nshould not have a value then delete that key from the JSON.\n\nIf you decide your application is going to use one of those three\nliteral names, then you need to code accordingly. \n\nMy 2 cents.\n\n\n\n\n", "msg_date": "Thu, 24 Oct 2019 12:52:36 +1100", "msg_from": "rob stone <floriparob@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Wed, Oct 23, 2019 at 12:01 PM Stuart McGraw <smcg4191@mtneva.com> wrote:\n\n> When examples are given, they typically are with scalar values where\n> such behavior makes sense: the resulting scalar value has to be NULL\n> or non-NULL, it can't be both.\n>\n> It is less sensible with compound values where the rule can apply to\n> individual scalar components. And indeed that is what Postgresql does\n> for another compound type:\n>\n\nI agree completely. 
Scalar vs compound structure seems like the essential\ndifference.\nYou don't expect an operation on an element of a compound structure to be\nable to effect the entire structure.\nMaurice\n", "msg_date": "Wed, 23 Oct 2019 19:18:45 -0700", "msg_from": "Maurice Aubrey <maurice.aubrey@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Wed, 2019-10-23 at 13:00 -0600, Stuart McGraw wrote:\n> > You can only say that if you don't understand NULL (you wouldn't be alone).\n> > If I modify a JSON with an unknown value, the result is unknown.\n> > This seems very intuitive to me.\n> \n> Would you expect modifying an array value with an unknown would result\n> in the entire array being unknown?\n\nHm, yes, that is less intuitive.\nI was viewing a JSON as an atomic value above.\n\n> > One could argue that whoever uses SQL should understand SQL.\n> > \n> > But I believe that it is reasonable to suppose that many people who\n> > use JSON in the database are more savvy with JSON than with SQL\n> > (they might not have chosen JSON otherwise), so I agree that it makes\n> > sense to change this particular behavior.\n> \n> That (generally) SQL NULL results in NULL for any operation has been\n> brought up multiple times in this thread, including above, as a rationale\n> for the current jsonb 
behavior. I don't think it is a valid argument.\n> \n> When examples are given, they typically are with scalar values where\n> such behavior makes sense: the resulting scalar value has to be NULL\n> or non-NULL, it can't be both.\n> \n> It is less sensible with compound values where the rule can apply to\n> individual scalar components. And indeed that is what Postgresql does\n> for another compound type:\n> \n> # select array_replace(array[1,2,3],2,NULL);\n> array_replace\n> ---------------\n> {1,NULL,3}\n> \n> The returned value is not NULL. Why the inconsistency between the array\n> type and json type? Are there any cases other than json where the entire\n> compound value is set to NULL as a result of one of its components being\n> NULL?\n\nThat is a good point.\n\nI agree that the behavior should be changed.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Thu, 24 Oct 2019 21:15:42 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Wed, 2019-10-23 at 13:00 -0600, Stuart McGraw wrote:\n>> It is less sensible with compound values where the rule can apply to\n>> individual scalar components.\n\nI agree that JSON can sensibly be viewed as a composite value, but ...\n\n>> And indeed that is what Postgresql does\n>> for another compound type:\n>> \n>> # select array_replace(array[1,2,3],2,NULL);\n>> array_replace\n>> ---------------\n>> {1,NULL,3}\n>> \n>> The returned value is not NULL. Why the inconsistency between the array\n>> type and json type?\n\n... the flaw in this argument is that the array element is actually\na SQL NULL when we're done. To do something similar in the JSON case,\nwe have to translate SQL NULL to JSON null, and that's cheating to\nsome extent. 
They're not the same thing (and I'll generally resist\nproposals to, say, make SELECT 'null'::json IS NULL return true).\n\nMaybe it's okay to make this case work like that, but don't be too\nhigh and mighty about it being logically clean; it isn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Oct 2019 16:17:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/24/19 2:17 PM, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n>> On Wed, 2019-10-23 at 13:00 -0600, Stuart McGraw wrote:\n>>> It is less sensible with compound values where the rule can apply to\n>>> individual scalar components.\n> \n> I agree that JSON can sensibly be viewed as a composite value, but ...\n> \n>>> And indeed that is what Postgresql does\n>>> for another compound type:\n>>>\n>>> # select array_replace(array[1,2,3],2,NULL);\n>>> array_replace\n>>> ---------------\n>>> {1,NULL,3}\n>>>\n>>> The returned value is not NULL. Why the inconsistency between the array\n>>> type and json type?\n> \n> ... the flaw in this argument is that the array element is actually\n> a SQL NULL when we're done. To do something similar in the JSON case,\n> we have to translate SQL NULL to JSON null, and that's cheating to\n> some extent. 
They're not the same thing (and I'll generally resist\n> proposals to, say, make SELECT 'null'::json IS NULL return true).\n> \n> Maybe it's okay to make this case work like that, but don't be too\n> high and mighty about it being logically clean; it isn't.\n> \n> \t\t\tregards, tom lane\n\nSure, but my point was not that this was a perfect \"logically clean\"\nanswer, just that the argument, which was made multiple times, that\nthe entire result should be NULL because \"that's the way SQL NULLs\nwork\" is not really right.\n\nIt does seem to me that mapping NULL to \"null\" is likely a workable\napproach but that's just my uninformed opinion.\n\n\n", "msg_date": "Thu, 24 Oct 2019 22:48:58 -0600", "msg_from": "Stuart McGraw <smcg4191@mtneva.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On 10/21/19 9:28 AM, Andrew Dunstan wrote:\n> On 10/21/19 2:07 AM, Tomas Vondra wrote:\n>> On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:\n>>>> I think the general premise of this thread is that the application\n>>>> developer does not realize that may be necessary, because it's a bit\n>>>> surprising behavior, particularly when having more experience with\n>>>> other\n>>>> databases that behave differently. It's also pretty easy to not notice\n>>>> this issue for a long time, resulting in significant data loss.\n>>>>\n>>>> Let's say you're used to the MSSQL or MySQL behavior, you migrate your\n>>>> application to PostgreSQL or whatever - how do you find out about this\n>>>> behavior? Users are likely to visit\n>>>>\n>>>>    https://www.postgresql.org/docs/12/functions-json.html\n>>>>\n>>>> but that says nothing about how jsonb_set works with NULL values :-(\n>>>\n>>>\n>>> We should certainly fix that. 
I accept some responsibility for the\n>>> omission.\n>>>\n>> +1\n>>\n>>\n>\n> So let's add something to the JSON funcs page  like this:\n>\n>\n> Note: All the above functions except for json_build_object,\n> json_build_array, json_to_recordset, json_populate_record, and\n> json_populate_recordset and their jsonb equivalents are strict\n> functions. That is, if any argument is NULL the function result will be\n> NULL and the function won't even be called. Particular care should\n> therefore be taken to avoid passing NULL arguments to those functions\n> unless a NULL result is expected. This is particularly true of the\n> jsonb_set and jsonb_insert functions.\n>\n>\n>\n> (We do have a heck of a lot of Note: sections on that page)\n>\n>\n\n\nFor release 13+, I have given some more thought to what should be done.\nI think the bar for altering the behaviour of a function should be\nrather higher than we have in the present case, and the longer the\nfunction has been sanctioned by time the higher the bar should be.\nHowever, I think there is a case to be made for providing a non-strict\njsonb_set type function. To advance the discussion, attached is a POC\npatch that does that. This can also be done as an extension, meaning\nthat users of back branches could deploy it immediately. I've tested\nthis against release 12, but I think it could go probably all the way\nback to 9.5. 
The new function is named jsonb_set_lax, but I'm open to\nbikeshedding.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 28 Oct 2019 09:52:11 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\n\nOn Mon, Oct 28, 2019, at 08:52, Andrew Dunstan wrote:\n> \n> For release 13+, I have given some more thought to what should be done.\n> I think the bar for altering the behaviour of a function should be\n> rather higher than we have in the present case, and the longer the\n> function has been sanctioned by time the higher the bar should be.\n> However, I think there is a case to be made for providing a non-strict\n> jsonb_set type function. To advance the discussion, attached is a POC\n> patch that does that. This can also be done as an extension, meaning\n> that users of back branches could deploy it immediately. I've tested\n> this against release 12, but I think it could go probably all the way\n> back to 9.5. The new function is named jsonb_set_lax, but I'm open to\n> bikeshedding.\n> \n> \n\nThank you Andrew, and I understand the difficulty in making changes to functions that already exist in production deployments. 
An additional function like this would be helpful to many.\n\n\n-- \n Mark Felder\n ports-secteam & portmgr alumni\n feld@FreeBSD.org\n\n\n", "msg_date": "Mon, 28 Oct 2019 10:00:12 -0500", "msg_from": "\"Mark Felder\" <feld@FreeBSD.org>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hi\n\n\n\n> For release 13+, I have given some more thought to what should be done.\n> I think the bar for altering the behaviour of a function should be\n> rather higher than we have in the present case, and the longer the\n> function has been sanctioned by time the higher the bar should be.\n> However, I think there is a case to be made for providing a non-strict\n> jsonb_set type function. To advance th4e discussion, attached is a POC\n> patch that does that. This can also be done as an extension, meaning\n> that users of back branches could deploy it immediately. I've tested\n> this against release 12, but I think it could go probably all the way\n> back to 9.5. The new function is named jsonb_ set_lax, but I'm open to\n> bikeshedding.\n>\n>\nI am sending a review of this patch\n\n1. this patch does what was proposed and it is based on discussion.\n\n2. there are not any problem with patching or compilation, all regress\ntests passed.\n\n4. code looks well and it is well commented.\n\n5. 
the patch has enough regress tests\n\nMy notes:\n\na) missing documentation\n\nb) error message is not finalized\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"null jsonb value\")));\n\nAny other looks well, and this function can be very handy.\n\nRegards\n\nPavel", "msg_date": "Fri, 15 Nov 2019 20:14:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 11/15/19 2:14 PM, Pavel Stehule wrote:\n> Hi\n>\n>\n>\n>     For release 13+, I have given some more thought to what should be\n>     done.\n>     I think the bar for altering the behaviour of a function should be\n>     rather higher than we have in the present case, and the longer the\n>     function has been sanctioned by time the higher the bar should be.\n>     However, I think there is a case to be made for providing a non-strict\n>     jsonb_set type function. To advance th4e discussion, attached is a POC\n>     patch that does that. This can also be done as an extension, meaning\n>     that users of back branches could deploy it immediately. I've tested\n>     this against release 12, but I think it could go probably all the way\n>     back to 9.5. The new function is named jsonb_ set_lax, but I'm open to\n>     bikeshedding.\n>\n>\n> I am sending a review of this patch\n>\n> 1. this patch does what was proposed and it is based on discussion.\n>\n> 2. there are not any problem with patching or compilation, all regress\n> tests passed.\n>\n> 4. code looks well and it is well commented.\n>\n> 5. the patch has enough regress tests\n>\n> My notes:\n>\n> a) missing documentation\n>\n> b) error message is not finalized\n>\n> +       ereport(ERROR,\n> +               (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +                errmsg(\"null jsonb value\")));\n>\n> Any other looks well, and this function can be very handy.\n>\n>\n\nThanks for the review. 
I will add some docco.\n\n\nWhat would be a better error message? \"null jsonb replacement not\npermitted\"?\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 15 Nov 2019 15:01:19 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "pá 15. 11. 2019 v 21:01 odesílatel Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> napsal:\n\n>\n> On 11/15/19 2:14 PM, Pavel Stehule wrote:\n> > Hi\n> >\n> >\n> >\n> > For release 13+, I have given some more thought to what should be\n> > done.\n> > I think the bar for altering the behaviour of a function should be\n> > rather higher than we have in the present case, and the longer the\n> > function has been sanctioned by time the higher the bar should be.\n> > However, I think there is a case to be made for providing a\n> non-strict\n> > jsonb_set type function. To advance th4e discussion, attached is a\n> POC\n> > patch that does that. This can also be done as an extension, meaning\n> > that users of back branches could deploy it immediately. I've tested\n> > this against release 12, but I think it could go probably all the way\n> > back to 9.5. The new function is named jsonb_ set_lax, but I'm open\n> to\n> > bikeshedding.\n> >\n> >\n> > I am sending a review of this patch\n> >\n> > 1. this patch does what was proposed and it is based on discussion.\n> >\n> > 2. there are not any problem with patching or compilation, all regress\n> > tests passed.\n> >\n> > 4. code looks well and it is well commented.\n> >\n> > 5. 
the patch has enough regress tests\n> >\n> > My notes:\n> >\n> > a) missing documentation\n> >\n> > b) error message is not finalized\n> >\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"null jsonb value\")));\n> >\n> > Any other looks well, and this function can be very handy.\n> >\n> >\n>\n> Thanks for the review. I will add some docco.\n>\n>\n> What would be a better error message? \"null jsonb replacement not\n> permitted\"?\n>\n\nMaybe ERRCODE_NULL_VALUE_NOT_ALLOWED, and \"NULL is not allowed\",\nerrdetail - a exception due setting \"null_value_treatment\" =>\nraise_exception\nand maybe some errhint - \"Maybe you would to use Jsonb NULL - \"null\"::jsonb\"\n\nI don't know, but in this case, the exception should be verbose. This is\n\"rich\" function with lot of functionality\n\n\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan https://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>", "msg_date": "Fri, 15 Nov 2019 21:45:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Fri, Nov 15, 2019 at 09:45:59PM +0100, Pavel Stehule wrote:\n> Maybe ERRCODE_NULL_VALUE_NOT_ALLOWED, and \"NULL is not allowed\",\n> errdetail - a exception due setting \"null_value_treatment\" =>\n> raise_exception\n> and maybe some errhint - \"Maybe you would to use Jsonb NULL - \"null\"::jsonb\"\n> \n> I don't know, but in this case, the exception should be verbose. 
This is\n> \"rich\" function with lot of functionality\n\n@Andrew: This patch is waiting on input from you for a couple of days\nnow.\n--\nMichael", "msg_date": "Thu, 28 Nov 2019 11:35:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "\nOn 11/27/19 9:35 PM, Michael Paquier wrote:\n> On Fri, Nov 15, 2019 at 09:45:59PM +0100, Pavel Stehule wrote:\n>> Maybe ERRCODE_NULL_VALUE_NOT_ALLOWED, and \"NULL is not allowed\",\n>> errdetail - a exception due setting \"null_value_treatment\" =>\n>> raise_exception\n>> and maybe some errhint - \"Maybe you would to use Jsonb NULL - \"null\"::jsonb\"\n>>\n>> I don't know, but in this case, the exception should be verbose. This is\n>> \"rich\" function with lot of functionality\n> @Andrew: This patch is waiting on input from you for a couple of days\n> now.\n>\n\n\nWill get to this on Friday - tomorrow is Thanksgiving so I'm unlikely to\nget to it then.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 27 Nov 2019 22:45:54 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Thu, Nov 28, 2019 at 2:15 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 11/27/19 9:35 PM, Michael Paquier wrote:\n> > On Fri, Nov 15, 2019 at 09:45:59PM +0100, Pavel Stehule wrote:\n> >> Maybe ERRCODE_NULL_VALUE_NOT_ALLOWED, and \"NULL is not allowed\",\n> >> errdetail - a exception due setting \"null_value_treatment\" =>\n> >> raise_exception\n> >> and maybe some errhint - \"Maybe you would to use Jsonb NULL - \"null\"::jsonb\"\n> >>\n> >> I don't know, but in this case, the exception should be verbose. 
This is\n> >> \"rich\" function with lot of functionality\n> > @Andrew: This patch is waiting on input from you for a couple of days\n> > now.\n> >\n>\n>\n\n\nUpdated version including docco and better error message.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 7 Jan 2020 08:04:34 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hi\n\npo 6. 1. 2020 v 22:34 odesílatel Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> napsal:\n\n> On Thu, Nov 28, 2019 at 2:15 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n> >\n> >\n> > On 11/27/19 9:35 PM, Michael Paquier wrote:\n> > > On Fri, Nov 15, 2019 at 09:45:59PM +0100, Pavel Stehule wrote:\n> > >> Maybe ERRCODE_NULL_VALUE_NOT_ALLOWED, and \"NULL is not allowed\",\n> > >> errdetail - a exception due setting \"null_value_treatment\" =>\n> > >> raise_exception\n> > >> and maybe some errhint - \"Maybe you would to use Jsonb NULL -\n> \"null\"::jsonb\"\n> > >>\n> > >> I don't know, but in this case, the exception should be verbose. This\n> is\n> > >> \"rich\" function with lot of functionality\n> > > @Andrew: This patch is waiting on input from you for a couple of days\n> > > now.\n> > >\n> >\n> >\n>\n>\n> Updated version including docco and better error message.\n>\n> cheers\n>\n> andrew\n>\n\nI think so my objections are solved. I have small objection\n\n+ errdetail(\"exception raised due to \\\"null_value_treatment :=\n'raise_exception'\\\"\"),\n+ errhint(\"to avoid, either change the null_value_treatment argument or\nensure that an SQL NULL is not used\")));\n\n\"null_value_treatment := 'raise_exception'\\\"\"\n\nit use proprietary PostgreSQL syntax for named parameters. 
Better to use\nANSI/SQL syntax\n\n\"null_value_treatment => 'raise_exception'\\\"\"\n\nIt is fixed in attached patch\n\nsource compilation without warnings,\ncompilation docs without warnings\ncheck-world passed without any problems\n\nI'll mark this patch as ready for commiter\n\nThank you for your work\n\nPavel\n\n\n>\n> --\n> Andrew Dunstan https://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Tue, 7 Jan 2020 21:37:55 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Wed, Jan 8, 2020 at 7:08 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> po 6. 1. 2020 v 22:34 odesílatel Andrew Dunstan <andrew.dunstan@2ndquadrant.com> napsal:\n>>\n>>\n>> Updated version including docco and better error message.\n>>\n>> cheers\n>>\n>> andrew\n>\n>\n> I think so my objections are solved. I have small objection\n>\n> + errdetail(\"exception raised due to \\\"null_value_treatment := 'raise_exception'\\\"\"),\n> + errhint(\"to avoid, either change the null_value_treatment argument or ensure that an SQL NULL is not used\")));\n>\n> \"null_value_treatment := 'raise_exception'\\\"\"\n>\n> it use proprietary PostgreSQL syntax for named parameters. Better to use ANSI/SQL syntax\n>\n> \"null_value_treatment => 'raise_exception'\\\"\"\n>\n> It is fixed in attached patch\n>\n> source compilation without warnings,\n> compilation docs without warnings\n> check-world passed without any problems\n>\n> I'll mark this patch as ready for commiter\n>\n> Thank you for your work\n>\n\n\nThanks for the review. 
I propose to commit this shortly.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jan 2020 17:24:05 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "On Wed, Jan 08, 2020 at 05:24:05PM +1030, Andrew Dunstan wrote:\n>On Wed, Jan 8, 2020 at 7:08 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>\n>> Hi\n>>\n>> po 6. 1. 2020 v 22:34 odesílatel Andrew Dunstan <andrew.dunstan@2ndquadrant.com> napsal:\n>>>\n>>>\n>>> Updated version including docco and better error message.\n>>>\n>>> cheers\n>>>\n>>> andrew\n>>\n>>\n>> I think so my objections are solved. I have small objection\n>>\n>> + errdetail(\"exception raised due to \\\"null_value_treatment := 'raise_exception'\\\"\"),\n>> + errhint(\"to avoid, either change the null_value_treatment argument or ensure that an SQL NULL is not used\")));\n>>\n>> \"null_value_treatment := 'raise_exception'\\\"\"\n>>\n>> it use proprietary PostgreSQL syntax for named parameters. Better to use ANSI/SQL syntax\n>>\n>> \"null_value_treatment => 'raise_exception'\\\"\"\n>>\n>> It is fixed in attached patch\n>>\n>> source compilation without warnings,\n>> compilation docs without warnings\n>> check-world passed without any problems\n>>\n>> I'll mark this patch as ready for commiter\n>>\n>> Thank you for your work\n>>\n>\n>\n>Thanks for the review. 
I propose to commit this shortly.\n>\n\nNow that this was committed, I've updated the patch status accordingly.\n\nThanks!\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 18 Jan 2020 00:21:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "Hello,\n\nJanuary 17, 2020 5:21 PM, \"Tomas Vondra\" <tomas.vondra@2ndquadrant.com> wrote:\n\n> On Wed, Jan 08, 2020 at 05:24:05PM +1030, Andrew Dunstan wrote:\n> \n>> On Wed, Jan 8, 2020 at 7:08 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>> Hi\n>>> \n>>> po 6. 1. 2020 v 22:34 odesílatel Andrew Dunstan <andrew.dunstan@2ndquadrant.com> napsal:\n>> \n>> Updated version including docco and better error message.\n>> \n>> cheers\n>> \n>> andrew\n>>> I think so my objections are solved. I have small objection\n>>> \n>>> + errdetail(\"exception raised due to \\\"null_value_treatment := 'raise_exception'\\\"\"),\n>>> + errhint(\"to avoid, either change the null_value_treatment argument or ensure that an SQL NULL is\n>>> not used\")));\n>>> \n>>> \"null_value_treatment := 'raise_exception'\\\"\"\n>>> \n>>> it use proprietary PostgreSQL syntax for named parameters. Better to use ANSI/SQL syntax\n>>> \n>>> \"null_value_treatment => 'raise_exception'\\\"\"\n>>> \n>>> It is fixed in attached patch\n>>> \n>>> source compilation without warnings,\n>>> compilation docs without warnings\n>>> check-world passed without any problems\n>>> \n>>> I'll mark this patch as ready for commiter\n>>> \n>>> Thank you for your work\n>> \n>> Thanks for the review. 
I propose to commit this shortly.\n> \n> Now that this was committed, I've updated the patch status accordingly.\n\nThank you very much for coming together and finding a solution to this bug!\n\nAriadne\n\n\n", "msg_date": "Fri, 17 Jan 2020 23:28:09 +0000", "msg_from": "\"Ariadne Conill\" <ariadne@dereferenced.org>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" }, { "msg_contents": "> On Jan 17, 2020, at 4:28 PM, Ariadne Conill <ariadne@dereferenced.org> wrote:\n> \n> Hello,\n> \n> January 17, 2020 5:21 PM, \"Tomas Vondra\" <tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> wrote:\n> \n> Thank you very much for coming together and finding a solution to this bug!\n> \n> Ariadne\nLet’s leave it at “issue” :)", "msg_date": "Fri, 17 Jan 2020 16:30:59 -0700", "msg_from": "Rob Sargent <robjsargent@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_set() strictness considered harmful to data" } ]
[ { "msg_contents": "I am not sure if this causes any potential problems or not, but for\nconsistency of code seems we are missing below. All other places in code\nwhere sigsetjmp() exists for top level handling has error_context_stack set\nto NULL.\n\ndiff --git a/src/backend/postmaster/autovacuum.c\nb/src/backend/postmaster/autovacuum.c\nindex 073f313337..b06d0ad058 100644\n--- a/src/backend/postmaster/autovacuum.c\n+++ b/src/backend/postmaster/autovacuum.c\n@@ -1558,6 +1558,9 @@ AutoVacWorkerMain(int argc, char *argv[])\n */\n if (sigsetjmp(local_sigjmp_buf, 1) != 0)\n {\n+ /* Since not using PG_TRY, must reset error stack by hand */\n+ error_context_stack = NULL;\n+\n /* Prevents interrupts while cleaning up */\n HOLD_INTERRUPTS();\n\nThis was spotted by Paul during code inspection.\n\nI am not sure if this causes any potential problems or not, but for consistency of code seems we are missing below. All other places in code where sigsetjmp() exists for top level handling has error_context_stack set to NULL.diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.cindex 073f313337..b06d0ad058 100644--- a/src/backend/postmaster/autovacuum.c+++ b/src/backend/postmaster/autovacuum.c@@ -1558,6 +1558,9 @@ AutoVacWorkerMain(int argc, char *argv[])         */        if (sigsetjmp(local_sigjmp_buf, 1) != 0)        {+               /* Since not using PG_TRY, must reset error stack by hand */+               error_context_stack = NULL;+                /* Prevents interrupts while cleaning up */                HOLD_INTERRUPTS();This was spotted by Paul during code inspection.", "msg_date": "Fri, 18 Oct 2019 17:55:32 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Missing error_context_stack = NULL in AutoVacWorkerMain()" }, { "msg_contents": "On Fri, Oct 18, 2019 at 05:55:32PM -0700, Ashwin Agrawal wrote:\n> I am not sure if this causes any potential problems or not, but for\n> consistency of code 
seems we are missing below. All other places in code\n> where sigsetjmp() exists for top level handling has error_context_stack set\n> to NULL.\n\nResetting error_context_stack prevents calling any callbacks which may\nbe set. These would not be much useful in this context anyway, and\nvisibly that's actually not an issue with the autovacuum code so far\n(I don't recall seeing a custom callback setup in this area, but I may\nhave missed something). So fixing it would be a good thing actually,\non HEAD.\n\nAny thoughts from others?\n--\nMichael", "msg_date": "Mon, 21 Oct 2019 13:22:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Missing error_context_stack = NULL in AutoVacWorkerMain()" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Oct 18, 2019 at 05:55:32PM -0700, Ashwin Agrawal wrote:\n>> I am not sure if this causes any potential problems or not, but for\n>> consistency of code seems we are missing below. All other places in code\n>> where sigsetjmp() exists for top level handling has error_context_stack set\n>> to NULL.\n\n> Resetting error_context_stack prevents calling any callbacks which may\n> be set. These would not be much useful in this context anyway, and\n> visibly that's actually not an issue with the autovacuum code so far\n> (I don't recall seeing a custom callback setup in this area, but I may\n> have missed something). So fixing it would be a good thing actually,\n> on HEAD.\n\n> Any thoughts from others?\n\nThis seems like a real and possibly serious bug to me. Backend sigsetjmp\ncallers *must* clear error_context_stack (or restore it to a previous\nvalue), because if it isn't NULL it's surely pointing at garbage, ie a\nlocal variable that's no longer part of the valid stack.\n\nThe issue might be argued to be insignificant because the autovacuum\nworker is just going to do proc_exit anyway. 
But if it encountered\nanother error during proc_exit, elog.c might try to invoke error\ncallbacks using garbage callback data.\n\nIn short, I think we'd better back-patch too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Oct 2019 00:47:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing error_context_stack = NULL in AutoVacWorkerMain()" }, { "msg_contents": "I wrote:\n> The issue might be argued to be insignificant because the autovacuum\n> worker is just going to do proc_exit anyway. But if it encountered\n> another error during proc_exit, elog.c might try to invoke error\n> callbacks using garbage callback data.\n\nOh --- looking closer, proc_exit itself will clear error_context_stack\nbefore doing much. So a problem would only occur if we suffered an error\nduring EmitErrorReport, which seems somewhat unlikely. Still, it's bad\nthat this code isn't like all the others. There's certainly no downside\nto clearing the pointer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Oct 2019 00:53:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing error_context_stack = NULL in AutoVacWorkerMain()" }, { "msg_contents": "On Mon, Oct 21, 2019 at 12:47:40AM -0400, Tom Lane wrote:\n> This seems like a real and possibly serious bug to me. Backend sigsetjmp\n> callers *must* clear error_context_stack (or restore it to a previous\n> value), because if it isn't NULL it's surely pointing at garbage, ie a\n> local variable that's no longer part of the valid stack.\n\nSure. From my recollection of memories we never set it in autovacuum\ncode paths (including index entry deletions), so I don't think that we\nhave an actual live bug here.\n\n> The issue might be argued to be insignificant because the autovacuum\n> worker is just going to do proc_exit anyway. 
But if it encountered\n> another error during proc_exit, elog.c might try to invoke error\n> callbacks using garbage callback data.\n> \n> In short, I think we'd better back-patch too.\n\nOkay, no objections to back-patch.\n--\nMichael", "msg_date": "Mon, 21 Oct 2019 13:56:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Missing error_context_stack = NULL in AutoVacWorkerMain()" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Oct 21, 2019 at 12:47:40AM -0400, Tom Lane wrote:\n>> This seems like a real and possibly serious bug to me. Backend sigsetjmp\n>> callers *must* clear error_context_stack (or restore it to a previous\n>> value), because if it isn't NULL it's surely pointing at garbage, ie a\n>> local variable that's no longer part of the valid stack.\n\n> Sure. From my recollection of memories we never set it in autovacuum\n> code paths (including index entry deletions), so I don't think that we\n> have an actual live bug here.\n\nUh ... what about, say, auto-analyze on an expression index? That\ncould call user-defined PL functions and thus reach just about all\nof the backend.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Oct 2019 01:01:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing error_context_stack = NULL in AutoVacWorkerMain()" }, { "msg_contents": "On Mon, Oct 21, 2019 at 12:53:27AM -0400, Tom Lane wrote:\n> Oh --- looking closer, proc_exit itself will clear error_context_stack\n> before doing much. So a problem would only occur if we suffered an error\n> during EmitErrorReport, which seems somewhat unlikely. Still, it's bad\n> that this code isn't like all the others. There's certainly no downside\n> to clearing the pointer.\n\nGood point about index predicates/expressions. There is the elog()\nhook as well in the area, and it's hard to predict how people use\nthat. 
So applied and back-patched down 9.4.\n--\nMichael", "msg_date": "Wed, 23 Oct 2019 10:31:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Missing error_context_stack = NULL in AutoVacWorkerMain()" } ]
[ { "msg_contents": "For all ppc compilers, implement compare_exchange and fetch_add with asm.\n\nThis is more like how we handle s_lock.h and arch-x86.h.\n\nReviewed by Tom Lane.\n\nDiscussion: https://postgr.es/m/20191005173400.GA3979129@rfd.leadboat.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/30ee5d17c20dbb282a9952b3048d6ad52d56c371\n\nModified Files\n--------------\nconfigure | 40 ++++++\nconfigure.in | 20 +++\nsrc/include/pg_config.h.in | 3 +\nsrc/include/port/atomics.h | 11 +-\nsrc/include/port/atomics/arch-ppc.h | 231 +++++++++++++++++++++++++++++++++\nsrc/include/port/atomics/generic-xlc.h | 142 --------------------\nsrc/tools/pginclude/cpluspluscheck | 1 -\nsrc/tools/pginclude/headerscheck | 1 -\n8 files changed, 298 insertions(+), 151 deletions(-)", "msg_date": "Sat, 19 Oct 2019 03:27:18 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "pgsql: For all ppc compilers,\n implement compare_exchange and fetch_add " }, { "msg_contents": "Re: Noah Misch\n> For all ppc compilers, implement compare_exchange and fetch_add with asm.\n> \n> This is more like how we handle s_lock.h and arch-x86.h.\n> \n> Reviewed by Tom Lane.\n> \n> Discussion: https://postgr.es/m/20191005173400.GA3979129@rfd.leadboat.com\n\nHi,\n\npg-cron on powerpc/ppc64/ppc64el is raising this warning inside the\nppc atomics:\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -fdebug-prefix-map=/<<PKGBUILDDIR>>=. 
-fstack-protector-strong -Wformat -Werror=format-security -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unknown-warning-option -Wno-unused-parameter -Wno-maybe-uninitialized -Wno-implicit-fallthrough -Iinclude -I/usr/include/postgresql -I. -I./ -I/usr/include/postgresql/13/server -I/usr/include/postgresql/internal -Wdate-time -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2 -c -o src/job_metadata.o src/job_metadata.c\nIn file included from /usr/include/postgresql/13/server/port/atomics.h:74,\n from /usr/include/postgresql/13/server/utils/dsa.h:17,\n from /usr/include/postgresql/13/server/nodes/tidbitmap.h:26,\n from /usr/include/postgresql/13/server/access/genam.h:19,\n from src/job_metadata.c:21:\n/usr/include/postgresql/13/server/port/atomics/arch-ppc.h: In function ‘pg_atomic_compare_exchange_u32_impl’:\n/usr/include/postgresql/13/server/port/atomics/arch-ppc.h:97:42: error: comparison of integer expressions of different signedness: ‘uint32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]\n 97 | *expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n | ^~\nsrc/job_metadata.c: At top level:\ncc1: note: unrecognized command-line option ‘-Wno-unknown-warning-option’ may have been intended to silence earlier diagnostics\ncc1: all warnings being treated as errors\n\nLooking at the pg_atomic_compare_exchange_u32_impl, this looks like a\ngenuine problem:\n\nstatic inline bool\npg_atomic_compare_exchange_u32_impl(volatile pg_atomic_uint32 *ptr,\n uint32 *expected, uint32 newval)\n...\n if (__builtin_constant_p(*expected) &&\n *expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n\nIf *expected is an unsigned integer, comparing it to PG_INT16_MIN\nwhich is a negative number doesn't make sense.\n\nsrc/include/c.h:#define PG_INT16_MIN (-0x7FFF-1)\n\nChristoph\n\n\n", "msg_date": "Fri, 9 Oct 2020 11:28:25 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "powerpc pg_atomic_compare_exchange_u32_impl: error: comparison 
of\n integer expressions of different signedness (Re: pgsql: For all ppc\n compilers, implement compare_exchange and) fetch_add" }, { "msg_contents": "On Fri, Oct 09, 2020 at 11:28:25AM +0200, Christoph Berg wrote:\n> pg-cron on powerpc/ppc64/ppc64el is raising this warning inside the\n> ppc atomics:\n> \n> gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -fdebug-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unknown-warning-option -Wno-unused-parameter -Wno-maybe-uninitialized -Wno-implicit-fallthrough -Iinclude -I/usr/include/postgresql -I. -I./ -I/usr/include/postgresql/13/server -I/usr/include/postgresql/internal -Wdate-time -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2 -c -o src/job_metadata.o src/job_metadata.c\n> In file included from /usr/include/postgresql/13/server/port/atomics.h:74,\n> from /usr/include/postgresql/13/server/utils/dsa.h:17,\n> from /usr/include/postgresql/13/server/nodes/tidbitmap.h:26,\n> from /usr/include/postgresql/13/server/access/genam.h:19,\n> from src/job_metadata.c:21:\n> /usr/include/postgresql/13/server/port/atomics/arch-ppc.h: In function ‘pg_atomic_compare_exchange_u32_impl’:\n> /usr/include/postgresql/13/server/port/atomics/arch-ppc.h:97:42: error: comparison of integer expressions of different signedness: ‘uint32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]\n> 97 | *expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n> | ^~\n> src/job_metadata.c: At top level:\n> cc1: note: unrecognized command-line option ‘-Wno-unknown-warning-option’ may have been intended to silence earlier diagnostics\n> cc1: all warnings being 
treated as errors\n> \n> Looking at the pg_atomic_compare_exchange_u32_impl, this looks like a\n> genuine problem:\n> \n> static inline bool\n> pg_atomic_compare_exchange_u32_impl(volatile pg_atomic_uint32 *ptr,\n> uint32 *expected, uint32 newval)\n> ...\n> if (__builtin_constant_p(*expected) &&\n> *expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n> \n> If *expected is an unsigned integer, comparing it to PG_INT16_MIN\n> which is a negative number doesn't make sense.\n> \n> src/include/c.h:#define PG_INT16_MIN (-0x7FFF-1)\n\nAgreed. I'll probably fix it like this:\n\n--- a/src/include/port/atomics/arch-ppc.h\n+++ b/src/include/port/atomics/arch-ppc.h\n@@ -96,3 +96,4 @@ pg_atomic_compare_exchange_u32_impl(volatile pg_atomic_uint32 *ptr,\n \tif (__builtin_constant_p(*expected) &&\n-\t\t*expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n+\t\t(int32) *expected <= PG_INT16_MAX &&\n+\t\t(int32) *expected >= PG_INT16_MIN)\n \t\t__asm__ __volatile__(\n@@ -185,3 +186,4 @@ pg_atomic_compare_exchange_u64_impl(volatile pg_atomic_uint64 *ptr,\n \tif (__builtin_constant_p(*expected) &&\n-\t\t*expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n+\t\t(int64) *expected <= PG_INT16_MAX &&\n+\t\t(int64) *expected >= PG_INT16_MIN)\n \t\t__asm__ __volatile__(\n\n\n", "msg_date": "Fri, 9 Oct 2020 03:01:17 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: powerpc pg_atomic_compare_exchange_u32_impl: error: comparison\n of integer expressions of different signedness (Re: pgsql: For all ppc\n compilers, implement compare_exchange and) fetch_add" }, { "msg_contents": "On Fri, Oct 09, 2020 at 03:01:17AM -0700, Noah Misch wrote:\n> On Fri, Oct 09, 2020 at 11:28:25AM +0200, Christoph Berg wrote:\n> > pg-cron on powerpc/ppc64/ppc64el is raising this warning inside the\n> > ppc atomics:\n> > \n> > gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute 
-Wimplicit-fallthrough=3 -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -fdebug-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -std=c99 -Wall -Wextra -Werror -Wno-unknown-warning-option -Wno-unused-parameter -Wno-maybe-uninitialized -Wno-implicit-fallthrough -Iinclude -I/usr/include/postgresql -I. -I./ -I/usr/include/postgresql/13/server -I/usr/include/postgresql/internal -Wdate-time -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2 -c -o src/job_metadata.o src/job_metadata.c\n> > In file included from /usr/include/postgresql/13/server/port/atomics.h:74,\n> > from /usr/include/postgresql/13/server/utils/dsa.h:17,\n> > from /usr/include/postgresql/13/server/nodes/tidbitmap.h:26,\n> > from /usr/include/postgresql/13/server/access/genam.h:19,\n> > from src/job_metadata.c:21:\n> > /usr/include/postgresql/13/server/port/atomics/arch-ppc.h: In function ‘pg_atomic_compare_exchange_u32_impl’:\n> > /usr/include/postgresql/13/server/port/atomics/arch-ppc.h:97:42: error: comparison of integer expressions of different signedness: ‘uint32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]\n> > 97 | *expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n> > | ^~\n> > src/job_metadata.c: At top level:\n> > cc1: note: unrecognized command-line option ‘-Wno-unknown-warning-option’ may have been intended to silence earlier diagnostics\n> > cc1: all warnings being treated as errors\n> > \n> > Looking at the pg_atomic_compare_exchange_u32_impl, this looks like a\n> > genuine problem:\n> > \n> > static inline bool\n> > pg_atomic_compare_exchange_u32_impl(volatile pg_atomic_uint32 *ptr,\n> > uint32 *expected, uint32 newval)\n> > ...\n> > if (__builtin_constant_p(*expected) &&\n> > *expected <= PG_INT16_MAX && *expected >= PG_INT16_MIN)\n> > \n> > If *expected is an unsigned 
integer, comparing it to PG_INT16_MIN\n> > which is a negative number doesn't make sense.\n> > \n> > src/include/c.h:#define PG_INT16_MIN (-0x7FFF-1)\n> \n> Agreed. I'll probably fix it like this:\n\nThe first attachment fixes the matter you've reported. While confirming that,\nI observed that gcc builds don't even use the 64-bit code in arch-ppc.h.\nOops. The second attachment fixes that. I plan not to back-patch either of\nthese.", "msg_date": "Sat, 10 Oct 2020 22:10:43 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: powerpc pg_atomic_compare_exchange_u32_impl: error: comparison\n of integer expressions of different signedness (Re: pgsql: For all ppc\n compilers, implement compare_exchange and) fetch_add" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> The first attachment fixes the matter you've reported. While confirming that,\n> I observed that gcc builds don't even use the 64-bit code in arch-ppc.h.\n> Oops. The second attachment fixes that.\n\nI reviewed these, and tested the first one on a nearby Apple machine.\n(I lack access to 64-bit PPC, so I can't actually test the second.)\nThey look fine, and I confirmed by examining asm output that even\nthe rather-old-now gcc version that Apple last shipped for PPC does\nthe right thing with the conditionals.\n\n> I plan not to back-patch either of these.\n\nHmm, I'd argue for a back-patch. The issue of modern compilers\nwarning about the incorrect code will apply to all supported branches.\nMoreover, even if we don't use these code paths today, who's to say\nthat someone won't back-patch a bug fix that requires them? 
I do not\nthink it's unreasonable to expect these functions to work well in\nall branches that have them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Oct 2020 13:12:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: powerpc pg_atomic_compare_exchange_u32_impl: error: comparison of\n integer expressions of different signedness (Re: pgsql: For all ppc\n compilers, implement compare_exchange and) fetch_add" }, { "msg_contents": "Re: Tom Lane\n> > I plan not to back-patch either of these.\n> \n> Hmm, I'd argue for a back-patch. The issue of modern compilers\n> warning about the incorrect code will apply to all supported branches.\n> Moreover, even if we don't use these code paths today, who's to say\n> that someone won't back-patch a bug fix that requires them? I do not\n> think it's unreasonable to expect these functions to work well in\n> all branches that have them.\n\nOr remove them. (But fixing seems better.)\n\nChristoph\n\n\n", "msg_date": "Sun, 11 Oct 2020 20:35:13 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: powerpc pg_atomic_compare_exchange_u32_impl: error: comparison\n of integer expressions of different signedness (Re: pgsql: For all ppc\n compilers, implement compare_exchange and) fetch_add" }, { "msg_contents": "On Sun, Oct 11, 2020 at 08:35:13PM +0200, Christoph Berg wrote:\n> Re: Tom Lane\n>> Hmm, I'd argue for a back-patch. The issue of modern compilers\n>> warning about the incorrect code will apply to all supported branches.\n>> Moreover, even if we don't use these code paths today, who's to say\n>> that someone won't back-patch a bug fix that requires them? I do not\n>> think it's unreasonable to expect these functions to work well in\n>> all branches that have them.\n> \n> Or remove them. (But fixing seems better.)\n\nThe patch is not that invasive, so just fixing back-branches sounds\nlike a good idea to me. 
My 2c.\n--\nMichael", "msg_date": "Mon, 12 Oct 2020 10:16:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: powerpc pg_atomic_compare_exchange_u32_impl: error: comparison\n of integer expressions of different signedness (Re: pgsql: For all ppc\n compilers, implement compare_exchange and) fetch_add" }, { "msg_contents": "On Sun, Oct 11, 2020 at 01:12:40PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > The first attachment fixes the matter you've reported. While confirming that,\n> > I observed that gcc builds don't even use the 64-bit code in arch-ppc.h.\n> > Oops. The second attachment fixes that.\n> \n> I reviewed these, and tested the first one on a nearby Apple machine.\n> (I lack access to 64-bit PPC, so I can't actually test the second.)\n> They look fine, and I confirmed by examining asm output that even\n> the rather-old-now gcc version that Apple last shipped for PPC does\n> the right thing with the conditionals.\n\nThanks for reviewing and for mentioning that old-gcc behavior. I had a\ncomment asserting that gcc 7.2.0 didn't deduce constancy from those\nconditionals. Checking again now, it was just $SUBJECT preventing constancy\ndeduction. I made the patch remove that comment.\n\n> > I plan not to back-patch either of these.\n> \n> Hmm, I'd argue for a back-patch. The issue of modern compilers\n> warning about the incorrect code will apply to all supported branches.\n> Moreover, even if we don't use these code paths today, who's to say\n> that someone won't back-patch a bug fix that requires them? I do not\n> think it's unreasonable to expect these functions to work well in\n> all branches that have them.\n\nOkay, I've pushed with a back-patch. compare_exchange-ppc-immediate-v1.patch\naffects on code generation are limited to regress.o, so it's quite safe to\nback-patch. I just didn't think it was standard to back-patch for the purpose\nof removing a -Wsign-compare warning. 
(Every branch is noisy under\n-Wsign-compare.)\n\natomics-ppc64-gcc-v1.patch does change code generation, in the manner\ndiscussed in the big arch-ppc.h comment (starts with \"This mimics gcc\").\nStill, I've accepted the modest risk.\n\n\n", "msg_date": "Sun, 11 Oct 2020 21:46:40 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: powerpc pg_atomic_compare_exchange_u32_impl: error: comparison\n of integer expressions of different signedness (Re: pgsql: For all ppc\n compilers, implement compare_exchange and) fetch_add" } ]
[ { "msg_contents": "Hello,\n\n\nThe attached trivial patch fixes the initialization of the fake unlogged LSN. Currently, BootstrapXLOG() in initdb sets the initial fake unlogged LSN to FirstNormalUnloggedLSN (=1000), but the recovery and pg_resetwal sets it to 1. The patch modifies the latter two cases to match initdb.\n\nI don't know if this do actual harm, because the description of FirstNormalUnloggedLSN doesn't give me any idea.\n\n\nRegards\nTakayuki Tsunakawa", "msg_date": "Sat, 19 Oct 2019 05:03:00 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Sat, Oct 19, 2019 at 05:03:00AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> The attached trivial patch fixes the initialization of the fake\n> unlogged LSN. Currently, BootstrapXLOG() in initdb sets the initial\n> fake unlogged LSN to FirstNormalUnloggedLSN (=1000), but the\n> recovery and pg_resetwal sets it to 1. The patch modifies the\n> latter two cases to match initdb. \n> \n> I don't know if this do actual harm, because the description of\n> FirstNormalUnloggedLSN doesn't give me any idea. 
\n\nFrom xlogdefs.h added by 9155580:\n/*\n * First LSN to use for \"fake\" LSNs.\n *\n * Values smaller than this can be used for special per-AM purposes.\n */\n#define FirstNormalUnloggedLSN ((XLogRecPtr) 1000)\n\nSo it seems to me that you have caught a bug here, and that we had\nbetter back-patch to v12 so as recovery and pg_resetwal don't mess up\nwith AMs using lower values than that.\n--\nMichael", "msg_date": "Mon, 21 Oct 2019 14:03:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "At Mon, 21 Oct 2019 14:03:47 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sat, Oct 19, 2019 at 05:03:00AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> > The attached trivial patch fixes the initialization of the fake\n> > unlogged LSN. Currently, BootstrapXLOG() in initdb sets the initial\n> > fake unlogged LSN to FirstNormalUnloggedLSN (=1000), but the\n> > recovery and pg_resetwal sets it to 1. The patch modifies the\n> > latter two cases to match initdb. \n> > \n> > I don't know if this do actual harm, because the description of\n> > FirstNormalUnloggedLSN doesn't give me any idea. 
\n> \n> From xlogdefs.h added by 9155580:\n> /*\n> * First LSN to use for \"fake\" LSNs.\n> *\n> * Values smaller than this can be used for special per-AM purposes.\n> */\n> #define FirstNormalUnloggedLSN ((XLogRecPtr) 1000)\n> \n> So it seems to me that you have caught a bug here, and that we had\n> better back-patch to v12 so as recovery and pg_resetwal don't mess up\n> with AMs using lower values than that.\n\n+1\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 24 Oct 2019 13:14:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Sat, Oct 19, 2019 at 3:18 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> Hello,\n>\n>\n> The attached trivial patch fixes the initialization of the fake unlogged LSN. Currently, BootstrapXLOG() in initdb sets the initial fake unlogged LSN to FirstNormalUnloggedLSN (=1000), but the recovery and pg_resetwal sets it to 1. The patch modifies the latter two cases to match initdb.\n>\n> I don't know if this do actual harm, because the description of FirstNormalUnloggedLSN doesn't give me any idea.\n>\n\nI have noticed that in StartupXlog also we reset it with 1, you might\nwant to fix that as well?\n\nStartupXLOG\n{\n...\n/*\n* Initialize unlogged LSN. On a clean shutdown, it's restored from the\n* control file. 
On recovery, all unlogged relations are blown away, so\n* the unlogged LSN counter can be reset too.\n*/\nif (ControlFile->state == DB_SHUTDOWNED)\nXLogCtl->unloggedLSN = ControlFile->unloggedLSN;\nelse\nXLogCtl->unloggedLSN = 1;\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Oct 2019 13:57:45 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Mon, 21 Oct 2019 at 06:03, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Oct 19, 2019 at 05:03:00AM +0000, tsunakawa.takay@fujitsu.com\n> wrote:\n> > The attached trivial patch fixes the initialization of the fake\n> > unlogged LSN. Currently, BootstrapXLOG() in initdb sets the initial\n> > fake unlogged LSN to FirstNormalUnloggedLSN (=1000), but the\n> > recovery and pg_resetwal sets it to 1. The patch modifies the\n> > latter two cases to match initdb.\n> >\n> > I don't know if this do actual harm, because the description of\n> > FirstNormalUnloggedLSN doesn't give me any idea.\n>\n> From xlogdefs.h added by 9155580:\n> /*\n> * First LSN to use for \"fake\" LSNs.\n> *\n> * Values smaller than this can be used for special per-AM purposes.\n> */\n> #define FirstNormalUnloggedLSN ((XLogRecPtr) 1000)\n>\n> So it seems to me that you have caught a bug here, and that we had\n> better back-patch to v12 so as recovery and pg_resetwal don't mess up\n> with AMs using lower values than that.\n>\n\nI wonder why is that value 1000, rather than an aligned value or a whole\nWAL page?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\n", "msg_date": "Thu, 24 Oct 2019 11:57:33 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Thu, Oct 24, 2019 at 11:57:33AM +0100, Simon Riggs wrote:\n> I wonder why is that value 1000, rather than an aligned value or a whole\n> WAL page?\n\nGood question. 
Heikki, why this choice?\n--\nMichael", "msg_date": "Thu, 24 Oct 2019 21:08:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "From: Simon Riggs <simon@2ndquadrant.com>\r\n> \tFrom xlogdefs.h added by 9155580:\r\n> \t/*\r\n> \t * First LSN to use for \"fake\" LSNs.\r\n> \t *\r\n> \t * Values smaller than this can be used for special per-AM purposes.\r\n> \t */\r\n> \t#define FirstNormalUnloggedLSN ((XLogRecPtr) 1000)\r\n\r\nYeah, I had seen it, but I didn't understand what kind of usage is assumed.\r\n\r\n\r\n> I wonder why is that value 1000, rather than an aligned value or a whole WAL\r\n> page?\r\n\r\nI think that's because this fake LSN is not associated with the physical position of WAL records.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 25 Oct 2019 02:07:04 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix of fake unlogged LSN initialization" }, { "msg_contents": "From: Dilip Kumar <dilipbalaut@gmail.com>\r\n> I have noticed that in StartupXlog also we reset it with 1, you might\r\n> want to fix that as well?\r\n> \r\n> StartupXLOG\r\n> {\r\n> ...\r\n> /*\r\n> * Initialize unlogged LSN. On a clean shutdown, it's restored from the\r\n> * control file. On recovery, all unlogged relations are blown away, so\r\n> * the unlogged LSN counter can be reset too.\r\n> */\r\n> if (ControlFile->state == DB_SHUTDOWNED)\r\n> XLogCtl->unloggedLSN = ControlFile->unloggedLSN;\r\n> else\r\n> XLogCtl->unloggedLSN = 1;\r\n> \r\n\r\nThanks for taking a look. 
I'm afraid my patch includes the fix for this part.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 25 Oct 2019 02:11:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On 24/10/2019 15:08, Michael Paquier wrote:\n> On Thu, Oct 24, 2019 at 11:57:33AM +0100, Simon Riggs wrote:\n>> I wonder why is that value 1000, rather than an aligned value or a whole\n>> WAL page?\n> \n> Good question. Heikki, why this choice?\n\nNo particular reason, it's just a nice round value in decimal.\n\n- Heikki\n\n\n", "msg_date": "Fri, 25 Oct 2019 09:54:53 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Thu, Oct 24, 2019 at 01:57:45PM +0530, Dilip Kumar wrote:\n> I have noticed that in StartupXlog also we reset it with 1, you might\n> want to fix that as well?\n\nTsunakawa-san's patch fixes that spot already. 
Grepping for\nunloggedLSN in the code there is only pg_resetwal on top of what you\nare mentioning here.\n--\nMichael", "msg_date": "Fri, 25 Oct 2019 15:55:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Fri, Oct 25, 2019 at 02:07:04AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> From: Simon Riggs <simon@2ndquadrant.com>\n>> \tFrom xlogdefs.h added by 9155580:\n>> \t/*\n>> \t * First LSN to use for \"fake\" LSNs.\n>> \t *\n>> \t * Values smaller than this can be used for special per-AM purposes.\n>> \t */\n>> \t#define FirstNormalUnloggedLSN ((XLogRecPtr) 1000)\n> \n> Yeah, I had seen it, but I didn't understand what kind of usage is assumed.\n\nThere is an explanation in the commit message of 9155580: that's to\nmake an interlocking logic in GiST able to work where a valid LSN\nneeds to be used. So a magic value was just wanted.\n\nYour patch looks fine to me by the way after a second look, so I think\nthat we had better commit it and back-patch sooner than later. If\nthere are any objections or more comments, please feel free..\n--\nMichael", "msg_date": "Fri, 25 Oct 2019 15:58:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Fri, Oct 25, 2019 at 02:11:55AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> Thanks for taking a look. I'm afraid my patch includes the fix for this part.\n\nYes. 
And now this is applied and back-patched.\n--\nMichael", "msg_date": "Sun, 27 Oct 2019 13:58:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Fri, Oct 25, 2019 at 7:42 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Dilip Kumar <dilipbalaut@gmail.com>\n> > I have noticed that in StartupXlog also we reset it with 1, you might\n> > want to fix that as well?\n> >\n> > StartupXLOG\n> > {\n> > ...\n> > /*\n> > * Initialize unlogged LSN. On a clean shutdown, it's restored from the\n> > * control file. On recovery, all unlogged relations are blown away, so\n> > * the unlogged LSN counter can be reset too.\n> > */\n> > if (ControlFile->state == DB_SHUTDOWNED)\n> > XLogCtl->unloggedLSN = ControlFile->unloggedLSN;\n> > else\n> > XLogCtl->unloggedLSN = 1;\n> >\n>\n> Thanks for taking a look. I'm afraid my patch includes the fix for this part.\n\nOh, I missed that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 27 Oct 2019 12:25:07 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" }, { "msg_contents": "On Fri, Oct 25, 2019 at 09:54:53AM +0300, Heikki Linnakangas wrote:\n> No particular reason, it's just a nice round value in decimal.\n\nWell:\n$ pg_controldata | grep -i fake\nFake LSN counter for unlogged rels: 0/3E8\n\n;p\n--\nMichael", "msg_date": "Mon, 28 Oct 2019 17:27:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix of fake unlogged LSN initialization" } ]
[ { "msg_contents": "Hello hackers,\n\nI've noticed that the createuser utility supports two undocumented\noptions (--adduser, --no-adduser), that became obsolete in 2005.\nI believe that their existence should come to end someday (maybe\ntoday?). The patch to remove them is attached.\n\nBest regards.\nAlexander", "msg_date": "Sat, 19 Oct 2019 15:34:56 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Remove obsolete options for createuser" }, { "msg_contents": "On Sat, Oct 19, 2019 at 03:34:56PM +0300, Alexander Lakhin wrote:\n> I've noticed that the createuser utility supports two undocumented\n> options (--adduser, --no-adduser), that became obsolete in 2005.\n> I believe that their existence should come to end someday (maybe\n> today?). The patch to remove them is attached.\n\nThe commit in question is 8ae0d47 from 2005. So let's remove it. It\nis not even documented for ages.\n\nPerhaps somebody thinks it is not a good idea?\n--\nMichael", "msg_date": "Mon, 21 Oct 2019 13:33:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove obsolete options for createuser" }, { "msg_contents": "On Mon, Oct 21, 2019 at 01:33:08PM +0900, Michael Paquier wrote:\n> The commit in question is 8ae0d47 from 2005. So let's remove it. It\n> is not even documented for ages.\n> \n> Perhaps somebody thinks it is not a good idea?\n\nDone. A similar move could be done for --encrypted which has been\nmade a no-op as of eb61136, still that feels way too early.\n--\nMichael", "msg_date": "Wed, 23 Oct 2019 12:28:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove obsolete options for createuser" } ]
[ { "msg_contents": "Hello,\n\nThis patch propose a new way to sample statement to logs.\n\nAs a reminder, this feature was committed in PG12[1] then reverted[2] after the\nproposition of log_statement_sample_limit[3]\n\nThe first implementation added a new GUC to sample statement logged by\nlog_min_duration_statement. Then, we wanted to add the ability to log all\nstatement whose duration exceed log_statement_sample_limit.\n\nThis was confusing because log_min_duration behaves as a minimum to enable\nsampling. While log_statement_sample_limit behave as maximum to disable it.[4]\n\nTomas Vondra proposed to use two minimum thresholds:\n\n> 1) log_min_duration_sample - enables sampling of commands, using the\n> existing GUC log_statement_sample_rate\n> \n> 2) log_min_duration_statement - logs all commands exceeding this \n\nThis patch implement this idea.\n\nPS: I notice I forgot to mention \"Only superusers can change this setting\" in\nthe log_transaction_sample_rate documentation. It attached a second patch to fix\nthis.\n\n\nRegards,\n\n1:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=799e220346f1387e823a4dbdc3b1c8c3cdc5c3e0\n2:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=75506195da81d75597a4025b72f8367e6c45f60d\n3:\nhttps://www.postgresql.org/message-id/CAFj8pRDS8tQ3Wviw9%3DAvODyUciPSrGeMhJi_WPE%2BEB8%2B4gLL-Q%40mail.gmail.com\n4:\nhttps://www.postgresql.org/message-id/20190727221948.irg6sfqh57dynoc7%40development\n\n-- \nAdrien NAYRAT", "msg_date": "Sat, 19 Oct 2019 17:02:01 +0200", "msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>", "msg_from_op": true, "msg_subject": "Log statement sample - take two" }, { "msg_contents": "On Sat, Oct 19, 2019 at 05:02:01PM +0200, Adrien Nayrat wrote:\n>Hello,\n>\n>This patch propose a new way to sample statement to logs.\n>\n>As a reminder, this feature was committed in PG12[1] then reverted[2] after the\n>proposition of log_statement_sample_limit[3]\n>\n>The first implementation 
added a new GUC to sample statement logged by\n>log_min_duration_statement. Then, we wanted to add the ability to log all\n>statement whose duration exceed log_statement_sample_limit.\n>\n>This was confusing because log_min_duration behaves as a minimum to enable\n>sampling. While log_statement_sample_limit behave as maximum to disable it.[4]\n>\n>Tomas Vondra proposed to use two minimum thresholds:\n>\n>> 1) log_min_duration_sample - enables sampling of commands, using the\n>> existing GUC log_statement_sample_rate\n>>\n>> 2) log_min_duration_statement - logs all commands exceeding this\n>\n>This patch implement this idea.\n>\n>PS: I notice I forgot to mention \"Only superusers can change this setting\" in\n>the log_transaction_sample_rate documentation. It attached a second patch to fix\n>this.\n>\n\nSeems fine to me, mostly. I think the docs should explain how\nlog_min_duration_statement interacts with log_min_duration_sample.\nAttached is a patch doing that, by adding one para to each GUC, along\nwith some minor rewordings. I think the docs are mixing \"sampling\"\nvs. \"logging\" and \"durations\" vs. \"statements\" not sure.\n\nI also think the two new sampling GUCs (log_min_duration_sample and\nlog_statement_sample_rate) should be next to each other. We're not\nordering the GUCs alphabetically anyway.\n\nI plan to make those changes and push in a couple days.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 4 Nov 2019 02:08:07 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Log statement sample - take two" }, { "msg_contents": "On 11/4/19 2:08 AM, Tomas Vondra wrote:\n> \n> Seems fine to me, mostly. 
I think the docs should explain how\n> log_min_duration_statement interacts with log_min_duration_sample.\n> Attached is a patch doing that, by adding one para to each GUC, along\n> with some minor rewordings. I think the docs are mixing \"sampling\"\n> vs. \"logging\" and \"durations\" vs. \"statements\" not sure.\n\nThanks for the rewording, it's clearer now.\n\n> \n> I also think the two new sampling GUCs (log_min_duration_sample and\n> log_statement_sample_rate) should be next to each other. We're not\n> ordering the GUCs alphabetically anyway.\n\n+1\n\n> \n> I plan to make those changes and push in a couple days.\n> \n\nThanks!\n\n\n", "msg_date": "Mon, 4 Nov 2019 17:26:40 +0100", "msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>", "msg_from_op": true, "msg_subject": "Re: Log statement sample - take two" }, { "msg_contents": "Pushed, with some minor tweaks and rewording to the documentation.\n\nThe first bit, documenting the log_transaction_sample_rate as PG_SUSET,\ngot backpatched to 12, where it was introduced.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 6 Nov 2019 19:16:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Log statement sample - take two" }, { "msg_contents": "On 11/6/19 7:16 PM, Tomas Vondra wrote:\n> Pushed, with some minor tweaks and rewording to the documentation.\n> \n> The first bit, documenting the log_transaction_sample_rate as PG_SUSET,\n> got backpatched to 12, where it was introduced.\n> \n> regards\n> \n\nThanks Tomas!\n\n-- \nAdrien\n\n\n\n", "msg_date": "Wed, 6 Nov 2019 19:57:38 +0100", "msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>", "msg_from_op": true, "msg_subject": "Re: Log statement sample - take two" } ]
[ { "msg_contents": "Here’s a cut-down version of Umair Shahid’s blog post here:\n\nhttps://www.2ndquadrant.com/en/blog/postgresql-11-server-side-procedures-part-1/ <https://www.2ndquadrant.com/en/blog/postgresql-11-server-side-procedures-part-1/>\n__________\n\ncreate table t(k int primary key, v int not null);\n\ncreate or replace procedure p()\n language plpgsql\n security invoker\nas $$\nbegin\n insert into t(k, v) values(1, 17);\n rollback;\n insert into t(k, v) values(1, 42);\n commit;\nend\n$$;\n\ncall p();\nselect * from t order by k;\n__________\n\nIt runs without error and shows that the effect of “rollback” and “commit” is what the names of those statements tells you to expect.\n\nThe post starts with “Thanks to the work done by 2ndQuadrant contributors, we now have the ability to write Stored Procedures in PostgreSQL… [with] transaction control – allowing us to COMMIT and ROLLBACK inside procedures.”. I believe that Umair is referring to work done by Peter Eisentraut.\n\nBut simply change “security invoker” to “security definer” and rerun the test. 
You get the notorious error “2D000: invalid transaction termination”.\n\nPlease tell me that this is a plain bug—and not the intended semantics.\n\n\n\n", "msg_date": "Sun, 20 Oct 2019 22:43:12 -0700", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Cannot_commit_or_rollback_in_=E2=80=9Csecurity_definer?=\n =?utf-8?Q?=E2=80=9D_PL/pgSQL_proc?=" } ]
[ { "msg_contents": "While reviewing Pavel's patch for a new option in Drop Database\ncommand [1], I noticed that the check for CountDBSubscriptions in\ndropdb() is done after we kill the autovac workers and allowed other\nbackends to exit via CountOtherDBBackends. Now, if there are already\nactive subscriptions due to which we can't drop database, then it is\nbetter to fail before we do CountOtherDBBackends. It is also\nindicated in a comment (\ncheck this after other error conditions) that CountOtherDBBackends has\nto be done after error checks.\n\nSo, I feel we should rearrange the code to move the subscriptions\ncheck before CountOtherDBBackends as is done in the attached patch.\n\nThis has been introduced by below commit:\ncommit 665d1fad99e7b11678b0d5fa24d2898424243cd6\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: Thu Jan 19 12:00:00 2017 -0500\n\n Logical replication\n\nThoughts?\n\n[1] - https://commitfest.postgresql.org/25/2055/\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 21 Oct 2019 11:43:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "CountDBSubscriptions check in dropdb" }, { "msg_contents": "On Mon, Oct 21, 2019 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> While reviewing Pavel's patch for a new option in Drop Database\n> command [1], I noticed that the check for CountDBSubscriptions in\n> dropdb() is done after we kill the autovac workers and allowed other\n> backends to exit via CountOtherDBBackends. Now, if there are already\n> active subscriptions due to which we can't drop database, then it is\n> better to fail before we do CountOtherDBBackends. 
It is also\n> indicated in a comment (\n> check this after other error conditions) that CountOtherDBBackends has\n> to be done after error checks.\n>\n> So, I feel we should rearrange the code to move the subscriptions\n> check before CountOtherDBBackends as is done in the attached patch.\n>\n> This has been introduced by below commit:\n> commit 665d1fad99e7b11678b0d5fa24d2898424243cd6\n> Author: Peter Eisentraut <peter_e@gmx.net>\n> Date: Thu Jan 19 12:00:00 2017 -0500\n>\n> Logical replication\n>\n\nI am planning to commit and backpatch this till PG10 where it was\nintroduced on Monday morning (IST). Pavel agreed that this is a good\nchange in the other thread where we need it [1]. It is not an urgent\nthing, so I can wait if we think this is not a good time to commit\nthis. Let me know if anyone has objections?\n\n\n[1] - https://www.postgresql.org/message-id/CAFj8pRD75_wYzigvhk3fLcixGSkevwnYtdwE3gf%2Bb8EqRqbXSA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Nov 2019 19:08:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CountDBSubscriptions check in dropdb" }, { "msg_contents": "On 2019-11-08 14:38, Amit Kapila wrote:\n> I am planning to commit and backpatch this till PG10 where it was\n> introduced on Monday morning (IST). Pavel agreed that this is a good\n> change in the other thread where we need it [1]. It is not an urgent\n> thing, so I can wait if we think this is not a good time to commit\n> this. 
Let me know if anyone has objections?\n\nI think the change makes sense for master, but I don't think it should \nbe backpatched.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 Nov 2019 11:27:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: CountDBSubscriptions check in dropdb" }, { "msg_contents": "On Sat, Nov 9, 2019 at 3:58 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-11-08 14:38, Amit Kapila wrote:\n> > I am planning to commit and backpatch this till PG10 where it was\n> > introduced on Monday morning (IST). Pavel agreed that this is a good\n> > change in the other thread where we need it [1]. It is not an urgent\n> > thing, so I can wait if we think this is not a good time to commit\n> > this. Let me know if anyone has objections?\n>\n> I think the change makes sense for master, but I don't think it should\n> be backpatched.\n>\n\nFair enough. Attached patch with a proposed commit message.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 9 Nov 2019 17:37:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CountDBSubscriptions check in dropdb" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sat, Nov 9, 2019 at 3:58 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2019-11-08 14:38, Amit Kapila wrote:\n>>> I am planning to commit and backpatch this till PG10 where it was\n>>> introduced on Monday morning (IST). Pavel agreed that this is a good\n>>> change in the other thread where we need it [1]. It is not an urgent\n>>> thing, so I can wait if we think this is not a good time to commit\n>>> this. 
Let me know if anyone has objections?\n\n>> I think the change makes sense for master, but I don't think it should\n>> be backpatched.\n\n> Fair enough. Attached patch with a proposed commit message.\n\nI don't have an opinion on whether it's appropriate to back-patch\nthis, but I do have an opinion that Monday morning is the worst\npossible schedule for committing such a thing. We are already\npast the point where we can expect to get reports from the slowest\nbuildfarm critters (e.g. Valgrind builds) before Monday's\nback-branch wraps. Anything that is even slightly inessential\nshould be postponed until after those releases are tagged.\n\nIf it's HEAD-only, of course, it's business as usual.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 09 Nov 2019 11:08:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CountDBSubscriptions check in dropdb" }, { "msg_contents": "On Sat, Nov 9, 2019 at 9:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sat, Nov 9, 2019 at 3:58 PM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> >> On 2019-11-08 14:38, Amit Kapila wrote:\n> >>> I am planning to commit and backpatch this till PG10 where it was\n> >>> introduced on Monday morning (IST). Pavel agreed that this is a good\n> >>> change in the other thread where we need it [1]. It is not an urgent\n> >>> thing, so I can wait if we think this is not a good time to commit\n> >>> this. Let me know if anyone has objections?\n>\n> >> I think the change makes sense for master, but I don't think it should\n> >> be backpatched.\n>\n> > Fair enough. Attached patch with a proposed commit message.\n>\n> I don't have an opinion on whether it's appropriate to back-patch\n> this, but I do have an opinion that Monday morning is the worst\n> possible schedule for committing such a thing. 
We are already\n> past the point where we can expect to get reports from the slowest\n> buildfarm critters (e.g. Valgrind builds) before Monday's\n> back-branch wraps. Anything that is even slightly inessential\n> should be postponed until after those releases are tagged.\n>\n> If it's HEAD-only, of course, it's business as usual.\n>\n\nI am planning to go with Peter's suggestion and will push in\nHEAD-only. So, I think that should be fine.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 10 Nov 2019 08:48:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CountDBSubscriptions check in dropdb" }, { "msg_contents": "On Sun, Nov 10, 2019 at 08:48:27AM +0530, Amit Kapila wrote:\n> I am planning to go with Peter's suggestion and will push in\n> HEAD-only. So, I think that should be fine.\n\nI was just looking at this thread, and my take would be to just apply\nthat on HEAD. Good catch by the way.\n--\nMichael", "msg_date": "Mon, 11 Nov 2019 10:13:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: CountDBSubscriptions check in dropdb" }, { "msg_contents": "On Mon, Nov 11, 2019 at 6:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Nov 10, 2019 at 08:48:27AM +0530, Amit Kapila wrote:\n> > I am planning to go with Peter's suggestion and will push in\n> > HEAD-only. So, I think that should be fine.\n>\n> I was just looking at this thread, and my take would be to just apply\n> that on HEAD. Good catch by the way.\n>\n\nOkay, thanks for looking into it. Pushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Nov 2019 08:46:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CountDBSubscriptions check in dropdb" } ]
[ { "msg_contents": "Hi,\n\nI found that the argument name of XLogFileInit() is wrong in its comment.\nAttached is the patch that fixes that typo.\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Mon, 21 Oct 2019 15:57:43 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Fix comment in XLogFileInit()" }, { "msg_contents": "On Mon, Oct 21, 2019 at 12:28 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> I found that the argument name of XLogFileInit() is wrong in its comment.\n> Attached is the patch that fixes that typo.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 21 Oct 2019 13:58:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix comment in XLogFileInit()" }, { "msg_contents": "On Mon, Oct 21, 2019 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 21, 2019 at 12:28 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > I found that the argument name of XLogFileInit() is wrong in its comment.\n> > Attached is the patch that fixes that typo.\n> >\n>\n> LGTM.\n\nThanks for checking! Committed.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Thu, 24 Oct 2019 14:14:50 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix comment in XLogFileInit()" } ]
[ { "msg_contents": "Hi hackers,\nI found this issue when restart standby node and then try to connect it.\nIt return \"psql: FATAL: the database system is starting up\".\n\n\nThe steps to reproduce this issue.\n1. Create a session to run uncommit_trans.sql\n2. Create the other session to do checkpoint\n3. Restart standby node.\n4. standby node can not provide service even it has replayed all log files.\n\n\nI think the issue is in ProcArrayApplyRecoveryInfo function.\nThe standby state is in STANDBY_SNAPSHOT_PENDING, but the lastOverflowedXid is not committed.\n\n\nAny idea to fix this issue?\nThanks.", "msg_date": "Mon, 21 Oct 2019 15:40:24 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": true, "msg_subject": "[BUG] standby node can not provide service even it replays all log\n files" }, { "msg_contents": "Can we fix this issue like the following patch?\n\n\n$git diff src/backend/access/transam/xlog.c\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 49ae97d4459..0fbdf6fd64a 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -8365,7 +8365,7 @@ CheckRecoveryConsistency(void)\n * run? If so, we can tell postmaster that the database is consistent now,\n * enabling connections.\n */\n- if (standbyState == STANDBY_SNAPSHOT_READY &&\n+ if ((standbyState == STANDBY_SNAPSHOT_READY || standbyState == STANDBY_SNAPSHOT_PENDING) &&\n !LocalHotStandbyActive &&\n reachedConsistency &&\n IsUnderPostmaster)\n\n\n\n\n\n\nAt 2019-10-21 15:40:24, \"Thunder\" <thunder1@126.com> wrote:\n\nHi hackers,\nI found this issue when restart standby node and then try to connect it.\nIt return \"psql: FATAL: the database system is starting up\".\n\n\nThe steps to reproduce this issue.\n1. Create a session to run uncommit_trans.sql\n2. Create the other session to do checkpoint\n3. Restart standby node.\n4. 
standby node can not provide service even it has replayed all log files.\n\n\nI think the issue is in ProcArrayApplyRecoveryInfo function.\nThe standby state is in STANDBY_SNAPSHOT_PENDING, but the lastOverflowedXid is not committed.\n\n\nAny idea to fix this issue?\nThanks.", "msg_date": "Mon, 21 Oct 2019 16:12:46 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:[BUG] standby node can not provide service even it replays all\n log files" },
{ "msg_contents": "On Mon, Oct 21, 2019 at 4:13 AM Thunder <thunder1@126.com> wrote:\n> Can we fix this issue like the following patch?\n>\n> $git diff src/backend/access/transam/xlog.c\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index 49ae97d4459..0fbdf6fd64a 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -8365,7 +8365,7 @@ CheckRecoveryConsistency(void)\n> * run? If so, we can tell postmaster that the database is consistent now,\n> * enabling connections.\n> */\n> - if (standbyState == STANDBY_SNAPSHOT_READY &&\n> + if ((standbyState == STANDBY_SNAPSHOT_READY || standbyState == STANDBY_SNAPSHOT_PENDING) &&\n> !LocalHotStandbyActive &&\n> reachedConsistency &&\n> IsUnderPostmaster)\n\nI think that the issue you've encountered is design behavior. In\nother words, it's intended to work that way.\n\nThe comments for the code you propose to change say that we can allow\nconnections once we've got a valid snapshot. 
So presumably the effect\nof your change would be to allow connections even though we don't have\na valid snapshot.\n\nThat seems bad.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 21 Oct 2019 13:27:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] standby node can not provide service even it replays all\n log files" }, { "msg_contents": "Update the patch.\n\n1. The STANDBY_SNAPSHOT_PENDING state is set when we replay the first XLOG_RUNNING_XACTS and the sub transaction ids are overflow.\n2. When we log XLOG_RUNNING_XACTS in master node, can we assume that all xact IDS < oldestRunningXid are considered finished?\n3. If we can assume this, when we replay XLOG_RUNNING_XACTS and change standbyState to STANDBY_SNAPSHOT_PENDING, can we record oldestRunningXid to a shared variable, like procArray->oldest_running_xid?\n4. In standby node when call GetSnapshotData if procArray->oldest_running_xid is valid, can we set xmin to be procArray->oldest_running_xid?\n\nAppreciate any suggestion to this issue.\n\n\n\n\n\nAt 2019-10-22 01:27:58, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n>On Mon, Oct 21, 2019 at 4:13 AM Thunder <thunder1@126.com> wrote:\n>> Can we fix this issue like the following patch?\n>>\n>> $git diff src/backend/access/transam/xlog.c\n>> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n>> index 49ae97d4459..0fbdf6fd64a 100644\n>> --- a/src/backend/access/transam/xlog.c\n>> +++ b/src/backend/access/transam/xlog.c\n>> @@ -8365,7 +8365,7 @@ CheckRecoveryConsistency(void)\n>> * run? 
If so, we can tell postmaster that the database is consistent now,\n>> * enabling connections.\n>> */\n>> - if (standbyState == STANDBY_SNAPSHOT_READY &&\n>> + if ((standbyState == STANDBY_SNAPSHOT_READY || standbyState == STANDBY_SNAPSHOT_PENDING) &&\n>> !LocalHotStandbyActive &&\n>> reachedConsistency &&\n>> IsUnderPostmaster)\n>\n>I think that the issue you've encountered is design behavior. In\n>other words, it's intended to work that way.\n>\n>The comments for the code you propose to change say that we can allow\n>connections once we've got a valid snapshot. So presumably the effect\n>of your change would be to allow connections even though we don't have\n>a valid snapshot.\n>\n>That seems bad.\n>\n>-- \n>Robert Haas\n>EnterpriseDB: http://www.enterprisedb.com\n>The Enterprise PostgreSQL Company", "msg_date": "Tue, 22 Oct 2019 20:42:21 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:Re: [BUG] standby node can not provide service even it replays\n all log files" }, { "msg_contents": "Hello.\n\nAt Tue, 22 Oct 2019 20:42:21 +0800 (CST), Thunder <thunder1@126.com> wrote in \n> Update the patch.\n> \n> 1. The STANDBY_SNAPSHOT_PENDING state is set when we replay the first XLOG_RUNNING_XACTS and the sub transaction ids are overflow.\n> 2. When we log XLOG_RUNNING_XACTS in master node, can we assume that all xact IDS < oldestRunningXid are considered finished?\n\nUnfortunately we can't. Standby needs to know that the *standby's*\noldest active xid exceeds the pendig xmin, not master's. And it is\nalready processed in ProcArrayApplyRecoveryInfo. We cannot assume that\nthe oldest xids are not same on the both side in a replication pair.\n\n> 3. If we can assume this, when we replay XLOG_RUNNING_XACTS and change standbyState to STANDBY_SNAPSHOT_PENDING, can we record oldestRunningXid to a shared variable, like procArray->oldest_running_xid?\n> 4. 
In standby node when call GetSnapshotData if procArray->oldest_running_xid is valid, can we set xmin to be procArray->oldest_running_xid?\n> \n> Appreciate any suggestion to this issue.\n\nAt 2019-10-22 01:27:58, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n>On Mon, Oct 21, 2019 at 4:13 AM Thunder <thunder1@126.com> wrote:\n..\n> >I think that the issue you've encountered is design behavior. In\n> >other words, it's intended to work that way.\n> >\n> >The comments for the code you propose to change say that we can allow\n> >connections once we've got a valid snapshot. So presumably the effect\n> >of your change would be to allow connections even though we don't have\n> >a valid snapshot.\n> >\n> >That seems bad.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 23 Oct 2019 12:51:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] standby node can not provide service even it replays all\n log files" }, { "msg_contents": "At Wed, 23 Oct 2019 12:51:19 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Hello.\n> \n> At Tue, 22 Oct 2019 20:42:21 +0800 (CST), Thunder <thunder1@126.com> wrote in \n> > Update the patch.\n> > \n> > 1. The STANDBY_SNAPSHOT_PENDING state is set when we replay the first XLOG_RUNNING_XACTS and the sub transaction ids are overflow.\n> > 2. When we log XLOG_RUNNING_XACTS in master node, can we assume that all xact IDS < oldestRunningXid are considered finished?\n> \n> Unfortunately we can't. Standby needs to know that the *standby's*\n> oldest active xid exceeds the pendig xmin, not master's. And it is\n> already processed in ProcArrayApplyRecoveryInfo. We cannot assume that\n> the oldest xids are not same on the both side in a replication pair.\n\nCould we send a full xid list after new standby comes in? 
Or can\nSTART_REPLICATION return it?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 23 Oct 2019 13:18:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] standby node can not provide service even it replays all\n log files" }, { "msg_contents": "Thanks for replay.I feel confused about snapshot.\n\nAt 2019-10-23 11:51:19, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n>Hello.\n>\n>At Tue, 22 Oct 2019 20:42:21 +0800 (CST), Thunder <thunder1@126.com> wrote in \n>> Update the patch.\n>> \n>> 1. The STANDBY_SNAPSHOT_PENDING state is set when we replay the first XLOG_RUNNING_XACTS and the sub transaction ids are overflow.\n>> 2. When we log XLOG_RUNNING_XACTS in master node, can we assume that all xact IDS < oldestRunningXid are considered finished?\n>\n>Unfortunately we can't. Standby needs to know that the *standby's*\n>oldest active xid exceeds the pendig xmin, not master's. And it is\n>already processed in ProcArrayApplyRecoveryInfo. We cannot assume that\n\n>the oldest xids are not same on the both side in a replication pair.\n\n\nThis issue occurs when master does not commit the transaction which has lots of sub transactions, while we restart or create a new standby node.\nThe standby node can not provide service because of this issue.\nCan the standby have any active xid while it can not provide service?\n\n\n>\n>> 3. If we can assume this, when we replay XLOG_RUNNING_XACTS and change standbyState to STANDBY_SNAPSHOT_PENDING, can we record oldestRunningXid to a shared variable, like procArray->oldest_running_xid?\n>> 4. 
In standby node when call GetSnapshotData if procArray->oldest_running_xid is valid, can we set xmin to be procArray->oldest_running_xid?\n>> \n>> Appreciate any suggestion to this issue.\n>\n>At 2019-10-22 01:27:58, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n>>On Mon, Oct 21, 2019 at 4:13 AM Thunder <thunder1@126.com> wrote:\n>..\n>> >I think that the issue you've encountered is design behavior. In\n>> >other words, it's intended to work that way.\n>> >\n>> >The comments for the code you propose to change say that we can allow\n>> >connections once we've got a valid snapshot. So presumably the effect\n>> >of your change would be to allow connections even though we don't have\n>> >a valid snapshot.\n>> >\n>> >That seems bad.\n>\n>regards.\n>\n>-- \n>Kyotaro Horiguchi\n>NTT Open Source Software Center\n>", "msg_date": "Thu, 24 Oct 2019 17:37:52 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:Re: [BUG] standby node can not provide service even it replays\n all log files" },
{ "msg_contents": "Hi\nIn our usage scenario the standby node could be OOM killed and we have to create new standby node.\nIf master node has uncommitted long transaction and new standby node can not provide service.\nSo for us this is a critical issue.\n\n\nI do hope any suggestion to this issue.\nAnd can any one help to review the attached patch?\nThanks. \n\n\n\n\n\n\nAt 2019-10-22 20:42:21, \"Thunder\" <thunder1@126.com> wrote:\n\nUpdate the patch.\n\n1. The STANDBY_SNAPSHOT_PENDING state is set when we replay the first XLOG_RUNNING_XACTS and the sub transaction ids are overflow.\n2. When we log XLOG_RUNNING_XACTS in master node, can we assume that all xact IDS < oldestRunningXid are considered finished?\n3. 
If we can assume this, when we replay XLOG_RUNNING_XACTS and change standbyState to STANDBY_SNAPSHOT_PENDING, can we record oldestRunningXid to a shared variable, like procArray->oldest_running_xid?\n4. In standby node when call GetSnapshotData if procArray->oldest_running_xid is valid, can we set xmin to be procArray->oldest_running_xid?\n\nAppreciate any suggestion to this issue.\n\n\n\n\n\nAt 2019-10-22 01:27:58, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n>On Mon, Oct 21, 2019 at 4:13 AM Thunder <thunder1@126.com> wrote:\n>> Can we fix this issue like the following patch?\n>>\n>> $git diff src/backend/access/transam/xlog.c\n>> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n>> index 49ae97d4459..0fbdf6fd64a 100644\n>> --- a/src/backend/access/transam/xlog.c\n>> +++ b/src/backend/access/transam/xlog.c\n>> @@ -8365,7 +8365,7 @@ CheckRecoveryConsistency(void)\n>> * run? If so, we can tell postmaster that the database is consistent now,\n>> * enabling connections.\n>> */\n>> - if (standbyState == STANDBY_SNAPSHOT_READY &&\n>> + if ((standbyState == STANDBY_SNAPSHOT_READY || standbyState == STANDBY_SNAPSHOT_PENDING) &&\n>> !LocalHotStandbyActive &&\n>> reachedConsistency &&\n>> IsUnderPostmaster)\n>\n>I think that the issue you've encountered is design behavior. In\n>other words, it's intended to work that way.\n>\n>The comments for the code you propose to change say that we can allow\n>connections once we've got a valid snapshot. 
So presumably the effect\n>of your change would be to allow connections even though we don't have\n>a valid snapshot.\n>\n>That seems bad.\n>\n>-- \n>Robert Haas\n>EnterpriseDB: http://www.enterprisedb.com\n>The Enterprise PostgreSQL Company", "msg_date": "Mon, 28 Oct 2019 21:54:51 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:Re:Re: [BUG] standby node can not provide service even it\n replays all log files" }, { "msg_contents": "At Thu, 24 Oct 2019 17:37:52 +0800 (CST), Thunder <thunder1@126.com> wrote in \n> Thanks for replay.I feel confused about snapshot.\n> \n> At 2019-10-23 11:51:19, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> >Hello.\n> >\n> >At Tue, 22 Oct 2019 20:42:21 +0800 (CST), Thunder <thunder1@126.com> wrote in \n> >> Update the patch.\n> >> \n> >> 1. The STANDBY_SNAPSHOT_PENDING state is set when we replay the first XLOG_RUNNING_XACTS and the sub transaction ids are overflow.\n> >> 2. When we log XLOG_RUNNING_XACTS in master node, can we assume that all xact IDS < oldestRunningXid are considered finished?\n> >\n> >Unfortunately we can't. Standby needs to know that the *standby's*\n> >oldest active xid exceeds the pendig xmin, not master's. And it is\n> >already processed in ProcArrayApplyRecoveryInfo. We cannot assume that\n> \n> >the oldest xids are not same on the both side in a replication pair.\n> \n> \n> This issue occurs when master does not commit the transaction which has lots of sub transactions, while we restart or create a new standby node.\n> The standby node can not provide service because of this issue.\n> Can the standby have any active xid while it can not provide service?\n\nThe problem is not xid, but snapshot, information on what xids are not\ncommitted yet on the master. Standby cannot deterine what rows should\nbe visible without the information. The xid list is maintained using\nincoming commit records and vanishes on restart. 
So the restarted\nstandby needs non-subxid-overflown XLOG_RUNNING_XACTS to make sure the\nxid list is complete.\n\n> >> 3. If we can assume this, when we replay XLOG_RUNNING_XACTS and change standbyState to STANDBY_SNAPSHOT_PENDING, can we record oldestRunningXid to a shared variable, like procArray->oldest_running_xid?\n> >> 4. In standby node when call GetSnapshotData if procArray->oldest_running_xid is valid, can we set xmin to be procArray->oldest_running_xid?\n> >> \n> >> Appreciate any suggestion to this issue.\n\nSo, somehow we need to complete the KnownAssignedTransactionIds even\nif there's any subxid-overflown transactions. As mentioned upthread,\nI think we have at least the following choices.\n\n- Send back the complete xid list for START REPLICATION command from\n walreceiver.\n\n- The first XLOG_RUNNING_XACTS after a standby comes in while\n subxid-overflown transaction lives.\n\nI think the first is better.\n\nAny suggestions?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 29 Oct 2019 13:57:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] standby node can not provide service even it replays all\n log files" }, { "msg_contents": "Mmm..\n\nAt Tue, 29 Oct 2019 13:57:19 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> So, somehow we need to complete the KnownAssignedTransactionIds even\n> if there's any subxid-overflown transactions. As mentioned upthread,\n> I think we have at least the following choices.\n> \n> - Send back the complete xid list for START REPLICATION command from\n> walreceiver.\n> \n> - The first XLOG_RUNNING_XACTS after a standby comes in while\n> subxid-overflown transaction lives.\n> \n> I think the first is better.\n> \n> Any suggestions?\n\nOn second thought, for the first choice, currently we don't have a\nmeans to recall snapshot at arbitrary checkpoint time so it would be\nhard to do that. 
But the second choice doesn't seem a good way..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 29 Oct 2019 15:01:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] standby node can not provide service even it replays all\n log files" } ]
[ { "msg_contents": "Hi all,\n\nWhile digging into the issues reported lately about REINDEX\nCONCURRENTLY, I have bumped into the following, independent, issue:\n/* Now open the relation of the new index, a lock is also needed on it */\nnewIndexRel = index_open(indexId, ShareUpdateExclusiveLock)\n\nIn this code path, indexId is the OID od the old index copied, and\nnewIndexId is the OID of the new index created. So that's clearly\nincorrect, and the comment even says the intention. This causes for\nexample the same session lock to be taken twice on the old index, with\nthe new index remaining unprotected.\n\nAny objections if I fix this issue as per the attached?\n--\nMichael", "msg_date": "Mon, 21 Oct 2019 16:43:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Incorrect relation locked at beginning of REINDEX CONCURRENTLY" } ]
[ { "msg_contents": "\nBowerbird (Visual Studio 2017 / Windows 10 pro) just had a failure on\nthe pg_ctl test :\n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2019-10-21%2011%3A50%3A21>\n\n\nThere was a similar failure 17 days ago.\n\n\nI surmise that what's happening here is that the test is trying to read\ncurrent_logfiles while the server is writing it, so there's a race\ncondition.\n\n\nPerhaps what we need to do is have slurp_file sleep a bit and try again\non Windows if it gets EPERM, or else we need to have the pg_ctl test\nwait a bit before calling slurp_file. But we have seen occasional\nsimilar failures on other tests in Windows so a more systemic approach\nmight be better.\n\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 21 Oct 2019 11:07:28 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "intermittent test failure on Windows" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Bowerbird (Visual Studio 2017 / Windows 10 pro) just had a failure on\n> the pg_ctl test :\n> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2019-10-21%2011%3A50%3A21>\n\n> I surmise that what's happening here is that the test is trying to read\n> current_logfiles while the server is writing it, so there's a race\n> condition.\n\nHmm ... the server tries to replace current_logfiles atomically\nwith rename(), so this says that rename isn't atomic on Windows,\nwhich we knew already. Previous discussion (cf. commit d611175e5)\nimplies that an even worse failure condition is possible: the server\nmight fail to rename current_logfiles.tmp into place, just because\nsomebody is trying to read current_logfiles. 
Ugh.\n\nI found a thread about trying to make a really bulletproof rename()\nfor Windows:\n\nhttps://www.postgresql.org/message-id/flat/CAPpHfds7dyuGZt%2BPF2GL9qSSVV0OZnjNwqiCPjN7mirDw882tA%40mail.gmail.com\n\nbut it looks like we gave up in disgust.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Oct 2019 14:58:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: intermittent test failure on Windows" }, { "msg_contents": "\nOn 10/21/19 2:58 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Bowerbird (Visual Studio 2017 / Windows 10 pro) just had a failure on\n>> the pg_ctl test :\n>> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2019-10-21%2011%3A50%3A21>\n>> I surmise that what's happening here is that the test is trying to read\n>> current_logfiles while the server is writing it, so there's a race\n>> condition.\n> Hmm ... the server tries to replace current_logfiles atomically\n> with rename(), so this says that rename isn't atomic on Windows,\n> which we knew already. Previous discussion (cf. commit d611175e5)\n> implies that an even worse failure condition is possible: the server\n> might fail to rename current_logfiles.tmp into place, just because\n> somebody is trying to read current_logfiles. Ugh.\n>\n> I found a thread about trying to make a really bulletproof rename()\n> for Windows:\n>\n> https://www.postgresql.org/message-id/flat/CAPpHfds7dyuGZt%2BPF2GL9qSSVV0OZnjNwqiCPjN7mirDw882tA%40mail.gmail.com\n>\n> but it looks like we gave up in disgust.\n\n\nYeah. 
Looks like Alexander revived the discussion with a patch back in\nAugust, though, and it's in the next commitfest.\n<https://commitfest.postgresql.org/25/2230/>\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 22 Oct 2019 08:22:58 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: intermittent test failure on Windows" } ]
[ { "msg_contents": "After setting up logical replication of a slowly changing table using the\nbuilt in pub/sub facility, I noticed way more network traffic than made\nsense. Looking into I see that every transaction in that database on the\nmaster gets sent to the replica. 99.999+% of them are empty transactions\n('B' message and 'C' message with nothing in between) because the\ntransactions don't touch any tables in the publication, only non-replicated\ntables. Is doing it this way necessary for some reason? Couldn't we hold\nthe transmission of 'B' until something else comes along, and then if that\nnext thing is 'C' drop both of them?\n\nThere is a comment for WalSndPrepareWrite which seems to foreshadow such a\nthing, but I don't really see how to use it in this case. I want to drop\ntwo messages, not one.\n\n * Don't do anything lasting in here, it's quite possible that nothing will\nbe done\n * with the data.\n\nThis applies to all version which have support for pub/sub, including the\nrecent commits to 13dev.\n\nI've searched through the voluminous mailing list threads for when this\nfeature was being presented to see if it was already discussed, but since\nevery word I can think to search on occurs in virtually every message in\nthe threads in some context or another, I didn't have much luck.\n\nCheers,\n\nJeff", "msg_date": "Mon, 21 Oct 2019 20:20:21 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "logical replication empty transactions" },
{ "msg_contents": "Em seg., 21 de out. de 2019 às 21:20, Jeff Janes\n<jeff.janes@gmail.com> escreveu:\n>\n> After setting up logical replication of a slowly changing table using the built in pub/sub facility, I noticed way more network traffic than made sense. Looking into I see that every transaction in that database on the master gets sent to the replica. 99.999+% of them are empty transactions ('B' message and 'C' message with nothing in between) because the transactions don't touch any tables in the publication, only non-replicated tables. Is doing it this way necessary for some reason? Couldn't we hold the transmission of 'B' until something else comes along, and then if that next thing is 'C' drop both of them?\n>\nThat is not optimal. Those empty transactions is a waste of bandwidth.\nWe can suppress them if no changes will be sent. test_decoding\nimplements \"skip empty transaction\" as you described above and I did\nsomething similar to it. 
Patch is attached.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Fri, 8 Nov 2019 22:58:50 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Nov 8, 2019 at 8:59 PM Euler Taveira <euler@timbira.com.br> wrote:\n\n> Em seg., 21 de out. de 2019 às 21:20, Jeff Janes\n> <jeff.janes@gmail.com> escreveu:\n> >\n> > After setting up logical replication of a slowly changing table using\n> the built in pub/sub facility, I noticed way more network traffic than made\n> sense. Looking into I see that every transaction in that database on the\n> master gets sent to the replica. 99.999+% of them are empty transactions\n> ('B' message and 'C' message with nothing in between) because the\n> transactions don't touch any tables in the publication, only non-replicated\n> tables. Is doing it this way necessary for some reason? Couldn't we hold\n> the transmission of 'B' until something else comes along, and then if that\n> next thing is 'C' drop both of them?\n> >\n> That is not optimal. Those empty transactions is a waste of bandwidth.\n> We can suppress them if no changes will be sent. test_decoding\n> implements \"skip empty transaction\" as you described above and I did\n> something similar to it. Patch is attached.\n>\n\nThanks. I didn't think it would be that simple, because I thought we would\nneed some way to fake an acknowledgement for any dropped empty\ntransactions, to keep the LSN advancing and allow WAL to get recycled on\nthe master. But it turns out the opposite. While your patch drops the\nnetwork traffic by a lot, there is still a lot of traffic. Now it is\nkeep-alives, rather than 'B' and 'C'. 
I don't know why I am getting a few\nhundred keep alives every second when the timeouts are at their defaults,\nbut it is better than several thousand 'B' and 'C'.\n\nMy setup here was just to create, publish, and subscribe to a inactive\ndummy table, while having pgbench running on the master (with unpublished\ntables). I have not created an intentionally slow network, but I am\ntesting it over wifi, which is inherently kind of slow.\n\nCheers,\n\nJeff", "msg_date": "Sat, 9 Nov 2019 16:28:15 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Sat, Nov 9, 2019 at 7:29 AM Euler Taveira <euler@timbira.com.br> wrote:\n>\n> Em seg., 21 de out. de 2019 às 21:20, Jeff Janes\n> <jeff.janes@gmail.com> escreveu:\n> >\n> > After setting up logical replication of a slowly changing table using the built in pub/sub facility, I noticed way more network traffic than made sense. Looking into I see that every transaction in that database on the master gets sent to the replica. 99.999+% of them are empty transactions ('B' message and 'C' message with nothing in between) because the transactions don't touch any tables in the publication, only non-replicated tables. Is doing it this way necessary for some reason? Couldn't we hold the transmission of 'B' until something else comes along, and then if that next thing is 'C' drop both of them?\n> >\n> That is not optimal. Those empty transactions is a waste of bandwidth.\n> We can suppress them if no changes will be sent. test_decoding\n> implements \"skip empty transaction\" as you described above and I did\n> something similar to it. Patch is attached.\n\nI think this significantly reduces the network bandwidth for empty\ntransactions. 
I have briefly reviewed the patch and it looks good to\nme.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Mar 2020 09:00:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Mar 2, 2020 at 9:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Nov 9, 2019 at 7:29 AM Euler Taveira <euler@timbira.com.br> wrote:\n> >\n> > Em seg., 21 de out. de 2019 às 21:20, Jeff Janes\n> > <jeff.janes@gmail.com> escreveu:\n> > >\n> > > After setting up logical replication of a slowly changing table using the built in pub/sub facility, I noticed way more network traffic than made sense. Looking into I see that every transaction in that database on the master gets sent to the replica. 99.999+% of them are empty transactions ('B' message and 'C' message with nothing in between) because the transactions don't touch any tables in the publication, only non-replicated tables. Is doing it this way necessary for some reason? Couldn't we hold the transmission of 'B' until something else comes along, and then if that next thing is 'C' drop both of them?\n> > >\n> > That is not optimal. Those empty transactions is a waste of bandwidth.\n> > We can suppress them if no changes will be sent. test_decoding\n> > implements \"skip empty transaction\" as you described above and I did\n> > something similar to it. Patch is attached.\n>\n> I think this significantly reduces the network bandwidth for empty\n> transactions. I have briefly reviewed the patch and it looks good to\n> me.\n>\n\nOne thing that is not clear to me is how will we advance restart_lsn\nif we don't send any empty xact in a system where there are many such\nxacts? IIRC, the restart_lsn is advanced based on confirmed_flush lsn\nsent by subscriber. 
After this change, the subscriber won't be able\nto send the confirmed_flush and for a long time, we won't be able to\nadvance restart_lsn. Is that correct, if so, why do we think that is\nacceptable? One might argue that restart_lsn will be advanced as soon\nas we send the first non-empty xact, but not sure if that is good\nenough. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Mar 2020 16:56:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Mar 2, 2020 at 4:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 2, 2020 at 9:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sat, Nov 9, 2019 at 7:29 AM Euler Taveira <euler@timbira.com.br> wrote:\n> > >\n> > > Em seg., 21 de out. de 2019 às 21:20, Jeff Janes\n> > > <jeff.janes@gmail.com> escreveu:\n> > > >\n> > > > After setting up logical replication of a slowly changing table using the built in pub/sub facility, I noticed way more network traffic than made sense. Looking into I see that every transaction in that database on the master gets sent to the replica. 99.999+% of them are empty transactions ('B' message and 'C' message with nothing in between) because the transactions don't touch any tables in the publication, only non-replicated tables. Is doing it this way necessary for some reason? Couldn't we hold the transmission of 'B' until something else comes along, and then if that next thing is 'C' drop both of them?\n> > > >\n> > > That is not optimal. Those empty transactions is a waste of bandwidth.\n> > > We can suppress them if no changes will be sent. test_decoding\n> > > implements \"skip empty transaction\" as you described above and I did\n> > > something similar to it. 
Patch is attached.\n> >\n> > I think this significantly reduces the network bandwidth for empty\n> > transactions. I have briefly reviewed the patch and it looks good to\n> > me.\n> >\n>\n> One thing that is not clear to me is how will we advance restart_lsn\n> if we don't send any empty xact in a system where there are many such\n> xacts? IIRC, the restart_lsn is advanced based on confirmed_flush lsn\n> sent by subscriber. After this change, the subscriber won't be able\n> to send the confirmed_flush and for a long time, we won't be able to\n> advance restart_lsn. Is that correct, if so, why do we think that is\n> acceptable? One might argue that restart_lsn will be advanced as soon\n> as we send the first non-empty xact, but not sure if that is good\n> enough. What do you think?\n\nIt seems like a valid point. One idea could be that we can track the\nlast commit LSN which we streamed and if the confirmed flush location\nis already greater than that then even if we skip the sending the\ncommit message we can increase the confirm flush location locally.\nLogically, it should not cause any problem because once we have got\nthe confirmation for whatever we have streamed so far. So for other\ncommits(which we are skipping), we can we advance it locally because\nwe are sure that we don't have any streamed commit which is not yet\nconfirmed by the subscriber. 
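Stated as a rule, that idea might look like the following sketch (made-up function and names, not walsender code):

```python
# When an empty transaction's commit at skipped_commit_lsn is not
# streamed, the sender may advance confirmed_flush locally, but only
# if the subscriber has already confirmed everything that was
# actually streamed (i.e. up to the last streamed commit LSN).
def advance_on_skip(confirmed_flush, last_streamed_commit, skipped_commit_lsn):
    if confirmed_flush >= last_streamed_commit:
        # Nothing streamed is outstanding; safe to move forward locally.
        return max(confirmed_flush, skipped_commit_lsn)
    # A streamed commit is still unconfirmed; wait for the subscriber.
    return confirmed_flush

print(advance_on_skip(100, 90, 120))   # 120: all streamed work confirmed
print(advance_on_skip(100, 110, 120))  # 100: commit at 110 not yet confirmed
```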
This is just my thought, but if we\nthink from the code and design perspective then it might complicate\nthe things and sounds hackish.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Mar 2020 09:35:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, Mar 3, 2020 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Mar 2, 2020 at 4:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > One thing that is not clear to me is how will we advance restart_lsn\n> > if we don't send any empty xact in a system where there are many such\n> > xacts? IIRC, the restart_lsn is advanced based on confirmed_flush lsn\n> > sent by subscriber. After this change, the subscriber won't be able\n> > to send the confirmed_flush and for a long time, we won't be able to\n> > advance restart_lsn. Is that correct, if so, why do we think that is\n> > acceptable? One might argue that restart_lsn will be advanced as soon\n> > as we send the first non-empty xact, but not sure if that is good\n> > enough. What do you think?\n>\n> It seems like a valid point. One idea could be that we can track the\n> last commit LSN which we streamed and if the confirmed flush location\n> is already greater than that then even if we skip the sending the\n> commit message we can increase the confirm flush location locally.\n> Logically, it should not cause any problem because once we have got\n> the confirmation for whatever we have streamed so far. So for other\n> commits(which we are skipping), we can we advance it locally because\n> we are sure that we don't have any streamed commit which is not yet\n> confirmed by the subscriber.\n>\n\nWill this work after restart? 
Do you want to persist the information\nof last streamed commit LSN?\n\n> This is just my thought, but if we\n> think from the code and design perspective then it might complicate\n> the things and sounds hackish.\n>\n\nAnother idea could be that we stream the transaction after some\nthreshold number (say 100 or anything we think is reasonable) of empty\nxacts. This will reduce the traffic without tinkering with the core\ndesign too much.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Mar 2020 13:54:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, Mar 3, 2020 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 3, 2020 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Mar 2, 2020 at 4:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > One thing that is not clear to me is how will we advance restart_lsn\n> > > if we don't send any empty xact in a system where there are many such\n> > > xacts? IIRC, the restart_lsn is advanced based on confirmed_flush lsn\n> > > sent by subscriber. After this change, the subscriber won't be able\n> > > to send the confirmed_flush and for a long time, we won't be able to\n> > > advance restart_lsn. Is that correct, if so, why do we think that is\n> > > acceptable? One might argue that restart_lsn will be advanced as soon\n> > > as we send the first non-empty xact, but not sure if that is good\n> > > enough. What do you think?\n> >\n> > It seems like a valid point. 
One idea could be that we can track the\n> > last commit LSN which we streamed and if the confirmed flush location\n> > is already greater than that then even if we skip the sending the\n> > commit message we can increase the confirm flush location locally.\n> > Logically, it should not cause any problem because once we have got\n> > the confirmation for whatever we have streamed so far. So for other\n> > commits(which we are skipping), we can we advance it locally because\n> > we are sure that we don't have any streamed commit which is not yet\n> > confirmed by the subscriber.\n> >\n>\n> Will this work after restart? Do you want to persist the information\n> of last streamed commit LSN?\n\nWe will not persist the last streamed commit LSN, this variable is in\nmemory just to track whether we have got confirmation up to that\nlocation or not, once we have confirmation up to that location and if\nwe are not streaming any transaction (because those are empty\ntransactions) then we can just advance the confirmed flush location\nand based on that we can update the restart point as well and those\nwill be persisted. Basically, \"last streamed commit LSN\" is just a\nmarker that their still something pending to be confirmed from the\nsubscriber so until that we can not simply advance the confirm flush\nlocation or restart point based on the empty transactions. But, if\nthere is nothing pending to be confirmed we can advance. So if we are\nstreaming then we will get confirmation from subscriber otherwise we\ncan advance it locally. So, in either case, the confirmed flush\nlocation and restart point will keep moving.\n\n>\n> > This is just my thought, but if we\n> > think from the code and design perspective then it might complicate\n> > the things and sounds hackish.\n> >\n>\n> Another idea could be that we stream the transaction after some\n> threshold number (say 100 or anything we think is reasonable) of empty\n> xacts. 
This will reduce the traffic without tinkering with the core\n> design too much.\n\nYeah, this could be also an option.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Mar 2020 14:17:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, Mar 3, 2020 at 2:17 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Mar 3, 2020 at 1:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 3, 2020 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 2, 2020 at 4:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > One thing that is not clear to me is how will we advance restart_lsn\n> > > > if we don't send any empty xact in a system where there are many such\n> > > > xacts? IIRC, the restart_lsn is advanced based on confirmed_flush lsn\n> > > > sent by subscriber. After this change, the subscriber won't be able\n> > > > to send the confirmed_flush and for a long time, we won't be able to\n> > > > advance restart_lsn. Is that correct, if so, why do we think that is\n> > > > acceptable? One might argue that restart_lsn will be advanced as soon\n> > > > as we send the first non-empty xact, but not sure if that is good\n> > > > enough. What do you think?\n> > >\n> > > It seems like a valid point. One idea could be that we can track the\n> > > last commit LSN which we streamed and if the confirmed flush location\n> > > is already greater than that then even if we skip the sending the\n> > > commit message we can increase the confirm flush location locally.\n> > > Logically, it should not cause any problem because once we have got\n> > > the confirmation for whatever we have streamed so far. 
So for other\n> > > commits(which we are skipping), we can we advance it locally because\n> > > we are sure that we don't have any streamed commit which is not yet\n> > > confirmed by the subscriber.\n> > >\n> >\n> > Will this work after restart? Do you want to persist the information\n> > of last streamed commit LSN?\n>\n> We will not persist the last streamed commit LSN, this variable is in\n> memory just to track whether we have got confirmation up to that\n> location or not, once we have confirmation up to that location and if\n> we are not streaming any transaction (because those are empty\n> transactions) then we can just advance the confirmed flush location\n> and based on that we can update the restart point as well and those\n> will be persisted. Basically, \"last streamed commit LSN\" is just a\n> marker that their still something pending to be confirmed from the\n> subscriber so until that we can not simply advance the confirm flush\n> location or restart point based on the empty transactions. But, if\n> there is nothing pending to be confirmed we can advance. So if we are\n> streaming then we will get confirmation from subscriber otherwise we\n> can advance it locally. So, in either case, the confirmed flush\n> location and restart point will keep moving.\n>\n\nOkay, so this might work out, but it might look a bit ad-hoc.\n\n> >\n> > > This is just my thought, but if we\n> > > think from the code and design perspective then it might complicate\n> > > the things and sounds hackish.\n> > >\n> >\n> > Another idea could be that we stream the transaction after some\n> > threshold number (say 100 or anything we think is reasonable) of empty\n> > xacts. This will reduce the traffic without tinkering with the core\n> > design too much.\n>\n> Yeah, this could be also an option.\n>\n\nOkay.\n\nPeter E, Petr J, others, do you have any opinion on what is the best\nway forward for this thread? 
I think it would be really good if we\ncan reduce the network traffic due to these empty transactions.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Mar 2020 15:34:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, 3 Mar 2020 at 05:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> Another idea could be that we stream the transaction after some\n> threshold number (say 100 or anything we think is reasonable) of empty\n> xacts. This will reduce the traffic without tinkering with the core\n> design too much.\n>\n>\n> Amit, I suggest an interval to control this setting. Time is something we\nhave control; transactions aren't (depending on workload).\npg_stat_replication query interval usually is not milliseconds, however,\nyou can execute thousands of transactions in a second. If we agree on that\nidea I can add it to the patch.\n\n\nRegards,\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 3 Mar 2020 22:47:22 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 4, 2020 at 7:17 AM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> On Tue, 3 Mar 2020 at 05:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>>\n>> Another idea could be that we stream the transaction after some\n>> threshold number (say 100 or anything we think is reasonable) of empty\n>> xacts. This will reduce the traffic without tinkering with the core\n>> design too much.\n>>\n>>\n> Amit, I suggest an interval to control this setting. Time is something we have control; transactions aren't (depending on workload). pg_stat_replication query interval usually is not milliseconds, however, you can execute thousands of transactions in a second. If we agree on that idea I can add it to the patch.\n>\n\nDo you mean to say that if for some threshold interval we didn't\nstream any transaction, then we can send the next empty transaction to\nthe subscriber? If so, then isn't it possible that the empty xacts\nhappen irregularly after the specified interval and then we still end\nup sending them all. I might be missing something here, so can you\nplease explain your idea in detail? 
Basically, how will it work and\nhow will it solve the problem.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:12:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 4, 2020 at 9:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 4, 2020 at 7:17 AM Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n> >\n> > On Tue, 3 Mar 2020 at 05:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >>\n> >> Another idea could be that we stream the transaction after some\n> >> threshold number (say 100 or anything we think is reasonable) of empty\n> >> xacts. This will reduce the traffic without tinkering with the core\n> >> design too much.\n> >>\n> >>\n> > Amit, I suggest an interval to control this setting. Time is something we have control; transactions aren't (depending on workload). pg_stat_replication query interval usually is not milliseconds, however, you can execute thousands of transactions in a second. If we agree on that idea I can add it to the patch.\n> >\n>\n> Do you mean to say that if for some threshold interval we didn't\n> stream any transaction, then we can send the next empty transaction to\n> the subscriber? If so, then isn't it possible that the empty xacts\n> happen irregularly after the specified interval and then we still end\n> up sending them all. I might be missing something here, so can you\n> please explain your idea in detail? Basically, how will it work and\n> how will it solve the problem.\n\nIMHO, the threshold should be based on the commit LSN. Our main\nreason we want to send empty transactions after a certain\ntransaction/duration is that we want the restart_lsn to be moving\nforward so that if we need to restart the replication slot we don't\nneed to process a lot of extra WAL. 
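The restart exposure can be pictured with a toy calculation. LSNs are plain integers here and everything is illustrative; real slots track restart_lsn quite differently:

```python
# Toy estimate of how much WAL a restarted slot must re-decode when
# skipped (unpublished) commits never move the restart point.
def wal_to_redecode(commits, restart_lsn):
    # commits: list of (lsn, is_published); assume every published
    # commit is promptly confirmed and advances restart_lsn.
    for lsn, is_published in commits:
        if is_published:
            restart_lsn = lsn
    last_lsn = commits[-1][0] if commits else restart_lsn
    return last_lsn - restart_lsn

# One streamed commit at LSN 10, then ten big unpublished transactions:
commits = [(10, True)] + [(10 + i * 100, False) for i in range(1, 11)]
print(wal_to_redecode(commits, 0))   # 1000: all of it decoded again
```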
So assume we set the threshold\nbased on transaction count then there is still a possibility that we\nmight process a few very big transactions then we will have to process\nthem again after the restart. OTOH, if we set based on an interval\nthen even if there is not much work going on, still we end up sending\nthe empty transaction as pointed by Amit.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:51:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 4, 2020 at 9:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 4, 2020 at 9:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 4, 2020 at 7:17 AM Euler Taveira\n> > <euler.taveira@2ndquadrant.com> wrote:\n> > >\n> > > On Tue, 3 Mar 2020 at 05:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >>\n> > >>\n> > >> Another idea could be that we stream the transaction after some\n> > >> threshold number (say 100 or anything we think is reasonable) of empty\n> > >> xacts. This will reduce the traffic without tinkering with the core\n> > >> design too much.\n> > >>\n> > >>\n> > > Amit, I suggest an interval to control this setting. Time is something we have control; transactions aren't (depending on workload). pg_stat_replication query interval usually is not milliseconds, however, you can execute thousands of transactions in a second. If we agree on that idea I can add it to the patch.\n> > >\n> >\n> > Do you mean to say that if for some threshold interval we didn't\n> > stream any transaction, then we can send the next empty transaction to\n> > the subscriber? If so, then isn't it possible that the empty xacts\n> > happen irregularly after the specified interval and then we still end\n> > up sending them all. 
I might be missing something here, so can you\n> > please explain your idea in detail? Basically, how will it work and\n> > how will it solve the problem.\n>\n> IMHO, the threshold should be based on the commit LSN. Our main\n> reason we want to send empty transactions after a certain\n> transaction/duration is that we want the restart_lsn to be moving\n> forward so that if we need to restart the replication slot we don't\n> need to process a lot of extra WAL. So assume we set the threshold\n> based on transaction count then there is still a possibility that we\n> might process a few very big transactions then we will have to process\n> them again after the restart.\n>\n\nWon't the subscriber eventually send the flush location for the large\ntransactions which will move the restart_lsn?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 10:49:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 4, 2020 at 10:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 4, 2020 at 9:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Mar 4, 2020 at 9:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 4, 2020 at 7:17 AM Euler Taveira\n> > > <euler.taveira@2ndquadrant.com> wrote:\n> > > >\n> > > > On Tue, 3 Mar 2020 at 05:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >>\n> > > >>\n> > > >> Another idea could be that we stream the transaction after some\n> > > >> threshold number (say 100 or anything we think is reasonable) of empty\n> > > >> xacts. This will reduce the traffic without tinkering with the core\n> > > >> design too much.\n> > > >>\n> > > >>\n> > > > Amit, I suggest an interval to control this setting. Time is something we have control; transactions aren't (depending on workload). 
pg_stat_replication query interval usually is not milliseconds, however, you can execute thousands of transactions in a second. If we agree on that idea I can add it to the patch.\n> > > >\n> > >\n> > > Do you mean to say that if for some threshold interval we didn't\n> > > stream any transaction, then we can send the next empty transaction to\n> > > the subscriber? If so, then isn't it possible that the empty xacts\n> > > happen irregularly after the specified interval and then we still end\n> > > up sending them all. I might be missing something here, so can you\n> > > please explain your idea in detail? Basically, how will it work and\n> > > how will it solve the problem.\n> >\n> > IMHO, the threshold should be based on the commit LSN. Our main\n> > reason we want to send empty transactions after a certain\n> > transaction/duration is that we want the restart_lsn to be moving\n> > forward so that if we need to restart the replication slot we don't\n> > need to process a lot of extra WAL. So assume we set the threshold\n> > based on transaction count then there is still a possibility that we\n> > might process a few very big transactions then we will have to process\n> > them again after the restart.\n> >\n>\n> Won't the subscriber eventually send the flush location for the large\n> transactions which will move the restart_lsn?\n\nI meant large empty transactions (basically we can not send anything\nto the subscriber). So my point was if there are only large\ntransactions in the system which we can not stream because those\ntables are not published. Then keeping threshold based on transaction\ncount will not help much because even if we don't reach the\ntransaction count threshold, we still might need to process a lot of\ndata if we don't stream the commit for the empty transactions. 
So\ninstead of tracking transaction count can we track LSN, and LSN\ndifferent since we last stream some change cross the threshold then we\nwill stream the next empty transaction.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 11:15:52 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 4, 2020 at 11:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 4, 2020 at 10:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 4, 2020 at 9:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > >\n> > > IMHO, the threshold should be based on the commit LSN. Our main\n> > > reason we want to send empty transactions after a certain\n> > > transaction/duration is that we want the restart_lsn to be moving\n> > > forward so that if we need to restart the replication slot we don't\n> > > need to process a lot of extra WAL. So assume we set the threshold\n> > > based on transaction count then there is still a possibility that we\n> > > might process a few very big transactions then we will have to process\n> > > them again after the restart.\n> > >\n> >\n> > Won't the subscriber eventually send the flush location for the large\n> > transactions which will move the restart_lsn?\n>\n> I meant large empty transactions (basically we can not send anything\n> to the subscriber). So my point was if there are only large\n> transactions in the system which we can not stream because those\n> tables are not published. Then keeping threshold based on transaction\n> count will not help much because even if we don't reach the\n> transaction count threshold, we still might need to process a lot of\n> data if we don't stream the commit for the empty transactions. 
So\n> instead of tracking transaction count can we track LSN, and LSN\n> different since we last stream some change cross the threshold then we\n> will stream the next empty transaction.\n>\n\nYou have a point and it may be better to keep threshold based on LSN\nif we want to keep any threshold, but keeping on transaction count\nseems to be a bit straightforward. Let us see if anyone else has any\nopinion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 15:47:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 4, 2020 at 3:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 4, 2020 at 11:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Mar 4, 2020 at 10:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 4, 2020 at 9:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > >\n> > > > IMHO, the threshold should be based on the commit LSN. Our main\n> > > > reason we want to send empty transactions after a certain\n> > > > transaction/duration is that we want the restart_lsn to be moving\n> > > > forward so that if we need to restart the replication slot we don't\n> > > > need to process a lot of extra WAL. So assume we set the threshold\n> > > > based on transaction count then there is still a possibility that we\n> > > > might process a few very big transactions then we will have to process\n> > > > them again after the restart.\n> > > >\n> > >\n> > > Won't the subscriber eventually send the flush location for the large\n> > > transactions which will move the restart_lsn?\n> >\n> > I meant large empty transactions (basically we can not send anything\n> > to the subscriber). 
So my point was if there are only large\n> > transactions in the system which we can not stream because those\n> > tables are not published. Then keeping threshold based on transaction\n> > count will not help much because even if we don't reach the\n> > transaction count threshold, we still might need to process a lot of\n> > data if we don't stream the commit for the empty transactions. So\n> > instead of tracking transaction count can we track LSN, and LSN\n> > different since we last stream some change cross the threshold then we\n> > will stream the next empty transaction.\n> >\n>\n> You have a point and it may be better to keep threshold based on LSN\n> if we want to keep any threshold, but keeping on transaction count\n> seems to be a bit straightforward. Let us see if anyone else has any\n> opinion on this matter?\n\nOk, that makes sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 16:03:54 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 4, 2020 at 4:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Mar 4, 2020 at 3:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 4, 2020 at 11:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 4, 2020 at 10:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Mar 4, 2020 at 9:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > IMHO, the threshold should be based on the commit LSN. Our main\n> > > > > reason we want to send empty transactions after a certain\n> > > > > transaction/duration is that we want the restart_lsn to be moving\n> > > > > forward so that if we need to restart the replication slot we don't\n> > > > > need to process a lot of extra WAL. 
So assume we set the threshold\n> > > > > based on transaction count then there is still a possibility that we\n> > > > > might process a few very big transactions then we will have to process\n> > > > > them again after the restart.\n> > > > >\n> > > >\n> > > > Won't the subscriber eventually send the flush location for the large\n> > > > transactions which will move the restart_lsn?\n> > >\n> > > I meant large empty transactions (basically we can not send anything\n> > > to the subscriber). So my point was if there are only large\n> > > transactions in the system which we can not stream because those\n> > > tables are not published. Then keeping threshold based on transaction\n> > > count will not help much because even if we don't reach the\n> > > transaction count threshold, we still might need to process a lot of\n> > > data if we don't stream the commit for the empty transactions. So\n> > > instead of tracking transaction count can we track LSN, and LSN\n> > > different since we last stream some change cross the threshold then we\n> > > will stream the next empty transaction.\n> > >\n> >\n> > You have a point and it may be better to keep threshold based on LSN\n> > if we want to keep any threshold, but keeping on transaction count\n> > seems to be a bit straightforward. 
Let us see if anyone else has any\n> > opinion on this matter?\n>\n> Ok, that make sense.\n>\n\nEuler, can we try to update the patch based on the number of\ntransactions threshold and see how it works?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 5 Mar 2020 14:15:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, 5 Mar 2020 at 05:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Euler, can we try to update the patch based on the number of\n> transactions threshold and see how it works?\n>\n> I will do.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 5 Mar 2020 09:59:11 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, 2 Mar 2020 at 19:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> One thing that is not clear to me is how will we advance restart_lsn\n> if we don't send any empty xact in a system where there are many such\n> xacts?\n\nSame way we already do it for writes that are not replicated over\nlogical replication, like vacuum work etc. The upstream sends feedback\nwith reply-requested. The downstream replies. 
The upstream advances\nconfirmed_flush_lsn, and that lazily updates restart_lsn.\n\nThe bigger issue here is that if you don't send empty txns on logical\nreplication you don't get an eager, timely response from the\nreplica(s), which delays synchronous replication. You need to send\nempty txns when synchronous replication is enabled, or instead poke\nthe walsender to force immediate feedback with reply requested.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Fri, 6 Mar 2020 13:53:02 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "Hi,\n\nOn 2020-03-06 13:53:02 +0800, Craig Ringer wrote:\n> On Mon, 2 Mar 2020 at 19:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> > One thing that is not clear to me is how will we advance restart_lsn\n> > if we don't send any empty xact in a system where there are many such\n> > xacts?\n> \n> Same way we already do it for writes that are not replicated over\n> logical replication, like vacuum work etc. The upstream sends feedback\n> with reply-requested. The downstream replies. The upstream advances\n> confirmed_flush_lsn, and that lazily updates restart_lsn.\n\nIt'll still delay it a bit.\n\n\n> The bigger issue here is that if you don't send empty txns on logical\n> replication you don't get an eager, timely response from the\n> replica(s), which delays synchronous replication. You need to send\n> empty txns when synchronous replication is enabled, or instead poke\n> the walsender to force immediate feedback with reply requested.\n\nSomewhat independent from the issue at hand: It'd be really good if we\ncould evolve the syncrep framework to support per-database waiting... It\nshouldn't be that hard, and the current situation sucks quite a bit (and\nyes, I'm to blame).\n\nI'm not quite sure what you mean by \"poke the walsender\"? 
Kinda sounds\nlike sending a signal, but decoding happens inside the walsender,\nso there's no need for that. Do you just mean somehow requesting that\nwalsender sends a feedback message?\n\nTo address the volume we could:\n\n1a) Introduce a pgoutput message type to indicate that the LSN has\n advanced, without needing separate BEGIN/COMMIT. Right now BEGIN is\n 21 bytes, COMMIT is 26. But we really don't need that much here. A\n single message should do the trick.\n\n1b) Add a LogicalOutputPluginWriterUpdateProgress parameter (and\n possibly rename) that indicates that we are intentionally \"ignoring\"\n WAL. For walsender that callback then could check if it could just\n forward the position of the client (if it was entirely caught up\n before), or if it should send a feedback request (if syncrep is\n enabled, or distance is big).\n\n2) Reduce the rate of 'empty transaction'/feedback request messages. If\n we know that we're not going to be blocked waiting for more WAL, or\n blocked sending messages out to the network, we don't immediately need\n to send out the messages. Instead we could continue decoding until\n there's actual data, or until we're going to get blocked.\n\n We could e.g. have a new LogicalDecodingContext callback that is\n called whenever WalSndWaitForWal() would wait. That'd check if there's\n a pending \"need\" to send out a 'empty transaction'/feedback request\n message. 
The \"need\" flag would get cleared whenever we send out data\n bearing an LSN for other reasons.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Mar 2020 11:30:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, 10 Mar 2020 at 02:30, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-03-06 13:53:02 +0800, Craig Ringer wrote:\n> > On Mon, 2 Mar 2020 at 19:26, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > > One thing that is not clear to me is how will we advance restart_lsn\n> > > if we don't send any empty xact in a system where there are many such\n> > > xacts?\n> >\n> > Same way we already do it for writes that are not replicated over\n> > logical replication, like vacuum work etc. The upstream sends feedback\n> > with reply-requested. The downstream replies. The upstream advances\n> > confirmed_flush_lsn, and that lazily updates restart_lsn.\n>\n> It'll still delay it a bit.\n>\n\nRight, but we don't generally care because there's no sync rep txn waiting\nfor confirmation. If we lose progress due to a crash it doesn't matter. It\ndoes delay removal of old WAL a little, but it hardly matters.\n\n\n> Somewhat independent from the issue at hand: It'd be really good if we\n> could evolve the syncrep framework to support per-database waiting... It\n> shouldn't be that hard, and the current situation sucks quite a bit (and\n> yes, I'm to blame).\n>\n\nHardly, you just didn't get the chance to fix that on top of the umpteen\nother things you had to change to make all the logical stuff work. You\ndidn't break it, just didn't implement every single possible enhancement\nall at once. Shocking, I tell you.\n\n\nI'm not quite sure what you mean by \"poke the walsender\"? Kinda sounds\n> like sending a signal, but decoding happens inside after the walsender,\n> so there's no need for that. 
Do you just mean somehow requesting that\n> walsender sends a feedback message?\n>\n\nRight. I had in mind something like sending a ProcSignal via our funky\nmultiplexed signal mechanism to ask the walsender to immediately generate a\nkeepalive message with a reply-requested flag, then set the walsender's\nlatch so we wake it promptly.\n\n\n> To address the volume we could:\n>\n> 1a) Introduce a pgoutput message type to indicate that the LSN has\n> advanced, without needing separate BEGIN/COMMIT. Right now BEGIN is\n> 21 bytes, COMMIT is 26. But we really don't need that much here. A\n> single message should do the trick.\n>\n\nIt would. Is it worth caring though? Especially since it seems rather\nunlikely that the actual network data volume of begin/commit msgs will be\nmuch of a concern. It's not like we're PITRing logical streams, and if we\ndid, we could just filter out empty commits on the receiver side.\n\nThat message pretty much already exists in the form of a walsender\nkeepalive anyway so we might as well re-use that and not upset the protocol.\n\n\n> 1b) Add a LogicalOutputPluginWriterUpdateProgress parameter (and\n> possibly rename) that indicates that we are intentionally \"ignoring\"\n> WAL. For walsender that callback then could check if it could just\n> forward the position of the client (if it was entirely caught up\n> before), or if it should send a feedback request (if syncrep is\n> enabled, or distance is big).\n>\n\nI can see something like that being very useful, because at present only\nthe output plugin knows if a txn is \"empty\" as far as that particular slot\nand output plugin is concerned. The reorder buffering mechanism cannot do\nrelation-level filtering before it sends the changes to the output plugin\nduring ReorderBufferCommit, since it only knows about relfilenodes not\nrelation oids. 
And the output plugin might be doing finer grained filtering\nusing row-filter expressions or who knows what else.\n\nBut as described above that will only help for txns done in DBs other than\nthe one the logical slot is for or txns known to have an empty\nReorderBuffer when the commit is seen.\n\nIf there's a txn in the slot's db with a non-empty reorderbuffer, the\noutput plugin won't know if the txn is empty or not until it finishes\nprocessing all callbacks and sees the commit for the txn. So it will\ngenerally have emitted the Begin message on the wire by the time it knows\nit has nothing useful to say. And Pg won't know that this txn is empty as\nfar as this output plugin with this particular slot, set of output plugin\nparams, and current user-catalog state is concerned, so it won't have any\nway to call the output plugin's \"update progress\" callback instead of the\nusual begin/change/commit callbacks.\n\nBut I think we can already skip empty txns unless sync-rep is enabled with\nno core changes, and send empty txns as walsender keepalives instead, by\naltering only output plugins, like this:\n\n* Stash BEGIN data in plugin's LogicalDecodingContext.output_plugin_private\nwhen plugin's begin callback called, don't write anything to the outstream\n* Write out BEGIN message lazily when any other callback generates a\nmessage that does need to be written out\n* If no BEGIN written by the time COMMIT callback called, discard the\nCOMMIT too. Check if sync rep enabled. if it is,\ncall LogicalDecodingContext.update_progress from within the output plugin\ncommit handler, otherwise just ignore the commit totally. Probably by\ncalling OutputPluginUpdateProgress().\n\n We could e.g. have a new LogicalDecodingContext callback that is\n> called whenever WalSndWaitForWal() would wait. That'd check if there's\n> a pending \"need\" to send out a 'empty transaction'/feedback request\n> message. 
The \"need\" flag would get cleared whenever we send out data\n> bearing an LSN for other reasons.\n>\n\nI can see that being handy, yes. But it won't necessarily help with the\nsync rep issue, since other sync rep txns may continue to generate WAL\nwhile others wait for commit-confirmations that won't come from the logical\nreplica.\n\nWhile we're speaking of adding output plugin hooks, I keep on trying to\nthink of a sensible way to do a plugin-defined reply handler, so the\ndownstream end can send COPY BOTH messages of some new msgkind back to the\nwalsender, which will pass them to the output plugin if it implements the\nappropriate handle_reply_message (or whatever) callback. That much is\ntrivial to implement, where I keep getting a bit stuck is with whether\nthere's a sensible snapshot that can be set to call the output plugin reply\nhandler with. We wouldn't want to switch to a current non-historic snapshot\nbecause of all the cache flushes that'd cause, but there isn't necessarily\na valid and safe historic snapshot to set when we're not within\nReorderBufferCommit is there?\n\nI'd love to get rid of the need to \"connect back\" to a provider over plain\nlibpq connections to communicate with it. The ability to run SQL on the\nwalsender conn helps. But really, so much more would be possible if we\ncould just have the downstream end *reply* on the same connection using\nCOPY BOTH, much like it sends replay progress updates right now. 
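In rough, purely illustrative C, the dispatch side of that could look like the following (the MSGKIND_PLUGIN_REPLY byte, the struct layout and the handle_reply_message callback are all names I've invented for this sketch, not an existing API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Invented message-kind byte for a downstream-to-upstream plugin reply. */
#define MSGKIND_PLUGIN_REPLY 'p'

/* Cut-down stand-in for an output plugin's callback table. */
typedef struct PluginCallbacks
{
	/* Optional, invented callback for plugin-defined reply messages. */
	void		(*handle_reply_message) (const char *data, size_t len);
} PluginCallbacks;

/* Records the last reply seen, so the behaviour is observable. */
static size_t last_reply_len = 0;

static void
record_reply(const char *data, size_t len)
{
	(void) data;
	last_reply_len = len;
}

/*
 * Dispatch one COPY BOTH message received from the downstream.  Returns
 * true if the output plugin consumed it, false if the walsender should
 * fall back to its usual handling (standby status updates and so on).
 */
static bool
dispatch_downstream_message(const PluginCallbacks *cb, char kind,
							const char *data, size_t len)
{
	if (kind == MSGKIND_PLUGIN_REPLY && cb->handle_reply_message != NULL)
	{
		cb->handle_reply_message(data, len);
		return true;
	}
	return false;
}
```

The dispatch itself is trivial; as above, the hard part is deciding what snapshot, if any, such a callback could safely run under.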
It'd let\nus manage relation/attribute/type metadata caches better for example.\n\nThoughts?\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 13 Mar 2020 14:39:43 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "The patch no longer applies, because of additions in the test source. 
Otherwise, I have tested the patch and confirmed that updates and deletes on tables with deferred primary keys work with logical replication.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Fri, 24 Jul 2020 07:40:17 +0000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "Sorry, I replied in the wrong thread. Please ignore above mail.\n", "msg_date": "Fri, 24 Jul 2020 18:13:26 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "Hi,\n\nPlease see below review of the \n0001-Skip-empty-transactions-for-logical-replication.patch\n\nThe make check passes.\n\n\n  +               /* output BEGIN if we haven't yet */\n  +               if (!data->xact_wrote_changes)\n  +                       pgoutput_begin(ctx, txn);\n  +\n  +               data->xact_wrote_changes = true;\n  +\nIMO, xact_wrote_changes flag is better set inside the if \ncondition as it does not need to\nbe set repeatedly in subsequent calls to the same function.\n\n>\n> * Stash BEGIN data in plugin's \n> LogicalDecodingContext.output_plugin_private when plugin's begin \n> callback called, don't write anything to the outstream\n> * Write out BEGIN message lazily when any other callback generates a \n> message that does need to be written out\n> * If no BEGIN written by the time COMMIT callback called, discard the \n> COMMIT too. Check if sync rep enabled. if it is, \n> call LogicalDecodingContext.update_progress\n> from within the output plugin commit handler, otherwise just ignore \n> the commit totally. 
Probably by calling OutputPluginUpdateProgress().\n>\n\nI think the code in the patch is similar to what has been described by \nCraig in the above snippet,\nexcept instead of stashing the BEGIN message and sending the message \nlazily, it simply maintains a flag\nin LogicalDecodingContext.output_plugin_private which defers calling \noutput plugin's begin callback,\nuntil any other callback actually generates a remote write.\n\nAlso, the patch does not contain the last part where he describes \nhaving OutputPluginUpdateProgress()\nfor synchronous replication enabled transactions.\nHowever, some basic testing suggests that the patch does not have any \nnotable adverse effect on\neither the replication lag or the sync_rep performance.\n\nI performed tests by setting up publisher and subscriber on the same \nmachine with synchronous_commit = on and\nran pgbench -c 12 -j 6 -T 300 on unpublished pgbench tables.\n\nI see that confirmed_flush_lsn is catching up just fine without any \nnotable delay as compared to the test results without\nthe patch.\n\nAlso, the TPS for synchronous replication of empty txns with and without \nthe patch remains similar.\n\nHaving said that, these are initial findings and I understand better \nperformance tests are required to measure\nreduction in consumption of network bandwidth and impact on synchronous \nreplication and replication lag.\n\nThank you,\nRahila Syed\n", "msg_date": "Wed, 29 Jul 2020 20:08:06 +0530", "msg_from": "Rahila Syed <rahila.syed@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Jul 29, 2020 at 08:08:06PM +0530, Rahila Syed wrote:\n> The make check passes.\n\nSince then, the patch is failing to apply, waiting on author and the\nthread has died 6 weeks or so ago, so I am marking it as RwF in the\nCF.\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 14:29:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Sep 17, 2020 at 3:29 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jul 29, 2020 at 08:08:06PM +0530, Rahila Syed wrote:\n> > The make check passes.\n>\n> Since then, the patch is failing to apply, waiting on author and the\n> thread has died 6 weeks or so ago, so I am marking it as RwF in the\n> CF.\n>\n>\nI've rebased the patch and made changes so that the patch supports\n\"streaming in-progress transactions\" and handling of logical decoding\nmessages (transactional and non-transactional).\nI see that this patch not only makes sure that empty transactions are not\nsent but also does call OutputPluginUpdateProgress when an empty\ntransaction is not sent, as a result the confirmed_flush_lsn is kept\nmoving. 
I also see no hangs when synchronous_standby is configured.\nDo let me know your thoughts on this patch.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Thu, 15 Apr 2021 13:29:48 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Apr 15, 2021 at 1:29 PM Ajin Cherian <itsajin@gmail.com> wrote:\n\n>\n> I've rebased the patch and made changes so that the patch supports\n> \"streaming in-progress transactions\" and handling of logical decoding\n> messages (transactional and non-transactional).\n> I see that this patch not only makes sure that empty transactions are not\n> sent but also does call OutputPluginUpdateProgress when an empty\n> transaction is not sent, as a result the confirmed_flush_lsn is kept\n> moving. I also see no hangs when synchronous_standby is configured.\n> Do let me know your thoughts on this patch.\n>\n>\nRemoved some debug logs and typos.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Thu, 15 Apr 2021 16:38:40 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Apr 15, 2021 at 4:39 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n>\n>\n> On Thu, Apr 15, 2021 at 1:29 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>>\n>>\n>> I've rebased the patch and made changes so that the patch supports \"streaming in-progress transactions\" and handling of logical decoding\n>> messages (transactional and non-transactional).\n>> I see that this patch not only makes sure that empty transactions are not sent but also does call OutputPluginUpdateProgress when an empty\n>> transaction is not sent, as a result the confirmed_flush_lsn is kept moving. 
I also see no hangs when synchronous_standby is configured.\n>> Do let me know your thoughts on this patch.\n\nREVIEW COMMENTS\n\nI applied this patch to today's HEAD and successfully ran \"make check\"\nand also the subscription TAP tests.\n\nHere are some review comments:\n\n------\n\n1. The patch v3 applied OK but with whitespace warnings\n\n[postgres@CentOS7-x64 oss_postgres_2PC]$ git apply\n../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch\n../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch:98:\nindent with spaces.\n /* output BEGIN if we haven't yet, avoid for streaming and\nnon-transactional messages */\n../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch:99:\nindent with spaces.\n if (!data->xact_wrote_changes && !in_streaming && transactional)\nwarning: 2 lines add whitespace errors.\n\n------\n\n2. Please create a CF entry in [1] for this patch.\n\n------\n\n3. Patch comment\n\nThe comment describes the problem and then suddenly just says\n\"Postpone the BEGIN message until the first change.\"\n\nI suggest changing it to say more like... \"(blank line) This patch\naddresses the above problem by postponing the BEGIN message until the\nfirst change.\"\n\n------\n\n4. pgoutput.h\n\nMaybe for consistency with the context member, the comment for the new\nmember should be to the right instead of above it?\n\n@@ -20,6 +20,9 @@ typedef struct PGOutputData\n MemoryContext context; /* private memory context for transient\n * allocations */\n\n+ /* flag indicating whether messages have previously been sent */\n+ bool xact_wrote_changes;\n+\n\n------\n\n5. pgoutput.h\n\n+ /* flag indicating whether messages have previously been sent */\n\n\"previously been sent\" --> \"already been sent\" ??\n\n------\n\n6. pgoutput.h - misleading member name\n\nActually, now that I have read all the rest of the code and how this\nmember is used I feel that this name is very misleading. e.g.
For\n\"streaming\" case then you still are writing changes but are not\nsetting this member at all - therefore it does not always mean what it\nsays.\n\nI feel a better name for this would be something like\n\"sent_begin_txn\". Then if you have sent BEGIN it is true. If you\nhaven't sent BEGIN it is false. It eliminates all ambiguity naming it\nthis way instead.\n\n(This makes my feedback #5 redundant because the comment will be a bit\ndifferent if you do this).\n\n------\n\n7. pgoutput.c - function pgoutput_begin_txn\n\n@@ -345,6 +345,23 @@ pgoutput_startup(LogicalDecodingContext *ctx,\nOutputPluginOptions *opt,\n static void\n pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n\nI guess that you still needed to pass the txn because that is how the\nAPI is documented, right?\n\nBut I am wondering if you ought to flag it as unused so you wont get\nsome BF machine giving warnings about it.\n\ne.g. Syntax like this?\n\npgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN * txn) {\n(void)txn;\n...\n\n------\n\n8. pgoutput.c - function pgoutput_begin_txn\n\n@@ -345,6 +345,23 @@ pgoutput_startup(LogicalDecodingContext *ctx,\nOutputPluginOptions *opt,\n static void\n pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n+ PGOutputData *data = ctx->output_plugin_private;\n+\n+ /*\n+ * Don't send BEGIN message here. Instead, postpone it until the first\n+ * change. In logical replication, a common scenario is to replicate a set\n+ * of tables (instead of all tables) and transactions whose changes were on\n+ * table(s) that are not published will produce empty transactions. These\n+ * empty transactions will send BEGIN and COMMIT messages to subscribers,\n+ * using bandwidth on something with little/no use for logical replication.\n+ */\n+ data->xact_wrote_changes = false;\n+ elog(LOG,\"Holding of begin\");\n+}\n\nWhy is this loglevel LOG? Looks like leftover debugging.\n\n------\n\n9. 
pgoutput.c - function pgoutput_commit_txn\n\n@@ -384,8 +401,14 @@ static void\n pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n XLogRecPtr commit_lsn)\n {\n+ PGOutputData *data = ctx->output_plugin_private;\n+\n OutputPluginUpdateProgress(ctx);\n\n+ /* skip COMMIT message if nothing was sent */\n+ if (!data->xact_wrote_changes)\n+ return;\n+\n\nIn the case where you decided to do nothing does it make sense that\nyou still called the function OutputPluginUpdateProgress(ctx); ?\nI thought perhaps that your new check should come first so this call\nwould never happen.\n\n------\n\n10. pgoutput.c - variable declarations without casts\n\n+ PGOutputData *data = ctx->output_plugin_private;\n\nI noticed the new stack variable you declare have no casts.\n\nThis differs from the existing code which always looks like:\nPGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n\nThere are a couple of examples of this so please search new code to\nfind them all.\n\n------\n\n11. pgoutput.c - function pgoutput_change\n\n@@ -551,6 +574,13 @@ pgoutput_change(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n Assert(false);\n }\n\n+ /* output BEGIN if we haven't yet */\n+ if (!data->xact_wrote_changes && !in_streaming)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ data->xact_wrote_changes = true;\n+ }\n\nIf the variable is renamed as previously suggested then the assignment\ndata->sent_BEGIN_txn = true; can be assigned in just 1 common place\nINSIDE the pgoutput_begin function.\n\n------\n\n12.
pgoutput.c - pgoutput_truncate function\n\n@@ -693,6 +723,13 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n\n if (nrelids > 0)\n {\n+ /* output BEGIN if we haven't yet */\n+ if (!data->xact_wrote_changes && !in_streaming)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ data->xact_wrote_changes = true;\n+ }\n\n(same comment as above)\n\nIf the variable is renamed as previously suggested then the assignment\ndata->sent_BEGIN_txn = true; can be assigned in just 1 common place\nINSIDE the pgoutput_begin function.\n\n13. pgoutput.c - pgoutput_message\n\n@@ -725,6 +762,13 @@ pgoutput_message(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n if (in_streaming)\n xid = txn->xid;\n\n+ /* output BEGIN if we haven't yet, avoid for streaming and\nnon-transactional messages */\n+ if (!data->xact_wrote_changes && !in_streaming && transactional)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ data->xact_wrote_changes = true;\n+ }\n\n(same comment as above)\n\nIf the variable is renamed as previously suggested then the assignment\ndata->sent_BEGIN_txn = true; can be assigned in just 1 common place\nINSIDE the pgoutput_begin function.\n\n------\n\n14. Test Code.\n\nI noticed that there is no test code specifically for seeing if empty\ntransactions get sent or not. Is it possible to write such a test or\nis this traffic improvement only observable using the debugger?\n\n------\n[1] https://commitfest.postgresql.org/33/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 19 Apr 2021 18:22:22 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Apr 19, 2021 at 6:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n>\n> Here are some review comments:\n>\n> ------\n>\n> 1.
The patch v3 applied OK but with whitespace warnings\n>\n> [postgres@CentOS7-x64 oss_postgres_2PC]$ git apply\n>\n> ../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch\n>\n> ../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch:98:\n> indent with spaces.\n> /* output BEGIN if we haven't yet, avoid for streaming and\n> non-transactional messages */\n>\n> ../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch:99:\n> indent with spaces.\n> if (!data->xact_wrote_changes && !in_streaming && transactional)\n> warning: 2 lines add whitespace errors.\n>\n> ------\n>\n\nFixed.\n\n\n>\n> 2. Please create a CF entry in [1] for this patch.\n>\n> ------\n>\n> 3. Patch comment\n>\n> The comment describes the problem and then suddenly just says\n> \"Postpone the BEGIN message until the first change.\"\n>\n> I suggest changing it to say more like... \"(blank line) This patch\n> addresses the above problem by postponing the BEGIN message until the\n> first change.\"\n>\n> ------\n>\n>\nUpdated.\n\n\n> 4. pgoutput.h\n>\n> Maybe for consistency with the context member, the comment for the new\n> member should be to the right instead of above it?\n>\n> @@ -20,6 +20,9 @@ typedef struct PGOutputData\n> MemoryContext context; /* private memory context for transient\n> * allocations */\n>\n> + /* flag indicating whether messages have previously been sent */\n> + bool xact_wrote_changes;\n> +\n>\n> ------\n>\n> 5. pgoutput.h\n>\n> + /* flag indicating whether messages have previously been sent */\n>\n> \"previously been sent\" --> \"already been sent\" ??\n>\n> ------\n>\n> 6. pgoutput.h - misleading member name\n>\n> Actually, now that I have read all the rest of the code and how this\n> member is used I feel that this name is very misleading. e.g.
For\n> \"streaming\" case then you still are writing changes but are not\n> setting this member at all - therefore it does not always mean what it\n> says.\n>\n> I feel a better name for this would be something like\n> \"sent_begin_txn\". Then if you have sent BEGIN it is true. If you\n> haven't sent BEGIN it is false. It eliminates all ambiguity naming it\n> this way instead.\n>\n> (This makes my feedback #5 redundant because the comment will be a bit\n> different if you do this).\n>\n> ------\n>\n\nFixed above comments.\n\n>\n> 7. pgoutput.c - function pgoutput_begin_txn\n>\n> @@ -345,6 +345,23 @@ pgoutput_startup(LogicalDecodingContext *ctx,\n> OutputPluginOptions *opt,\n> static void\n> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n>\n> I guess that you still needed to pass the txn because that is how the\n> API is documented, right?\n>\n> But I am wondering if you ought to flag it as unused so you wont get\n> some BF machine giving warnings about it.\n>\n> e.g. Syntax like this?\n>\n> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN * txn) {\n> (void)txn;\n> ...\n>\n\nUpdated.\n\n> ------\n>\n> 8. pgoutput.c - function pgoutput_begin_txn\n>\n> @@ -345,6 +345,23 @@ pgoutput_startup(LogicalDecodingContext *ctx,\n> OutputPluginOptions *opt,\n> static void\n> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> + PGOutputData *data = ctx->output_plugin_private;\n> +\n> + /*\n> + * Don't send BEGIN message here. Instead, postpone it until the first\n> + * change. In logical replication, a common scenario is to replicate a set\n> + * of tables (instead of all tables) and transactions whose changes were\n> on\n> + * table(s) that are not published will produce empty transactions. 
These\n> + * empty transactions will send BEGIN and COMMIT messages to subscribers,\n> + * using bandwidth on something with little/no use for logical\n> replication.\n> + */\n> + data->xact_wrote_changes = false;\n> + elog(LOG,\"Holding of begin\");\n> +}\n>\n> Why is this loglevel LOG? Looks like leftover debugging.\n>\n\nRemoved.\n>\n>\n> ------\n>\n> 9. pgoutput.c - function pgoutput_commit_txn\n>\n> @@ -384,8 +401,14 @@ static void\n> pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> XLogRecPtr commit_lsn)\n> {\n> + PGOutputData *data = ctx->output_plugin_private;\n> +\n> OutputPluginUpdateProgress(ctx);\n>\n> + /* skip COMMIT message if nothing was sent */\n> + if (!data->xact_wrote_changes)\n> + return;\n> +\n>\n> In the case where you decided to do nothing does it make sense that\n> you still called the function OutputPluginUpdateProgress(ctx); ?\n> I thought perhaps that your new check should come first so this call\n> would never happen.\n>\n\nEven though the empty transaction is not sent, the LSN is tracked as\ndecoded, hence the progress needs to be updated.\n\n\n> ------\n>\n> 10. pgoutput.c - variable declarations without casts\n>\n> + PGOutputData *data = ctx->output_plugin_private;\n>\n> I noticed the new stack variable you declare have no casts.\n>\n> This differs from the existing code which always looks like:\n> PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n>\n> There are a couple of examples of this so please search new code to\n> find them all.\n>\n> -----\n>\n\nFixed.\n\n\n> 11.
pgoutput.c - function pgoutput_change\n>\n> @@ -551,6 +574,13 @@ pgoutput_change(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> Assert(false);\n> }\n>\n> + /* output BEGIN if we haven't yet */\n> + if (!data->xact_wrote_changes && !in_streaming)\n> + {\n> + pgoutput_begin(ctx, txn);\n> + data->xact_wrote_changes = true;\n> + }\n>\n> If the variable is renamed as previously suggested then the assignment\n> data->sent_BEGIN_txn = true; can be assigned in just 1 common place\n> INSIDE the pgoutput_begin function.\n>\n> ------\n>\n\nUpdated.\n>\n>\n> 12. pgoutput.c - pgoutput_truncate function\n>\n> @@ -693,6 +723,13 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n>\n> if (nrelids > 0)\n> {\n> + /* output BEGIN if we haven't yet */\n> + if (!data->xact_wrote_changes && !in_streaming)\n> + {\n> + pgoutput_begin(ctx, txn);\n> + data->xact_wrote_changes = true;\n> + }\n>\n> (same comment as above)\n>\n> If the variable is renamed as previously suggested then the assignment\n> data->sent_BEGIN_txn = true; can be assigned in just 1 common place\n> INSIDE the pgoutput_begin function.\n>\n> 13. pgoutput.c - pgoutput_message\n>\n> @@ -725,6 +762,13 @@ pgoutput_message(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> if (in_streaming)\n> xid = txn->xid;\n>\n> + /* output BEGIN if we haven't yet, avoid for streaming and\n> non-transactional messages */\n> + if (!data->xact_wrote_changes && !in_streaming && transactional)\n> + {\n> + pgoutput_begin(ctx, txn);\n> + data->xact_wrote_changes = true;\n> + }\n>\n> (same comment as above)\n>\n> If the variable is renamed as previously suggested then the assignment\n> data->sent_BEGIN_txn = true; can be assigned in just 1 common place\n> INSIDE the pgoutput_begin function.\n>\n> ------\n>\n\nFixed.\n\n>\n> 14. Test Code.\n>\n> I noticed that there is no test code specifically for seeing if empty\n> transactions get sent or not.
Is it possible to write such a test or\n> is this traffic improvement only observable using the debugger?\n>\n>\nThe 020_messages.pl actually has a test case for tracking empty messages\neven though it is part of the messages test.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 23 Apr 2021 15:46:06 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "An earlier comment from Andres:\n> We could e.g. have a new LogicalDecodingContext callback that is\n> called whenever WalSndWaitForWal() would wait. That'd check if there's\n> a pending \"need\" to send out a 'empty transaction'/feedback request\n> message. The \"need\" flag would get cleared whenever we send out data\n> bearing an LSN for other reasons.\n>\n\nI think the current Keep Alive messages already achieve this by\nsending the current LSN as part of the Keep Alive messages.\n /* construct the message... */\n resetStringInfo(&output_message);\n pq_sendbyte(&output_message, 'k');\n pq_sendint64(&output_message, sentPtr); <=== Last sent WAL LSN\n pq_sendint64(&output_message, GetCurrentTimestamp());\n pq_sendbyte(&output_message, requestReply ? 1 : 0);\n\nI'm not sure if anything more is required to keep empty transactions\nupdated as part of synchronous replicas. If my understanding on this\nis not correct, let me know.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 23 Apr 2021 15:57:34 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Apr 23, 2021 at 3:46 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n>\n>\n> On Mon, Apr 19, 2021 at 6:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>>\n>> Here are some review comments:\n>>\n>> ------\n>>\n>> 1.
The patch v3 applied OK but with whitespace warnings\n>>\n>> [postgres@CentOS7-x64 oss_postgres_2PC]$ git apply\n>> ../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch\n>> ../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch:98:\n>> indent with spaces.\n>> /* output BEGIN if we haven't yet, avoid for streaming and\n>> non-transactional messages */\n>> ../patches_misc/v3-0001-Skip-empty-transactions-for-logical-replication.patch:99:\n>> indent with spaces.\n>> if (!data->xact_wrote_changes && !in_streaming && transactional)\n>> warning: 2 lines add whitespace errors.\n>>\n>> ------\n>\n>\n> Fixed.\n>\n>>\n>>\n>> 2. Please create a CF entry in [1] for this patch.\n>>\n>> ------\n>>\n>> 3. Patch comment\n>>\n>> The comment describes the problem and then suddenly just says\n>> \"Postpone the BEGIN message until the first change.\"\n>>\n>> I suggest changing it to say more like... \"(blank line) This patch\n>> addresses the above problem by postponing the BEGIN message until the\n>> first change.\"\n>>\n>> ------\n>>\n>\n> Updated.\n>\n>>\n>> 4. pgoutput.h\n>>\n>> Maybe for consistency with the context member, the comment for the new\n>> member should be to the right instead of above it?\n>>\n>> @@ -20,6 +20,9 @@ typedef struct PGOutputData\n>> MemoryContext context; /* private memory context for transient\n>> * allocations */\n>>\n>> + /* flag indicating whether messages have previously been sent */\n>> + bool xact_wrote_changes;\n>> +\n>>\n>> ------\n>>\n>> 5. pgoutput.h\n>>\n>> + /* flag indicating whether messages have previously been sent */\n>>\n>> \"previously been sent\" --> \"already been sent\" ??\n>>\n>> ------\n>>\n>> 6. pgoutput.h - misleading member name\n>>\n>> Actually, now that I have read all the rest of the code and how this\n>> member is used I feel that this name is very misleading. e.g.
For\n>> \"streaming\" case then you still are writing changes but are not\n>> setting this member at all - therefore it does not always mean what it\n>> says.\n>>\n>> I feel a better name for this would be something like\n>> \"sent_begin_txn\". Then if you have sent BEGIN it is true. If you\n>> haven't sent BEGIN it is false. It eliminates all ambiguity naming it\n>> this way instead.\n>>\n>> (This makes my feedback #5 redundant because the comment will be a bit\n>> different if you do this).\n>>\n>> ------\n>\n>\n> Fixed above comments.\n>>\n>>\n>> 7. pgoutput.c - function pgoutput_begin_txn\n>>\n>> @@ -345,6 +345,23 @@ pgoutput_startup(LogicalDecodingContext *ctx,\n>> OutputPluginOptions *opt,\n>> static void\n>> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n>> {\n>>\n>> I guess that you still needed to pass the txn because that is how the\n>> API is documented, right?\n>>\n>> But I am wondering if you ought to flag it as unused so you wont get\n>> some BF machine giving warnings about it.\n>>\n>> e.g. Syntax like this?\n>>\n>> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN * txn) {\n>> (void)txn;\n>> ...\n>\n>\n> Updated.\n>>\n>> ------\n>>\n>> 8. pgoutput.c - function pgoutput_begin_txn\n>>\n>> @@ -345,6 +345,23 @@ pgoutput_startup(LogicalDecodingContext *ctx,\n>> OutputPluginOptions *opt,\n>> static void\n>> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n>> {\n>> + PGOutputData *data = ctx->output_plugin_private;\n>> +\n>> + /*\n>> + * Don't send BEGIN message here. Instead, postpone it until the first\n>> + * change. In logical replication, a common scenario is to replicate a set\n>> + * of tables (instead of all tables) and transactions whose changes were on\n>> + * table(s) that are not published will produce empty transactions. 
These\n>> + * empty transactions will send BEGIN and COMMIT messages to subscribers,\n>> + * using bandwidth on something with little/no use for logical replication.\n>> + */\n>> + data->xact_wrote_changes = false;\n>> + elog(LOG,\"Holding of begin\");\n>> +}\n>>\n>> Why is this loglevel LOG? Looks like leftover debugging.\n>\n>\n> Removed.\n>>\n>>\n>> ------\n>>\n>> 9. pgoutput.c - function pgoutput_commit_txn\n>>\n>> @@ -384,8 +401,14 @@ static void\n>> pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n>> XLogRecPtr commit_lsn)\n>> {\n>> + PGOutputData *data = ctx->output_plugin_private;\n>> +\n>> OutputPluginUpdateProgress(ctx);\n>>\n>> + /* skip COMMIT message if nothing was sent */\n>> + if (!data->xact_wrote_changes)\n>> + return;\n>> +\n>>\n>> In the case where you decided to do nothing does it make sense that\n>> you still called the function OutputPluginUpdateProgress(ctx); ?\n>> I thought perhaps that your new check should come first so this call\n>> would never happen.\n>\n>\n> Even though the empty transaction is not sent, the LSN is tracked as decoded, hence the progress needs to be updated.\n>\n>>\n>> ------\n>>\n>> 10. pgoutput.c - variable declarations without casts\n>>\n>> + PGOutputData *data = ctx->output_plugin_private;\n>>\n>> I noticed the new stack variable you declare have no casts.\n>>\n>> This differs from the existing code which always looks like:\n>> PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n>>\n>> There are a couple of examples of this so please search new code to\n>> find them all.\n>>\n>> -----\n>\n>\n> Fixed.\n>\n>>\n>> 11.
pgoutput.c - function pgoutput_change\n>>\n>> @@ -551,6 +574,13 @@ pgoutput_change(LogicalDecodingContext *ctx,\n>> ReorderBufferTXN *txn,\n>> Assert(false);\n>> }\n>>\n>> + /* output BEGIN if we haven't yet */\n>> + if (!data->xact_wrote_changes && !in_streaming)\n>> + {\n>> + pgoutput_begin(ctx, txn);\n>> + data->xact_wrote_changes = true;\n>> + }\n>>\n>> If the variable is renamed as previously suggested then the assignment\n>> data->sent_BEGIN_txn = true; can be assigned in just 1 common place\n>> INSIDE the pgoutput_begin function.\n>>\n>> ------\n>\n>\n> Updated.\n>>\n>>\n>> 12. pgoutput.c - pgoutput_truncate function\n>>\n>> @@ -693,6 +723,13 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\n>> ReorderBufferTXN *txn,\n>>\n>> if (nrelids > 0)\n>> {\n>> + /* output BEGIN if we haven't yet */\n>> + if (!data->xact_wrote_changes && !in_streaming)\n>> + {\n>> + pgoutput_begin(ctx, txn);\n>> + data->xact_wrote_changes = true;\n>> + }\n>>\n>> (same comment as above)\n>>\n>> If the variable is renamed as previously suggested then the assignment\n>> data->sent_BEGIN_txn = true; can be assigned in just 1 common place\n>> INSIDE the pgoutput_begin function.\n>>\n>> 13. pgoutput.c - pgoutput_message\n>>\n>> @@ -725,6 +762,13 @@ pgoutput_message(LogicalDecodingContext *ctx,\n>> ReorderBufferTXN *txn,\n>> if (in_streaming)\n>> xid = txn->xid;\n>>\n>> + /* output BEGIN if we haven't yet, avoid for streaming and\n>> non-transactional messages */\n>> + if (!data->xact_wrote_changes && !in_streaming && transactional)\n>> + {\n>> + pgoutput_begin(ctx, txn);\n>> + data->xact_wrote_changes = true;\n>> + }\n>>\n>> (same comment as above)\n>>\n>> If the variable is renamed as previously suggested then the assignment\n>> data->sent_BEGIN_txn = true; can be assigned in just 1 common place\n>> INSIDE the pgoutput_begin function.\n>>\n>> ------\n>\n>\n> Fixed.\n>>\n>>\n>> 14.
Test Code.\n>>\n>> I noticed that there is no test code specifically for seeing if empty\n>> transactions get sent or not. Is it possible to write such a test or\n>> is this traffic improvement only observable using the debugger?\n>>\n>\n> The 020_messages.pl actually has a test case for tracking empty messages even though it is part of the messages test.\n>\n> regards,\n> Ajin Cherian\n> Fujitsu Australia\n\nThanks for addressing my v3 review comments above.\n\nI tested the latest v4.\n\nThe v4 patch applied cleanly.\n\nmake check-world completed successfully.\n\nSo this patch v4 looks LGTM, apart from the following 2 nitpick comments:\n\n======\n\n1. Suggest to add a blank line after the (void)txn; ?\n\n@@ -345,10 +345,29 @@ pgoutput_startup(LogicalDecodingContext *ctx,\nOutputPluginOptions *opt,\n static void\n pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n+ PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n+\n+ (void)txn; /* keep compiler quiet */\n+ /*\n+ * Don't send BEGIN message here. Instead, postpone it until the first\n\n\n======\n\n2.
Unnecessary statement blocks?\n\nAFAIK those { } are not the usual PG code-style when there is only one\nstatement, so suggest to remove them.\n\nApplies to 3 places:\n\n@@ -551,6 +576,12 @@ pgoutput_change(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n Assert(false);\n }\n\n+ /* output BEGIN if we haven't yet */\n+ if (!data->sent_begin_txn && !in_streaming)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ }\n\n@@ -693,6 +724,12 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n\n if (nrelids > 0)\n {\n+ /* output BEGIN if we haven't yet */\n+ if (!data->sent_begin_txn && !in_streaming)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ }\n\n@@ -725,6 +762,12 @@ pgoutput_message(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n if (in_streaming)\n xid = txn->xid;\n\n+ /* output BEGIN if we haven't yet, avoid for streaming and\nnon-transactional messages */\n+ if (!data->sent_begin_txn && !in_streaming && transactional)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ }\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 26 Apr 2021 16:28:54 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Apr 26, 2021 at 4:29 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> The v4 patch applied cleanly.\n>\n> make check-world completed successfully.\n>\n> So this patch v4 looks LGTM, apart from the following 2 nitpick comments:\n>\n> ======\n>\n> 1. Suggest to add a blank line after the (void)txn; ?\n>\n> @@ -345,10 +345,29 @@ pgoutput_startup(LogicalDecodingContext *ctx,\n> OutputPluginOptions *opt,\n> static void\n> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> + PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n> +\n> + (void)txn; /* keep compiler quiet */\n> + /*\n> + * Don't send BEGIN message here. Instead, postpone it until the first\n>\n>\n\nFixed.\n\n> ======\n>\n> 2.
Unnecessary statement blocks?\n>\n> AFAIK those { } are not the usual PG code-style when there is only one\n> statement, so suggest to remove them.\n>\n> Applies to 3 places:\n>\n> @@ -551,6 +576,12 @@ pgoutput_change(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> Assert(false);\n> }\n>\n> + /* output BEGIN if we haven't yet */\n> + if (!data->sent_begin_txn && !in_streaming)\n> + {\n> + pgoutput_begin(ctx, txn);\n> + }\n>\n> @@ -693,6 +724,12 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n>\n> if (nrelids > 0)\n> {\n> + /* output BEGIN if we haven't yet */\n> + if (!data->sent_begin_txn && !in_streaming)\n> + {\n> + pgoutput_begin(ctx, txn);\n> + }\n>\n> @@ -725,6 +762,12 @@ pgoutput_message(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> if (in_streaming)\n> xid = txn->xid;\n>\n> + /* output BEGIN if we haven't yet, avoid for streaming and\n> non-transactional messages */\n> + if (!data->sent_begin_txn && !in_streaming && transactional)\n> + {\n> + pgoutput_begin(ctx, txn);\n> + }\n>\n\nFixed.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 27 Apr 2021 13:49:58 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, Apr 27, 2021 at 1:49 PM Ajin Cherian <itsajin@gmail.com> wrote:\n\nRebased the patch as it was no longer applying.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 25 May 2021 23:06:28 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, May 25, 2021 at 6:36 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 1:49 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Rebased the patch as it was no longer applying.\n\nThanks for the updated patch, few comments:\n1) I'm not sure if we could add some tests for skip
empty\ntransactions, if possible add a few tests.\n\n2) We could add some debug level log messages for the transaction that\nwill be skipped.\n\n3) You could keep this variable below the other bool variables in the structure:\n+ bool sent_begin_txn; /* flag indicating whether begin\n+\n\n * has already been sent */\n+\n\n4) You can split the comments to multi-line as it exceeds 80 chars\n+ /* output BEGIN if we haven't yet, avoid for streaming and\nnon-transactional messages */\n+ if (!data->sent_begin_txn && !in_streaming && transactional)\n+ pgoutput_begin(ctx, txn);\n\nRegards,\nVignesh", "msg_date": "Thu, 27 May 2021 16:28:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, May 27, 2021 at 8:58 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> Thanks for the updated patch, few comments:\n> 1) I'm not sure if we could add some tests for skip empty\n> transactions, if possible add a few tests.\n>\nAdded a few tests for prepared transactions as well as the existing\ntest in 020_messages.pl also tests regular transactions.\n\n> 2) We could add some debug level log messages for the transaction that\n> will be skipped.\n\nAdded.\n\n>\n> 3) You could keep this variable below the other bool variables in the structure:\n> + bool sent_begin_txn; /* flag indicating whether begin\n> +\n\n * has already been sent */\n> +\n\nI've moved this variable around, so this comment no longer is valid.\n\n>\n> 4) You can split the comments to multi-line as it exceeds 80 chars\n> + /* output BEGIN if we haven't yet, avoid for streaming and\n> non-transactional messages */\n> + if (!data->sent_begin_txn && !in_streaming && transactional)\n> + pgoutput_begin(ctx, txn);\n\nDone.\n\nI've had to rebase the patch after a recent commit by Amit Kapila of\nsupporting two-phase commits in pub-sub [1].\nAlso I've modified the patch to also skip replicating empty
prepared\ntransactions. Do let me know if you have any comments.\n\nregards,\nAjin Cherian\nFujitsu Australia\n[1]- https://www.postgresql.org/message-id/CAHut+PueG6u3vwG8DU=JhJiWa2TwmZ=bDqPchZkBky7ykzA7MA@mail.gmail.com", "msg_date": "Wed, 14 Jul 2021 22:30:17 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wednesday, July 14, 2021 9:30 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> I've had to rebase the patch after a recent commit by Amit Kapila of supporting\r\n> two-phase commits in pub-sub [1].\r\n> Also I've modified the patch to also skip replicating empty prepared\r\n> transactions. Do let me know if you have any comments.\r\nHi\r\n\r\nI started to test this patch but will give you some really minor quick feedback.\r\n\r\n(1) pg_logical_slot_get_binary_changes() params.\r\n\r\nTechnically, looks better to have proto_version 3 & two_phase option for the function\r\nto test empty prepare ?
I felt proto_version 1 doesn't support 2PC.\r\n[1] says \"The following messages (Begin Prepare, Prepare, Commit Prepared, Rollback Prepared)\r\nare available since protocol version 3.\" Then, if the test wants to skip empty *prepares*,\r\nI suggest to update the proto_version and set two_phase 'on'.\r\n\r\n+##############################\r\n+# Test empty prepares\r\n+##############################\r\n...\r\n+# peek at the contents of the slot\r\n+$result = $node_publisher->safe_psql(\r\n+ 'postgres', qq(\r\n+ SELECT get_byte(data, 0)\r\n+ FROM pg_logical_slot_get_binary_changes('tap_sub', NULL, NULL,\r\n+ 'proto_version', '1',\r\n+ 'publication_names', 'tap_pub')\r\n+));\r\n\r\n(2) The following sentences may start with a lowercase letter.\r\nThere are other similar codes for this.\r\n\r\n+ elog(DEBUG1, \"Skipping replication of an empty transaction\");\r\n\r\n[1] - https://www.postgresql.org/docs/devel/protocol-logicalrep-message-formats.html\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Thu, 15 Jul 2021 05:50:45 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "Hi Ajin,\n\nI have reviewed the v7 patch and given my feedback comments below.\n\nApply OK\nBuild OK\nmake check OK\nTAP (subscriptions) make check OK\nBuild PG Docs (html) OK\n\nAlthough I made lots of review comments below, the important point is\nthat none of them are functional - they are only minor re-wordings\nand some code refactoring that I thought would make the code simpler\nand/or easier to read. YMMV, so please feel free to disagree with any\nof them.\n\n//////////\n\n1a.
Commit Comment - wording\n\nBEFORE\nThis patch addresses the above problem by postponing the BEGIN / BEGIN\nPREPARE message until the first change.\n\nAFTER\nThis patch addresses the above problem by postponing the BEGIN / BEGIN\nPREPARE messages until the first change is encountered.\n\n------\n\n1b. Commit Comment - wording\n\nBEFORE\nWhile processing a COMMIT message or a PREPARE message, if there is no\nother change for that transaction, do not send COMMIT message or\nPREPARE message.\n\nAFTER\nIf (when processing a COMMIT / PREPARE message) we find there had been\nno other change for that transaction, then do not send the COMMIT /\nPREPARE message.\n\n------\n\n2. doc/src/sgml/logicaldecoding.sgml - wording\n\n@@ -884,11 +884,19 @@ typedef void (*LogicalDecodePrepareCB) (struct\nLogicalDecodingContext *ctx,\n The required <function>commit_prepared_cb</function> callback is called\n whenever a transaction <command>COMMIT PREPARED</command> has\nbeen decoded.\n The <parameter>gid</parameter> field, which is part of the\n- <parameter>txn</parameter> parameter, can be used in this callback.\n+ <parameter>txn</parameter> parameter, can be used in this callback. The\n+ parameters <parameter>prepare_end_lsn</parameter> and\n+ <parameter>prepare_time</parameter> can be used to check if the plugin\n+ has received this <command>PREPARE TRANSACTION</command> in which case\n+ it can commit the transaction, otherwise, it can skip the commit. The\n+ <parameter>gid</parameter> alone is not sufficient because the downstream\n+ node can have a prepared transaction with the same identifier.\n\n=>\n\n(some minor rewording of the last part)\n\nAFTER:\n\nThe parameters <parameter>prepare_end_lsn</parameter> and\n<parameter>prepare_time</parameter> can be used to check if the plugin\nhas received this <command>PREPARE TRANSACTION</command> or not. 
If\nyes, it can commit the transaction, otherwise, it can skip the commit.\nThe <parameter>gid</parameter> alone is not sufficient to determine\nthis because the downstream node may already have a prepared\ntransaction with the same identifier.\n\n\n------\n\n3. src/backend/replication/logical/proto.c - whitespace\n\n@@ -244,12 +248,16 @@ logicalrep_read_commit_prepared(StringInfo in,\nLogicalRepCommitPreparedTxnData *\n elog(ERROR, \"unrecognized flags %u in commit prepared message\", flags);\n\n /* read fields */\n+ prepare_data->prepare_end_lsn = pq_getmsgint64(in);\n+ if (prepare_data->prepare_end_lsn == InvalidXLogRecPtr)\n+ elog(ERROR,\"prepare_end_lsn is not set in commit prepared message\");\n\n=>\n\nThere is a missing space before the 2nd elog param.\n\n------\n\n4. src/backend/replication/logical/worker.c - comment typos\n\n /*\n- * Update origin state so we can restart streaming from correct position\n- * in case of crash.\n+ * It is possible that we haven't received the prepare because\n+ * the transaction did not have any changes relevant to this\n+ * subscription and was essentially an empty prepare. In which case,\n+ * the walsender is optimized to drop the empty transaction and the\n+ * accompanying prepare. Silently ignore if we don't find the prepared\n+ * transaction.\n */\n\n4a. =>\n\n\"and was essentially an empty prepare\" --> \"so was essentially an empty prepare\"\n\n4b. =>\n\n\"In which case\" --> \"In this case\"\n\n------\n\n5. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_txn\n\n@@ -410,10 +417,32 @@ pgoutput_startup(LogicalDecodingContext *ctx,\nOutputPluginOptions *opt,\n static void\n pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n+ PGOutputTxnData *data = MemoryContextAllocZero(ctx->context,\n+ sizeof(PGOutputTxnData));\n+\n+ /*\n+ * Don't send BEGIN message here. Instead, postpone it until the first\n+ * change. 
In logical replication, a common scenario is to replicate a set\n+ * of tables (instead of all tables) and transactions whose changes were on\n+ * table(s) that are not published will produce empty transactions. These\n+ * empty transactions will send BEGIN and COMMIT messages to subscribers,\n+ * using bandwidth on something with little/no use for logical replication.\n+ */\n+ data->sent_begin_txn = false;\n+ txn->output_plugin_private = data;\n+}\n\n=>\n\nI felt that since this message postponement is now the new behaviour\nof this function then probably this should all be a function level\ncomment instead of the comment being in the body of the function\n\n------\n\n6. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin\n\n+\n+static void\n+pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n\n=>\n\nEven though it is kind of obvious, it is probably better to provide a\nfunction comment here too\n\n------\n\n7. src/backend/replication/pgoutput/pgoutput.c - pgoutput_commit_txn\n\n@@ -428,8 +457,22 @@ static void\n pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n XLogRecPtr commit_lsn)\n {\n+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;\n+ bool skip;\n+\n+ Assert(data);\n+ skip = !data->sent_begin_txn;\n+ pfree(data);\n+ txn->output_plugin_private = NULL;\n OutputPluginUpdateProgress(ctx);\n\n+ /* skip COMMIT message if nothing was sent */\n+ if (skip)\n+ {\n+ elog(DEBUG1, \"Skipping replication of an empty transaction\");\n+ return;\n+ }\n+\n\n7a. =>\n\nI felt that the comment \"skip COMMIT message if nothing was sent\"\nshould be done at the point where you *decide* to skip or not. So you\ncould either move that comment to where the skip variable is assigned.\nOr (my preference) leave the comment where it is but change the\nvariable name to be sent_begin = !data->sent_begin_txn;\n\n------\n\nRegardless I think the comment should be elaborated a bit to describe\nthe reason more.\n\n7b. 
=>\n\nBEFORE\n/* skip COMMIT message if nothing was sent */\n\nAFTER\n/* If a BEGIN message was not yet sent, then it means there were no\nrelevant changes encountered, so we can skip the COMMIT message too.\n*/\n\n------\n\n8. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_prepare_txn\n\n@@ -441,10 +484,28 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n static void\n pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n+ /*\n+ * Don't send BEGIN PREPARE message here. Instead, postpone it until the first\n+ * change. In logical replication, a common scenario is to replicate a set\n+ * of tables (instead of all tables) and transactions whose changes were on\n+ * table(s) that are not published will produce empty transactions. These\n+ * empty transactions will send BEGIN PREPARE and COMMIT PREPARED messages\n+ * to subscribers, using bandwidth on something with little/no use\n+ * for logical replication.\n+ */\n+ pgoutput_begin_txn(ctx, txn);\n+}\n\n8a. =>\n\nLike previously, I felt that this big comment should be at the\nfunction level of pgoutput_begin_prepare_txn instead of in the body of\nthe function.\n\n------\n\n8b. =>\n\nAnd then the body comment would be something simple like:\n\n/* Delegate to assign the begin sent flag as false same as for the\nBEGIN message. */\npgoutput_begin_txn(ctx, txn);\n\n------\n\n9. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_prepare\n\n+\n+static void\n+pgoutput_begin_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n\n=>\n\nProbably this needs a function comment.\n\n------\n\n10. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_prepare_txn\n\n@@ -459,8 +520,18 @@ static void\n pgoutput_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n XLogRecPtr prepare_lsn)\n {\n+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;\n+\n+ Assert(data);\n OutputPluginUpdateProgress(ctx);\n\n+ /* skip PREPARE message if nothing was sent */\n+ if (!data->sent_begin_txn)\n\n=>\n\nMaybe elaborate on that \"skip PREPARE message if nothing was sent\"\ncomment in a way similar to my review comment 7b. For example,\n\nAFTER\n/* If the BEGIN was not yet sent, then it means there were no relevant\nchanges encountered, so we can skip the PREPARE message too. */\n\n------\n\n11. src/backend/replication/pgoutput/pgoutput.c - pgoutput_commit_prepared_txn\n\n@@ -471,12 +542,33 @@ pgoutput_prepare_txn(LogicalDecodingContext\n*ctx, ReorderBufferTXN *txn,\n */\n static void\n pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n- XLogRecPtr commit_lsn)\n+ XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn,\n+ TimestampTz prepare_time)\n {\n+ PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;\n+\n OutputPluginUpdateProgress(ctx);\n\n+ /*\n+ * skip sending COMMIT PREPARED message if prepared transaction\n+ * has not been sent.\n+ */\n+ if (data)\n\n=>\n\nSimilar to previous review comment 10, I think the reason for the skip\nshould be elaborated a little bit. For example,\n\nAFTER\n/* If the BEGIN PREPARE was not yet sent, then it means there were no\nrelevant changes encountered, so we can skip the COMMIT PREPARED\nmessage too. */\n\n------\n\n12. src/backend/replication/pgoutput/pgoutput.c - pgoutput_rollback_prepared_txn\n\n=> Similar to pgoutput_commit_prepared_txn (see review comment 11)\n\n------\n\n13. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_change\n\n@@ -639,11 +749,16 @@ pgoutput_change(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n Relation relation, ReorderBufferChange *change)\n {\n PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n MemoryContext old;\n RelationSyncEntry *relentry;\n TransactionId xid = InvalidTransactionId;\n Relation ancestor = NULL;\n\n+ /* If not streaming, should have setup txndata as part of\nBEGIN/BEGIN PREPARE */\n+ if (!in_streaming)\n+ Assert(txndata);\n+\n if (!is_publishable_relation(relation))\n return;\n\n13a. =>\n\nI felt the streaming logic with the txndata is a bit confusing. I\nthink it would be easier to have another local variable (sent_begin)\nand use it like this...\n\nbool sent_begin;\nif (in_streaming)\n{\n sent_begin = true;\n}\nelse\n{\n PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n Assert(txndata);\n sent_begin = txndata->sent_begin_txn;\n}\n\n...\n\n------\n\n+ /* output BEGIN if we haven't yet */\n\n13b. =>\n\nI thought the comment is not quite right\n\nAFTER\n/* Output BEGIN / BEGIN PREPARE if we haven't yet */\n\n------\n\n+ if (!in_streaming && !txndata->sent_begin_txn)\n+ {\n+ if (rbtxn_prepared(txn))\n+ pgoutput_begin_prepare(ctx, txn);\n+ else\n+ pgoutput_begin(ctx, txn);\n+ }\n+\n\n13c. =>\n\nIf you introduce the variable (as suggested in 13a) this code becomes\nmuch simpler:\n\nAFTER\n\nif (!sent_begin)\n{\n if (rbtxn_prepared(txn))\n pgoutput_begin_prepare(ctx, txn);\n else\n pgoutput_begin(ctx, txn);\n}\n\n\n------\n\n14. src/backend/replication/pgoutput/pgoutput.c - pgoutput_truncate\n\n=>\n\nAll the similar review comments made for pgoutput_change (13a, 13b, 13c)\napply to pgoutput_truncate here also.\n\n------\n\n15. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_message\n\n@@ -842,6 +980,7 @@ pgoutput_message(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n const char *message)\n {\n PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n+ PGOutputTxnData *txndata;\n TransactionId xid = InvalidTransactionId;\n\n\n=>\n\nThis variable should be declared in the block where it is used,\nsimilar to the suggestion 13a.\n\nAlso is it just an accidental omission that you did Assert(txndata)\nfor all the other places but not here?\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 19 Jul 2021 15:24:39 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Jul 19, 2021 at 3:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> 1a. Commit Comment - wording\n>\nupdated.\n>\n> 1b. Commit Comment - wording\n>\nupdated.\n\n> 2. doc/src/sgml/logicaldecoding.sgml - wording\n>\n> @@ -884,11 +884,19 @@ typedef void (*LogicalDecodePrepareCB) (struct\n> LogicalDecodingContext *ctx,\n> The required <function>commit_prepared_cb</function> callback is called\n> whenever a transaction <command>COMMIT PREPARED</command> has\n> been decoded.\n> The <parameter>gid</parameter> field, which is part of the\n> - <parameter>txn</parameter> parameter, can be used in this callback.\n> + <parameter>txn</parameter> parameter, can be used in this callback. The\n> + parameters <parameter>prepare_end_lsn</parameter> and\n> + <parameter>prepare_time</parameter> can be used to check if the plugin\n> + has received this <command>PREPARE TRANSACTION</command> in which case\n> + it can commit the transaction, otherwise, it can skip the commit. 
The\n> + <parameter>gid</parameter> alone is not sufficient because the downstream\n> + node can have a prepared transaction with the same identifier.\n>\n> =>\n>\n> (some minor rewording of the last part)\n\nupdated.\n\n>\n> 3. src/backend/replication/logical/proto.c - whitespace\n>\n> @@ -244,12 +248,16 @@ logicalrep_read_commit_prepared(StringInfo in,\n> LogicalRepCommitPreparedTxnData *\n> elog(ERROR, \"unrecognized flags %u in commit prepared message\", flags);\n>\n> /* read fields */\n> + prepare_data->prepare_end_lsn = pq_getmsgint64(in);\n> + if (prepare_data->prepare_end_lsn == InvalidXLogRecPtr)\n> + elog(ERROR,\"prepare_end_lsn is not set in commit prepared message\");\n>\n> =>\n>\n> There is missing space before the 2nd elog param.\n>\n\nfixed.\n\n>\n> 4a. =>\n>\n> \"and was essentially an empty prepare\" --> \"so was essentially an empty prepare\"\n>\n> 4b. =>\n>\n> \"In which case\" --> \"In this case\"\n>\n> ------\n\nfixed.\n\n> I felt that since this message postponement is now the new behaviour\n> of this function then probably this should all be a function level\n> comment instead of the comment being in the body of the function\n>\n> ------\n>\n> 6. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin\n>\n> +\n> +static void\n> +pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n>\n> =>\n>\n> Even though it is kind of obvious, it is probably better to provide a\n> function comment here too\n>\n> ------\n\nChanged accordingly.\n\n>\n\n> I felt that the comment \"skip COMMIT message if nothing was sent\"\n> should be done at the point where you *decide* to skip or not. 
So you\n> could either move that comment to where the skip variable is assigned.\n> Or (my preference) leave the comment where it is but change the\n> variable name to be sent_begin = !data->sent_begin_txn;\n>\n\nUpdated the comment to where the skip variable is assigned.\n\n\n> ------\n>\n> Regardless I think the comment should be elaborated a bit to describe\n> the reason more.\n>\n> 7b. =>\n>\n> BEFORE\n> /* skip COMMIT message if nothing was sent */\n>\n> AFTER\n> /* If a BEGIN message was not yet sent, then it means there were no\n> relevant changes encountered, so we can skip the COMMIT message too.\n> */\n>\n\nUpdated accordingly.\n\n\n> ------\n\n> Like previously, I felt that this big comment should be at the\n> function level of pgoutput_begin_prepare_txn instead of in the body of\n> the function.\n>\n> ------\n>\n> 8b. =>\n>\n> And then the body comment would be something simple like:\n>\n> /* Delegate to assign the begin sent flag as false same as for the\n> BEGIN message. */\n> pgoutput_begin_txn(ctx, txn);\n>\n\nUpdated accordingly.\n\n> ------\n>\n> 9. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_prepare\n>\n> +\n> +static void\n> +pgoutput_begin_prepare(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n>\n> =>\n>\n> Probably this needs a function comment.\n>\n\nUpdated.\n\n> ------\n>\n> 10. src/backend/replication/pgoutput/pgoutput.c - pgoutput_prepare_txn\n>\n> @@ -459,8 +520,18 @@ static void\n> pgoutput_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> XLogRecPtr prepare_lsn)\n> {\n> + PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> + Assert(data);\n> OutputPluginUpdateProgress(ctx);\n>\n> + /* skip PREPARE message if nothing was sent */\n> + if (!data->sent_begin_txn)\n>\n> =>\n>\n> Maybe elaborate on that \"skip PREPARE message if nothing was sent\"\n> comment in a way similar to my review comment 7b. 
For example,\n>\n> AFTER\n> /* If the BEGIN was not yet sent, then it means there were no relevant\n> changes encountered, so we can skip the PREPARE message too. */\n>\n\nUpdated.\n\n> ------\n>\n> 11. src/backend/replication/pgoutput/pgoutput.c - pgoutput_commit_prepared_txn\n>\n> @@ -471,12 +542,33 @@ pgoutput_prepare_txn(LogicalDecodingContext\n> *ctx, ReorderBufferTXN *txn,\n> */\n> static void\n> pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> - XLogRecPtr commit_lsn)\n> + XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn,\n> + TimestampTz prepare_time)\n> {\n> + PGOutputTxnData *data = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> OutputPluginUpdateProgress(ctx);\n>\n> + /*\n> + * skip sending COMMIT PREPARED message if prepared transaction\n> + * has not been sent.\n> + */\n> + if (data)\n>\n> =>\n>\n> Similar to previous review comment 10, I think the reason for the skip\n> should be elaborated a little bit. For example,\n>\n> AFTER\n> /* If the BEGIN PREPARE was not yet sent, then it means there were no\n> relevant changes encountered, so we can skip the COMMIT PREPARED\n> message too. */\n>\n> ------\n\nUpdated accordingly.\n\n>\n> 12. src/backend/replication/pgoutput/pgoutput.c - pgoutput_rollback_prepared_txn\n>\n> => Similar as for pgoutput_comment_prepared_txn (see review comment 11)\n>\n> ------\n\nUpdated,\n\n>\n> 13. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_change\n>\n> @@ -639,11 +749,16 @@ pgoutput_change(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> Relation relation, ReorderBufferChange *change)\n> {\n> PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> MemoryContext old;\n> RelationSyncEntry *relentry;\n> TransactionId xid = InvalidTransactionId;\n> Relation ancestor = NULL;\n>\n> + /* If not streaming, should have setup txndata as part of\n> BEGIN/BEGIN PREPARE */\n> + if (!in_streaming)\n> + Assert(txndata);\n> +\n> if (!is_publishable_relation(relation))\n> return;\n>\n> 13a. =>\n>\n> I felt the streaming logic with the txndata is a bit confusing. I\n> think it would be easier to have another local variable (sent_begin)\n> and use it like this...\n>\n> bool sent_begin;\n> if (in_streaming)\n> {\n> sent_begin = true;\n> }\n> else\n> {\n> PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> Assert(txndata);\n> sent_begin = txndata->sent_begin_txn;\n> }\n>\n\nI did not make the change because, in the case of streaming, \"sent_begin\"\nis not actually true, so it seemed incorrect to code it\nthat way. Instead, I have modified the comment to mention that\nstreaming transactions do not send BEGIN / BEGIN PREPARE.\n\n> ...\n>\n> ------\n>\n> + /* output BEGIN if we haven't yet */\n>\n> 13b. =>\n>\n> I thought the comment is not quite right\n>\n> AFTER\n> /* Output BEGIN / BEGIN PREPARE if we haven't yet */\n>\n> ------\n\nUpdated.\n\n>\n> + if (!in_streaming && !txndata->sent_begin_txn)\n> + {\n> + if (rbtxn_prepared(txn))\n> + pgoutput_begin_prepare(ctx, txn);\n> + else\n> + pgoutput_begin(ctx, txn);\n> + }\n> +\n>\n> 13c. =>\n>\n> If you introduce the variable (as suggested in 13a) this code becomes\n> much simpler:\n>\n\nSkipped this. (reason mentioned above)\n\n> ------\n>\n> 14. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_truncate\n>\n> =>\n>\n> All the similar review comments made for pg_change (13a, 13b, 13c)\n> apply to pgoutput_truncate here also.\n>\n> ------\n\nUpdated.\n\n>\n> 15. src/backend/replication/pgoutput/pgoutput.c - pgoutput_message\n>\n> @@ -842,6 +980,7 @@ pgoutput_message(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> const char *message)\n> {\n> PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n> + PGOutputTxnData *txndata;\n> TransactionId xid = InvalidTransactionId;\n>\n>\n> =>\n>\n> This variable should be declared in the block where it is used,\n> similar to the suggestion 13a.\n>\n> Also is it just an accidental omission that you did Assert(txndata)\n> for all the other places but not here?\n>\n\nMoved location of the variable and added an assert.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Wed, 21 Jul 2021 20:58:45 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Jul 15, 2021 at 3:50 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> I started to test this patch but will give you some really minor quick feedbacks.\n>\n> (1) pg_logical_slot_get_binary_changes() params.\n>\n> Technically, looks better to have proto_version 3 & two_phase option for the function\n> to test empty prepare ? 
I felt proto_version 1 doesn't support 2PC.\n> [1] says \"The following messages (Begin Prepare, Prepare, Commit Prepared, Rollback Prepared)\n> are available since protocol version 3.\" Then, if the test wants to skip empty *prepares*,\n> I suggest to update the proto_version and set two_phase 'on'.\n\nUpdated accordingly.\n\n> (2) The following sentences may start with a lowercase letter.\n> There are other similar codes for this.\n>\n> + elog(DEBUG1, \"Skipping replication of an empty transaction\");\n\nFixed this.\n\nI've addressed these comments in version 8 of the patch.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Jul 2021 21:00:14 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "Hi Ajin.\n\nI have reviewed the v8 patch and my feedback comments are below:\n\n//////////\n\n1. Apply v8 gave multiple whitespace warnings.\n\n------\n\n2. Commit comment - wording\n\nIf (when processing a COMMIT / PREPARE message) we find there had been\nno other change for that transaction, then do not send the COMMIT /\nPREPARE message. This means that pgoutput will skip BEGIN / COMMIT\nor BEGIN PREPARE / PREPARE messages for transactions that are empty.\n\n=>\n\nShouldn't this also mention some other messages that may be skipped?\n- COMMIT PREPARED\n- ROLLBACK PREPARED\n\n------\n\n3. doc/src/sgml/logicaldecoding.sgml - wording\n\n@@ -884,11 +884,20 @@ typedef void (*LogicalDecodePrepareCB) (struct\nLogicalDecodingContext *ctx,\n The required <function>commit_prepared_cb</function> callback is called\n whenever a transaction <command>COMMIT PREPARED</command> has\nbeen decoded.\n The <parameter>gid</parameter> field, which is part of the\n- <parameter>txn</parameter> parameter, can be used in this callback.\n+ <parameter>txn</parameter> parameter, can be used in this callback. 
The\n+ parameters <parameter>prepare_end_lsn</parameter> and\n+ <parameter>prepare_time</parameter> can be used to check if the plugin\n+ has received this <command>PREPARE TRANSACTION</command> command or not.\n+ If yes, it can commit the transaction, otherwise, it can skip the commit.\n+ The <parameter>gid</parameter> alone is not sufficient to determine this\n+ because the downstream may already have a prepared transaction with the\n+ same identifier.\n\n=>\n\nTypo: Should that say \"downstream node\" instead of just \"downstream\" ?\n\n------\n\n4. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_txn\ncallback comment\n\n@@ -406,14 +413,38 @@ pgoutput_startup(LogicalDecodingContext *ctx,\nOutputPluginOptions *opt,\n\n /*\n * BEGIN callback\n+ * Don't send BEGIN message here. Instead, postpone it until the first\n+ * change. In logical replication, a common scenario is to replicate a set\n+ * of tables (instead of all tables) and transactions whose changes were on\n\n=>\n\nTypo: \"BEGIN callback\" --> \"BEGIN callback.\" (with the period).\n\nAnd, I think maybe it will be better if it has a separating blank line too.\n\ne.g.\n\n/*\n * BEGIN callback.\n *\n * Don't send BEGIN ....\n\n(NOTE: this review comment applies to other callback function comments\ntoo, so please hunt them all down)\n\n------\n\n5. src/backend/replication/pgoutput/pgoutput.c - data / txndata\n\n static void\n pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n+ PGOutputTxnData *data = MemoryContextAllocZero(ctx->context,\n+ sizeof(PGOutputTxnData));\n\n=>\n\nThere is some inconsistent naming of the local variable in the patch.\nSometimes it is called \"data\"; Sometimes it is called \"txdata\" etc. It\nwould be better to just stick with the same variable name everywhere.\n\n(NOTE: this comment applies to several places in this patch)\n\n------\n\n6. 
src/backend/replication/pgoutput/pgoutput.c - Strange way to use Assert\n\n+ /* If not streaming, should have setup txndata as part of\nBEGIN/BEGIN PREPARE */\n+ if (!in_streaming)\n+ Assert(txndata);\n+\n\n=>\n\nThis style of Assert code seemed strange to me. In production mode\nisn't that going to evaluate to some condition with a ((void) true)\nbody? IMO it might be better to just include the streaming check as\npart of the Assert. For example:\n\nBEFORE\nif (!in_streaming)\nAssert(txndata);\n\nAFTER\nAssert(in_streaming || txndata);\n\n(NOTE: This same review comment applies in at least 3 places in this\npatch, so please hunt them all down)\n\n------\n\n7. src/backend/replication/pgoutput/pgoutput.c - comment wording\n\n@@ -677,6 +810,18 @@ pgoutput_change(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n Assert(false);\n }\n\n+ /*\n+ * output BEGIN / BEGIN PREPARE if we haven't yet,\n+ * while streaming no need to send BEGIN / BEGIN PREPARE.\n+ */\n+ if (!in_streaming && !txndata->sent_begin_txn)\n\n=>\n\nEnglish not really that comment is. The comment should also start with\nuppercase.\n\n(NOTE: This same comment was in a couple of places in the patch)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 22 Jul 2021 18:11:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Jul 22, 2021 at 6:11 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Ajin.\n>\n> I have reviewed the v8 patch and my feedback comments are below:\n>\n> //////////\n>\n> 1. Apply v8 gave multiple whitespace warnings.\n>\n> ------\n>\n> 2. Commit comment - wording\n>\n> If (when processing a COMMIT / PREPARE message) we find there had been\n> no other change for that transaction, then do not send the COMMIT /\n> PREPARE message. 
This means that pgoutput will skip BEGIN / COMMIT\n> or BEGIN PREPARE / PREPARE messages for transactions that are empty.\n>\n> =>\n>\n> Shouldn't this also mention some other messages that may be skipped?\n> - COMMIT PREPARED\n> - ROLLBACK PREPARED\n>\n\nUpdated.\n\n> ------\n>\n> 3. doc/src/sgml/logicaldecoding.sgml - wording\n>\n> @@ -884,11 +884,20 @@ typedef void (*LogicalDecodePrepareCB) (struct\n> LogicalDecodingContext *ctx,\n> The required <function>commit_prepared_cb</function> callback is called\n> whenever a transaction <command>COMMIT PREPARED</command> has\n> been decoded.\n> The <parameter>gid</parameter> field, which is part of the\n> - <parameter>txn</parameter> parameter, can be used in this callback.\n> + <parameter>txn</parameter> parameter, can be used in this callback. The\n> + parameters <parameter>prepare_end_lsn</parameter> and\n> + <parameter>prepare_time</parameter> can be used to check if the plugin\n> + has received this <command>PREPARE TRANSACTION</command> command or not.\n> + If yes, it can commit the transaction, otherwise, it can skip the commit.\n> + The <parameter>gid</parameter> alone is not sufficient to determine this\n> + because the downstream may already have a prepared transaction with the\n> + same identifier.\n>\n> =>\n>\n> Typo: Should that say \"downstream node\" instead of just \"downstream\" ?\n>\n> ------\n\nUpdated.\n\n>\n> 4. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_txn\n> callback comment\n>\n> @@ -406,14 +413,38 @@ pgoutput_startup(LogicalDecodingContext *ctx,\n> OutputPluginOptions *opt,\n>\n> /*\n> * BEGIN callback\n> + * Don't send BEGIN message here. Instead, postpone it until the first\n> + * change. 
In logical replication, a common scenario is to replicate a set\n> + * of tables (instead of all tables) and transactions whose changes were on\n>\n> =>\n>\n> Typo: \"BEGIN callback\" --> \"BEGIN callback.\" (with the period).\n>\n> And, I think maybe it will be better if it has a separating blank line too.\n>\n> e.g.\n>\n> /*\n> * BEGIN callback.\n> *\n> * Don't send BEGIN ....\n>\n> (NOTE: this review comment applies to other callback function comments\n> too, so please hunt them all down)\n>\n> ------\n\nUpdated.\n\n>\n> 5. src/backend/replication/pgoutput/pgoutput.c - data / txndata\n>\n> static void\n> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> + PGOutputTxnData *data = MemoryContextAllocZero(ctx->context,\n> + sizeof(PGOutputTxnData));\n>\n> =>\n>\n> There is some inconsistent naming of the local variable in the patch.\n> Sometimes it is called \"data\"; Sometimes it is called \"txdata\" etc. It\n> would be better to just stick with the same variable name everywhere.\n>\n> (NOTE: this comment applies to several places in this patch)\n>\n> ------\n\nI've changed all occurrences of the PGOutputTxnData variable to txndata. Note that\nthere is another structure PGOutputData which still uses the name\ndata.\n\n>\n> 6. src/backend/replication/pgoutput/pgoutput.c - Strange way to use Assert\n>\n> + /* If not streaming, should have setup txndata as part of\n> BEGIN/BEGIN PREPARE */\n> + if (!in_streaming)\n> + Assert(txndata);\n> +\n> =>\n>\n> This style of Assert code seemed strange to me. In production mode\n> isn't that going to evaluate to some condition with a ((void) true)\n> body? IMO it might be better to just include the streaming check as\n> part of the Assert. For example:\n>\n> BEFORE\n> if (!in_streaming)\n> Assert(txndata);\n>\n> AFTER\n> Assert(in_streaming || txndata);\n>\n> (NOTE: This same review comment applies in at least 3 places in this\n> patch, so please hunt them all down)\n>\n\nUpdated.\n\n> ------\n>\n> 7. 
src/backend/replication/pgoutput/pgoutput.c - comment wording\n>\n> @@ -677,6 +810,18 @@ pgoutput_change(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> Assert(false);\n> }\n>\n> + /*\n> + * output BEGIN / BEGIN PREPARE if we haven't yet,\n> + * while streaming no need to send BEGIN / BEGIN PREPARE.\n> + */\n> + if (!in_streaming && !txndata->sent_begin_txn)\n>\n> =>\n>\n> English not really that comment is. The comment should also start with\n> uppercase.\n>\n> (NOTE: This same comment was in couple of places in the patch)\n>\n\nUpdated.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Thu, 22 Jul 2021 23:36:39 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "I have reviewed the v9 patch and my feedback comments are below:\n\n//////////\n\n1. Apply v9 gave multiple whitespace warnings\n\n$ git apply v9-0001-Skip-empty-transactions-for-logical-replication.patch\nv9-0001-Skip-empty-transactions-for-logical-replication.patch:479:\nindent with spaces.\n * If the BEGIN PREPARE was not yet sent, then it means there were no\nv9-0001-Skip-empty-transactions-for-logical-replication.patch:480:\nindent with spaces.\n * relevant changes encountered, so we can skip the ROLLBACK PREPARED\nv9-0001-Skip-empty-transactions-for-logical-replication.patch:481:\nindent with spaces.\n * messsage too.\nv9-0001-Skip-empty-transactions-for-logical-replication.patch:482:\nindent with spaces.\n */\nwarning: 4 lines add whitespace errors.\n\n------\n\n2. Commit comment - wording\n\npgoutput will also skip COMMIT PREPARED and ROLLBACK PREPARED messages\nfor transactions which were skipped.\n\n=>\n\nIs that correct? Or did you mean to say:\n\nAFTER\npgoutput will also skip COMMIT PREPARED and ROLLBACK PREPARED messages\nfor transactions that are empty.\n\n------\n\n3. 
src/backend/replication/pgoutput/pgoutput.c - typo\n\n+ /*\n+ * If the BEGIN PREPARE was not yet sent, then it means there were no\n+ * relevant changes encountered, so we can skip the COMMIT PREPARED\n+ * messsage too.\n+ */\n\nTypo: \"messsage\" --> \"message\"\n\n(NOTE this same typo is in 2 places)\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 23 Jul 2021 10:13:00 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Jul 22, 2021 at 11:37 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n\nI have some minor comments on the v9 patch:\n\n(1) Several whitespace warnings on patch application\n\n(2) Suggested patch comment change:\n\nBEFORE:\nThe current logical replication behaviour is to send every transaction to\nsubscriber even though the transaction is empty (because it does not\nAFTER:\nThe current logical replication behaviour is to send every transaction to\nsubscriber even though the transaction might be empty (because it does not\n\n(3) Comment needed for added struct defn:\n\ntypedef struct PGOutputTxnData\n\n(4) Improve comment.\n\nCan you add a comma (or add words) in the below sentence, so we know\nhow to read it?\n\n+ /*\n+ * Delegate to assign the begin sent flag as false same as for the\n+ * BEGIN message.\n+ */\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 23 Jul 2021 10:26:16 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Jul 23, 2021 at 10:26 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Jul 22, 2021 at 11:37 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n>\n> I have some minor comments on the v9 patch:\n>\n> (1) Several whitespace warnings on patch application\n>\n\nFixed.\n\n> (2) Suggested patch comment change:\n>\n> 
BEFORE:\n> The current logical replication behaviour is to send every transaction to\n> subscriber even though the transaction is empty (because it does not\n> AFTER:\n> The current logical replication behaviour is to send every transaction to\n> subscriber even though the transaction might be empty (because it does not\n>\nChanged accordingly.\n\n> (3) Comment needed for added struct defn:\n>\n> typedef struct PGOutputTxnData\n>\n\nAdded.\n\n> (4) Improve comment.\n>\n> Can you add a comma (or add words) in the below sentence, so we know\n> how to read it?\n>\n\nUpdated.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 23 Jul 2021 15:41:07 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Jul 23, 2021 at 10:13 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I have reviewed the v9 patch and my feedback comments are below:\n>\n> //////////\n>\n> 1. Apply v9 gave multiple whitespace warnings\n\nFixed.\n\n>\n> ------\n>\n> 2. Commit comment - wording\n>\n> pgoutput will also skip COMMIT PREPARED and ROLLBACK PREPARED messages\n> for transactions which were skipped.\n>\n> =>\n>\n> Is that correct? Or did you mean to say:\n>\n> AFTER\n> pgoutput will also skip COMMIT PREPARED and ROLLBACK PREPARED messages\n> for transactions that are empty.\n>\n> ------\n\nUpdated.\n\n>\n> 3. 
src/backend/replication/pgoutput/pgoutput.c - typo\n>\n> + /*\n> + * If the BEGIN PREPARE was not yet sent, then it means there were no\n> + * relevant changes encountered, so we can skip the COMMIT PREPARED\n> + * messsage too.\n> + */\n>\n> Typo: \"messsage\" --> \"message\"\n>\n> (NOTE this same typo is in 2 places)\n>\nFixed.\n\nI have made these changes in v10 of the patch.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Fri, 23 Jul 2021 15:42:09 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "I have reviewed the v10 patch.\n\nApply / build / test was all OK.\n\nJust one review comment:\n\n//////////\n\n1. Typo\n\n@@ -130,6 +132,17 @@ typedef struct RelationSyncEntry\n TupleConversionMap *map;\n } RelationSyncEntry;\n\n+/*\n+ * Maintain a per-transaction level variable to track whether the\n+ * transaction has sent BEGIN or BEGIN PREPARE. BEGIN or BEGIN PREPARE\n+ * is only sent when the first change in a transaction is processed.\n+ * This make it possible to skip transactions that are empty.\n+ */\n\n=>\n\ntypo: \"make it possible\" --> \"makes it possible\"\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 23 Jul 2021 19:38:07 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Jul 23, 2021 at 7:38 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I have reviewed the v10 patch.\n>\n> Apply / build / test was all OK.\n>\n> Just one review comment:\n>\n> //////////\n>\n> 1. Typo\n>\n> @@ -130,6 +132,17 @@ typedef struct RelationSyncEntry\n> TupleConversionMap *map;\n> } RelationSyncEntry;\n>\n> +/*\n> + * Maintain a per-transaction level variable to track whether the\n> + * transaction has sent BEGIN or BEGIN PREPARE. 
BEGIN or BEGIN PREPARE\n> + * is only sent when the first change in a transaction is processed.\n> + * This make it possible to skip transactions that are empty.\n> + */\n>\n> =>\n>\n> typo: \"make it possible\" --> \"makes it possible\"\n>\n\nfixed.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 23 Jul 2021 20:09:33 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "FYI - I have checked the v11 patch. Everything applies, builds, and\ntests OK for me, and I have no more review comments. So v11 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 26 Jul 2021 11:20:23 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Jul 23, 2021 at 8:09 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> fixed.\n\n\nThe v11 patch LGTM.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 26 Jul 2021 12:03:59 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Friday, July 23, 2021 7:10 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> On Fri, Jul 23, 2021 at 7:38 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > I have reviewed the v10 patch.\r\nThe patch v11 looks good to me as well. 
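For anyone skimming the thread, the per-transaction flag described in the reviewed comment reduces to a small sketch like the following. These are hypothetical names and a simplified lifecycle, not the patch code itself (the real patch allocates the state in the decoding context's memory context and must also handle the streaming and two-phase paths):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Stand-in for pgoutput's per-transaction state (PGOutputTxnData). */
typedef struct SketchTxnData
{
    bool        sent_begin_txn; /* has BEGIN been sent for this txn yet? */
} SketchTxnData;

/* begin callback: only allocate state; do NOT send BEGIN yet */
SketchTxnData *
sketch_begin_txn(void)
{
    return calloc(1, sizeof(SketchTxnData));
}

/* change callback: send BEGIN lazily, before the first real change */
void
sketch_change(SketchTxnData *txndata)
{
    if (!txndata->sent_begin_txn)
    {
        /* ... write the BEGIN message here ... */
        txndata->sent_begin_txn = true;
    }
    /* ... write the change itself here ... */
}

/* commit callback: returns whether COMMIT was sent (false == txn skipped) */
bool
sketch_commit_txn(SketchTxnData *txndata)
{
    bool        sent = txndata->sent_begin_txn;

    if (sent)
    {
        /* ... write the COMMIT message here ... */
    }
    free(txndata);
    return sent;
}
```

A transaction whose changes were all filtered out never reaches sketch_change(), so neither BEGIN nor COMMIT goes to the wire.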
\r\nThanks for addressing my past comments.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 26 Jul 2021 04:12:16 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "Hi Ajin.\n\nI have spent some time studying how your \"empty transaction\" (v11)\npatch will affect network traffic and transaction throughput.\n\nBLUF\n====\n\nFor my test environment the general observations with the patch applied are:\n- There is a potentially large reduction of network traffic (depends\non the number of empty transactions sent)\n- Transaction throughput improved up to 7% (average ~2% across\nmixtures) for Synchronous mode\n- Transaction throughput improved up to 7% (average ~3% across\nmixtures) for NOT Synchronous mode\n\nSo this patch LGTM.\n\n\nTEST INFORMATION\n================\n\nOverview\n-------------\n\n1. There are 2 similar tables. One table is published; the other is not.\n\n2. Equivalent simple SQL operations are performed on these tables. E.g.\n- INSERT/UPDATE/DELETE using normal COMMIT\n- INSERT/UPDATE/DELETE using 2PC COMMIT PREPARED\n\n3. pg_bench is used to measure the throughput for different mixes of\nempty and not-empty transactions sent. E.g.\n- 0% are empty\n- 25% are empty\n- 50% are empty\n- 75% are empty\n- 100% are empty\n\n4. The apply_dispatch code has been temporarily modified to log the\nnumber of protocol messages/bytes being processed.\n- At the conclusion of the test run the logs are processed to extract\nthe numbers.\n\n5. Each test run is 15 minutes elapsed time.\n\n6. The tests are repeated without/with your patch applied\n- So, there are 2 (without/with patch) x 5 (different mixes) = 10 test results\n- Transaction throughput results are from pg_bench\n- Protocol message bytes are extracted from the logs (from modified\napply_dispatch)\n\n7. 
Also, the entire set of 10 test cases was repeated with\nsynchronous_standby_names setting enable/disabled.\n- Enabled, so the results are for total round-trip processing of the pub/sub.\n- Disabled. no waiting at the publisher side.\n\n\nConfiguration\n-------------------\n\nMy environment is a single test machine with 2 PG instances (for pub and sub).\n\nUsing default configs except:\n\nPUB-node\n- wal_level = logical\n- max_wal_senders = 10\n- logical_decoding_work_mem = 64kB\n- checkpoint_timeout = 30min\n- min_wal_size = 10GB\n- max_wal_size = 20GB\n- shared_buffers = 2GB\n- synchronous_standby_names = 'sync_sub' (for synchronous testing only)\n\nSUB-node\n- max_worker_processes = 11\n- max_logical_replication_workers = 10\n- checkpoint_timeout = 30min\n- min_wal_size = 10GB\n- max_wal_size = 20GB\n- shared_buffers = 2GB\n\nSQL files\n-------------\n\nContents of test_empty_not_published.sql:\n\n-- Operations for table not published\nBEGIN;\nINSERT INTO test_tab_nopub VALUES(1, 'foo');\nUPDATE test_tab_nopub SET b = 'bar' WHERE a = 1;\nDELETE FROM test_tab_nopub WHERE a = 1;\nCOMMIT;\n\n-- 2PC operations for table not published\nBEGIN;\nINSERT INTO test_tab_nopub VALUES(2, 'fizz');\nUPDATE test_tab_nopub SET b = 'bang' WHERE a = 2;\nDELETE FROM test_tab_nopub WHERE a = 2;\nPREPARE TRANSACTION 'gid_nopub';\nCOMMIT PREPARED 'gid_nopub';\n\n~~\n\nContents of test_empty_published.sql:\n\n(same as above but the table is called test_tab)\n\n\nSQL Tables\n----------------\n\n(tables are the same apart from the name)\n\nCREATE TABLE test_tab (a int primary key, b text, c timestamptz\nDEFAULT now(), d bigint DEFAULT 999);\n\nCREATE TABLE test_tab_nopub (a int primary key, b text, c timestamptz\nDEFAULT now(), d bigint DEFAULT 999);\n\n\nExample pg_bench command\n------------------------\n\n(this example is showing a test for a 25% mix of empty transactions)\n\npgbench -s 100 -T 900 -c 1 -f test_empty_not_published.sql@5 -f\ntest_empty_published.sql@15 
test_pub\n\n\nRESULTS / OBSERVATIONS\n======================\n\nSynchronous Mode\n----------------\n\n- As the percentage mix of empty transactions increases, so does the\ntransaction throughput. I assume this is because we are using\nsynchronous mode; so when there is less waiting time, then there is\nmore time available for transaction processing\n\n- The performance was generally similar before/after the patch, but\nthere was an observed throughput improvement of ~2% (averaged across\nall mixes)\n\n- The number of protocol bytes is associated with the number of\ntransactions that are processed during the test time of 15 minutes.\nThis adds up to a significant number of bytes even when the\ntransactions are empty.\n\n- For the unpatched code as the transaction rate increases, then so\ndoes the number of traffic bytes.\n\n- The patch improves this significantly by eliminating all the empty\ntransaction traffic.\n\n- Before the patch, even \"empty transactions\" are processing some\nbytes, so it can never reach zero. After the patch, empty transaction\ntraffic is eliminated entirely.\n\n\nNOT Synchronous Mode\n--------------------\n\n- Since there is no synchronous waiting for round trips, the\ntransaction throughput is generally consistent regardless of the empty\ntransaction mix.\n\n- There is a hint of a small overall improvement in throughput as the\nempty transaction mix approaches near 100%. For my test environment\nboth the pub/sub nodes are using the same machine/CPU, so I guess is\nthat when there is less CPU spent processing messages in the Apply\nWorker then there is more CPU available to pump transactions at the\npublisher side.\n\n- The patch transaction throughput seems ~3% better than for\nnon-patched. 
This might also be attributable to the same reason\nmentioned above - less CPU spent processing empty messages at the\nsubscriber side leaves more CPU available to pump transactions from\nthe publisher side.\n\n- The number of protocol bytes is associated with the number of\ntransactions that are processed during the test time of 15 minutes.\n\n- Because the transaction throughput is consistent, the traffic of\nprotocol bytes here is determined mainly by the proportion of \"empty\ntransactions\" in the mixture.\n\n- Before the patch, even “empty transactions” are processing some\nbytes, so it can never reach zero. After the patch, the empty\ntransaction traffic is eliminated entirely.\n\n\nATTACHMENTS\n===========\n\nPSA\n\nA1. A PDF version of my test report (also includes raw result data)\nA2. Sync: Graph of Transaction throughput\nA3. Sync: Graph of Protocol bytes (total)\nA4. Sync: Graph of Protocol bytes (per transaction)\nA5. Not-Sync: Graph of Transaction throughput\nA6. Not-Sync: Graph of Protocol bytes (total)\nA7. Not-Sync: Graph of Protocol bytes (per transaction)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Fri, 30 Jul 2021 15:40:52 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Jul 23, 2021 at 3:39 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n\nLet's first split the patch for prepared and non-prepared cases as\nthat will help to focus on each of them separately. BTW, why haven't\nyou considered implementing point 1b as explained by Andres in his\nemail [1]? 
I think we can send a keepalive message in case of\nsynchronous replication when we skip an empty transaction, otherwise,\nit might delay in responding to transactions synchronous_commit mode.\nI think in the tests done in the thread, it might not have been shown\nbecause we are already sending keepalives too frequently. But what if\nsomeone disables wal_sender_timeout or kept it to a very large value?\nSee WalSndKeepaliveIfNecessary. The other thing you might want to look\nat is if the reason for frequent keepalives is the same as described\nin the email [2].\n\nFew other miscellaneous comments:\n1.\nstatic void\n pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n- XLogRecPtr commit_lsn)\n+ XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn,\n+ TimestampTz prepare_time)\n {\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n+\n OutputPluginUpdateProgress(ctx);\n\n+ /*\n+ * If the BEGIN PREPARE was not yet sent, then it means there were no\n+ * relevant changes encountered, so we can skip the COMMIT PREPARED\n+ * message too.\n+ */\n+ if (txndata)\n+ {\n+ bool skip = !txndata->sent_begin_txn;\n+ pfree(txndata);\n+ txn->output_plugin_private = NULL;\n\nHow is this supposed to work after the restart when prepared is sent\nbefore the restart and we are just sending commit_prepared after\nrestart? Won't this lead to sending commit_prepared even when the\ncorresponding prepare is not sent? 
Can we think of a better way to\ndeal with this?\n\n2.\n@@ -222,8 +224,10 @@ logicalrep_write_commit_prepared(StringInfo out,\nReorderBufferTXN *txn,\n pq_sendbyte(out, flags);\n\n /* send fields */\n+ pq_sendint64(out, prepare_end_lsn);\n pq_sendint64(out, commit_lsn);\n pq_sendint64(out, txn->end_lsn);\n+ pq_sendint64(out, prepare_time);\n\nDoesn't this means a change of protocol and how is it suppose to work\nwhen say publisher is 15 and subscriber from 14 which I think works\nwithout such a change?\n\n\n[1] - https://www.postgresql.org/message-id/20200309183018.tzkzwu635sd366ej%40alap3.anarazel.de\n[2] - https://www.postgresql.org/message-id/CALtH27cip5uQNJb4uHjLXtx1R52ELqXVfcP9fhHr%3DAvFo1dtqw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 2 Aug 2021 14:50:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 23, 2021 at 3:39 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n>\n> Let's first split the patch for prepared and non-prepared cases as\n> that will help to focus on each of them separately.\n\nAs a first shot, I have split the patch into prepared and non-prepared cases,\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Sat, 7 Aug 2021 00:01:42 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Sat, Aug 7, 2021 at 12:01 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 23, 2021 at 3:39 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> >\n> > Let's first split the patch for prepared and non-prepared cases as\n> > that will help to focus on each of them separately.\n>\n> As a first shot, I have split 
the patch into prepared and non-prepared cases,\n\nI have reviewed the v12* split patch set.\n\nApply / build / test was all OK\n\nBelow are my code review comments (mostly cosmetic).\n\n//////////\n\nComments for v12-0001\n=====================\n\n1. Patch comment\n\n=>\n\nThis comment as-is might have been OK before the 2PC code was\ncommitted, but now that the 2PC is part of the HEAD perhaps this\ncomment needs to be expanded just to say this patch is ONLY for fixing\nempty transactions for the cases of non-\"streaming\" and\nnon-\"two_phase\", and the other kinds will be tackled separately.\n\n------\n\n2. src/backend/replication/pgoutput/pgoutput.c - PGOutputTxnData comment\n\n+/*\n+ * Maintain a per-transaction level variable to track whether the\n+ * transaction has sent BEGIN or BEGIN PREPARE. BEGIN or BEGIN PREPARE\n+ * is only sent when the first change in a transaction is processed.\n+ * This makes it possible to skip transactions that are empty.\n+ */\n\n=>\n\nMaybe this is true for the combined v12-0001/v12-0002 case but just\nfor the v12-0001 patch I think it is not right to imply that some\nskipping of the BEGIN_PREPARE is possible, because IIUC it isn't\nimplemented in *this* patch.\n\n------\n\n3. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_txn whitespace\n\n+ PGOutputTxnData *txndata = MemoryContextAllocZero(ctx->context,\n+ sizeof(PGOutputTxnData));\n\n=>\n\nMisaligned indentation?\n\n------\n\n4. src/backend/replication/pgoutput/pgoutput.c - pgoutput_change brackets\n\n+ /*\n+ * Output BEGIN if we haven't yet, unless streaming.\n+ */\n+ if (!in_streaming && !txndata->sent_begin_txn)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ }\n\n=>\n\nThe brackets are not needed for the if with a single statement.\n\n------\n\n5. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_truncate\nbrackets/comment\n\n+ /*\n+ * output BEGIN if we haven't yet,\n+ * while streaming no need to send BEGIN / BEGIN PREPARE.\n+ */\n+ if (!in_streaming && !txndata->sent_begin_txn)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ }\n\n5a. =>\n\nSame as review comment 4. The brackets are not needed for the if with\na single statement.\n\n5b. =>\n\nNotice this code is the same as cited in review comment 4. So probably\nthe code comment should be consistent/same also?\n\n------\n\n6. src/backend/replication/pgoutput/pgoutput.c - pgoutput_message brackets\n\n+ Assert(txndata);\n+ if (!txndata->sent_begin_txn)\n+ {\n+ pgoutput_begin(ctx, txn);\n+ }\n\n=>\n\nThe brackets are not needed for the if with a single statement.\n\n------\n\n7. typdefs.list\n\n=> The structure PGOutputTxnData was added in v12-0001, so the\ntypedefs.list probably should also be updated.\n\n//////////\n\nComments for v12-0002\n=====================\n\n8. Patch comment\n\nThis patch addresses the above problem by postponing the BEGIN / BEGIN\nPREPARE messages until the first change is encountered.\nIf (when processing a COMMIT / PREPARE message) we find there had been\nno other change for that transaction, then do not send the COMMIT /\nPREPARE message. This means that pgoutput will skip BEGIN / COMMIT\nor BEGIN PREPARE / PREPARE messages for transactions that are empty.\npgoutput will also skip COMMIT PREPARED and ROLLBACK PREPARED messages\nfor transactions that are empty.\n\n8a. =>\n\nI’m not sure this comment is 100% correct for this specific patch. The\nwhole BEGIN/COMMIT was already handled by the v12-0001 patch, right?\nSo really this comment should only be mentioning about BEGIN PREPARE\nand COMMIT PREPARED I thought.\n\n8b. =>\n\nI think there should also be some mention that this patch is not\nhandling the \"streaming\" case of empty tx at all.\n\n------\n\n9. 
src/backend/replication/logical/proto.c - protocol version\n\n@@ -248,8 +250,10 @@ logicalrep_write_commit_prepared(StringInfo out,\nReorderBufferTXN *txn,\n pq_sendbyte(out, flags);\n\n /* send fields */\n+ pq_sendint64(out, prepare_end_lsn);\n pq_sendint64(out, commit_lsn);\n pq_sendint64(out, txn->end_lsn);\n+ pq_sendint64(out, prepare_time);\n pq_sendint64(out, txn->xact_time.commit_time);\n pq_sendint32(out, txn->xid);\n\n=>\n\nI agree with a previous feedback comment from Amit - Probably there is\nsome protocol version requirement/implications here because the\nmessage format has been changed in logicalrep_write_commit_prepared\nand logicalrep_read_commit_prepared.\n\ne.g. Does this code need to be cognisant of the version and behave\ndifferently accordingly?\n\n------\n\n10. src/backend/replication/pgoutput/pgoutput.c -\npgoutput_begin_prepare flag moved?\n\n+ Assert(txndata);\n OutputPluginPrepareWrite(ctx, !send_replication_origin);\n logicalrep_write_begin_prepare(ctx->out, txn);\n+ txndata->sent_begin_txn = true;\n\n send_repl_origin(ctx, txn->origin_id, txn->origin_lsn,\n send_replication_origin);\n\n OutputPluginWrite(ctx, true);\n- txndata->sent_begin_txn = true;\n- txn->output_plugin_private = txndata;\n }\n\n=>\n\nIn the v12-0001 patch, you set the begin_txn flags AFTER the\nOuputPluginWrite, but in the v12-0002 you set them BEFORE the\nOuputPluginWrite. Why the difference? Maybe it should be consistent?\n\n------\n\n11. src/test/subscription/t/021_twophase.pl - proto_version tests needed?\n\nDoes this need some other tests to make sure the older proto_version\nis still usable? Refer also to the review comment 9.\n\n------\n\n12. 
src/tools/pgindent/typedefs.list - PGOutputTxnData\n\n PGOutputData\n+PGOutputTxnData\n PGPROC\n\n=>\n\nThis change looks good, but I think it should have been done in\nv12-0001 and not here in v12-0002.\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 9 Aug 2021 17:05:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 23, 2021 at 3:39 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n>\n> Let's first split the patch for prepared and non-prepared cases as\n> that will help to focus on each of them separately. BTW, why haven't\n> you considered implementing point 1b as explained by Andres in his\n> email [1]? I think we can send a keepalive message in case of\n> synchronous replication when we skip an empty transaction, otherwise,\n> it might delay in responding to transactions synchronous_commit mode.\n> I think in the tests done in the thread, it might not have been shown\n> because we are already sending keepalives too frequently. But what if\n> someone disables wal_sender_timeout or kept it to a very large value?\n> See WalSndKeepaliveIfNecessary. The other thing you might want to look\n> at is if the reason for frequent keepalives is the same as described\n> in the email [2].\n>\n\nI have tried to address the comment here by modifying the\nctx->update_progress callback function (WalSndUpdateProgress) provided\nfor plugins. I have added an option\nby which the callback can specify if it wants to send keep_alives. 
And\nwhen the callback is called with that option set, walsender updates a\nflag force_keep_alive_syncrep.\nThe Walsender in the WalSndWaitForWal for loop, checks this flag and\nif synchronous replication is enabled, then sends a keep alive.\nCurrently this logic\nis added as an else to the current logic that is already there in\nWalSndWaitForWal, which is probably considered unnecessary and a\nsource of the keep alive flood\nthat you talked about. So, I can change that according to how that fix\nshapes up there. I have also added an extern function in syncrep.c\nthat makes it possible\nfor walsender to query if synchronous replication is turned on.\n\nThe reason I had to turn on a flag and rely on the WalSndWaitForWal to\nsend the keep alive in its next iteration is because I tried doing\nthis directly when a\ncommit is skipped but it didn't work. The reason for this is that when\nthe commit is being decoded the sentptr at the moment is at the commit\nLSN and the keep alive\nwill be sent for the commit LSN but the syncrep wait is waiting for\nend_lsn of the transaction which is the next LSN. 
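To put the LSN ordering problem just described in concrete terms (a simplification with made-up numbers, not the walsender code): the synchronous-commit waiter is only released once the acknowledged position reaches the transaction's end_lsn, which lies just past the commit record, so a keepalive that reports the commit LSN itself cannot release it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t SketchLSN;     /* stand-in for XLogRecPtr */

/*
 * A synchronous-commit waiter sleeps until the position acknowledged by
 * the standby/walsender reaches the LSN it is waiting for.
 */
bool
sketch_waiter_released(SketchLSN reported, SketchLSN waiting_for)
{
    return reported >= waiting_for;
}
```

With a commit record at 0x1000 and an end_lsn of 0x1028, a report at the commit LSN leaves the waiter blocked; only a later report at or beyond end_lsn releases it, which is why the keepalive is deferred to the next WalSndWaitForWal iteration here.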
So, sending a keep\nalive at the moment the\ncommit is decoded doesn't seem to solve the problem of the waiting\nsynchronous reply.\n\n> Few other miscellaneous comments:\n> 1.\n> static void\n>  pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> - XLogRecPtr commit_lsn)\n> + XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn,\n> + TimestampTz prepare_time)\n> {\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> OutputPluginUpdateProgress(ctx);\n>\n> + /*\n> + * If the BEGIN PREPARE was not yet sent, then it means there were no\n> + * relevant changes encountered, so we can skip the COMMIT PREPARED\n> + * message too.\n> + */\n> + if (txndata)\n> + {\n> + bool skip = !txndata->sent_begin_txn;\n> + pfree(txndata);\n> + txn->output_plugin_private = NULL;\n>\n> How is this supposed to work after the restart when prepared is sent\n> before the restart and we are just sending commit_prepared after\n> restart? Won't this lead to sending commit_prepared even when the\n> corresponding prepare is not sent? Can we think of a better way to\n> deal with this?\n>\n\nI have tried to resolve this by adding logic in worker.c to silently\nignore spurious commit_prepareds. But this change required checking if\nthe prepare exists on the\nsubscriber before attempting the commit_prepared, but the current API\nthat checks this requires prepare time and transaction end_lsn. But\nfor this I had to\nchange the protocol of commit_prepared, and I understand that this\nwould break backward compatibility between subscriber and publisher\n(you have raised this issue as well).\nI am not sure how else to handle this, let me know if you have any\nother ideas. 
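To illustrate the silently-ignore approach just described, a rough apply-side sketch (hypothetical names; the real lookup would go through the subscriber's prepared-transaction state, not an in-memory array of identifiers):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Guard for COMMIT PREPARED in the apply worker: if no prepared
 * transaction with this identifier exists on the subscriber (because
 * the empty PREPARE was skipped upstream, or lost across a restart),
 * ignore the message instead of raising an error.
 */
bool
sketch_apply_commit_prepared(int gid, const int *prepared_gids, int nprepared)
{
    for (int i = 0; i < nprepared; i++)
    {
        if (prepared_gids[i] == gid)
        {
            /* ... finish the prepared transaction here ... */
            return true;        /* applied */
        }
    }
    return false;               /* spurious: silently ignored */
}
```

Whether such an existence check can work from the gid alone, rather than the existing API that also wants the prepare time and end_lsn, is exactly the open question here.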
One option could be to have another API to check if the\nprepare exists on the subscriber with\nthe prepared 'gid' alone, without checking prepare_time or end_lsn.\nLet me know if this idea works.\n\nI have left out the patch 0002 for prepared transactions until we\narrive at a decision on how to address the above issue.\n\nPeter,\nI have also addressed the comments you've raised on patch 0001, please\nhave a look and confirm.\n\nRegards,\nAjin Cherian\nFujitsu Australia.", "msg_date": "Fri, 13 Aug 2021 21:00:58 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Aug 13, 2021 at 9:01 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 23, 2021 at 3:39 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> >\n> > Let's first split the patch for prepared and non-prepared cases as\n> > that will help to focus on each of them separately. BTW, why haven't\n> > you considered implementing point 1b as explained by Andres in his\n> > email [1]? I think we can send a keepalive message in case of\n> > synchronous replication when we skip an empty transaction, otherwise,\n> > it might delay in responding to transactions synchronous_commit mode.\n> > I think in the tests done in the thread, it might not have been shown\n> > because we are already sending keepalives too frequently. But what if\n> > someone disables wal_sender_timeout or kept it to a very large value?\n> > See WalSndKeepaliveIfNecessary. The other thing you might want to look\n> > at is if the reason for frequent keepalives is the same as described\n> > in the email [2].\n> >\n>\n> I have tried to address the comment here by modifying the\n> ctx->update_progress callback function (WalSndUpdateProgress) provided\n> for plugins. 
I have added an option\n> by which the callback can specify if it wants to send keep_alives. And\n> when the callback is called with that option set, walsender updates a\n> flag force_keep_alive_syncrep.\n> The Walsender in the WalSndWaitForWal for loop, checks this flag and\n> if synchronous replication is enabled, then sends a keep alive.\n> Currently this logic\n> is added as an else to the current logic that is already there in\n> WalSndWaitForWal, which is probably considered unnecessary and a\n> source of the keep alive flood\n> that you talked about. So, I can change that according to how that fix\n> shapes up there. I have also added an extern function in syncrep.c\n> that makes it possible\n> for walsender to query if synchronous replication is turned on.\n>\n> The reason I had to turn on a flag and rely on the WalSndWaitForWal to\n> send the keep alive in its next iteration is because I tried doing\n> this directly when a\n> commit is skipped but it didn't work. The reason for this is that when\n> the commit is being decoded the sentptr at the moment is at the commit\n> LSN and the keep alive\n> will be sent for the commit LSN but the syncrep wait is waiting for\n> end_lsn of the transaction which is the next LSN. 
So, sending a keep\n> alive at the moment the\n> commit is decoded doesn't seem to solve the problem of the waiting\n> synchronous reply.\n>\n> > Few other miscellaneous comments:\n> > 1.\n> > static void\n> > pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,\n> > ReorderBufferTXN *txn,\n> > - XLogRecPtr commit_lsn)\n> > + XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn,\n> > + TimestampTz prepare_time)\n> > {\n> > + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> > +\n> > OutputPluginUpdateProgress(ctx);\n> >\n> > + /*\n> > + * If the BEGIN PREPARE was not yet sent, then it means there were no\n> > + * relevant changes encountered, so we can skip the COMMIT PREPARED\n> > + * message too.\n> > + */\n> > + if (txndata)\n> > + {\n> > + bool skip = !txndata->sent_begin_txn;\n> > + pfree(txndata);\n> > + txn->output_plugin_private = NULL;\n> >\n> > How is this supposed to work after the restart when prepared is sent\n> > before the restart and we are just sending commit_prepared after\n> > restart? Won't this lead to sending commit_prepared even when the\n> > corresponding prepare is not sent? Can we think of a better way to\n> > deal with this?\n> >\n>\n> I have tried to resolve this by adding logic in worker,c to silently\n> ignore spurious commit_prepareds. But this change required checking if\n> the prepare exists on the\n> subscriber before attempting the commit_prepared but the current API\n> that checks this requires prepare time and transaction end_lsn. But\n> for this I had to\n> change the protocol of commit_prepared, and I understand that this\n> would break backward compatibility between subscriber and publisher\n> (you have raised this issue as well).\n> I am not sure how else to handle this, let me know if you have any\n> other ideas. 
One option could be to have another API to check if the\n> prepare exists on the subscriber with\n> the prepared 'gid' alone, without checking prepare_time or end_lsn.\n> Let me know if this idea works.\n>\n> I have left out the patch 0002 for prepared transactions until we\n> arrive at a decision on how to address the above issue.\n>\n> Peter,\n> I have also addressed the comments you've raised on patch 0001, please\n> have a look and confirm.\n\nI have reviewed the v13-0001 patch.\n\nApply / build / test was all OK\n\nBelow are my code review comments.\n\n//////////\n\nComments for v13-0001\n=====================\n\n1. Patch comment\n\n=>\n\nProbably this comment should include some description for the new\n\"keepalive\" logic as well.\n\n------\n\n2. src/backend/replication/syncrep.c - new function\n\n@@ -330,6 +330,18 @@ SyncRepWaitForLSN(XLogRecPtr lsn, bool commit)\n }\n\n /*\n+ * Check if Sync Rep is enabled\n+ */\n+bool\n+SyncRepEnabled(void)\n+{\n+ if (SyncRepRequested() && ((volatile WalSndCtlData *)\nWalSndCtl)->sync_standbys_defined)\n+ return true;\n+ else\n+ return false;\n+}\n+\n\n2a. Function comment =>\n\nWhy abbreviations in the comment? Why not say \"synchronous\nreplication\" instead of \"Sync Rep\".\n\n~~\n\n2b. if/else =>\n\nRemove the if/else. e.g.\n\nreturn SyncRepRequested() && ((volatile WalSndCtlData *)\nWalSndCtl)->sync_standbys_defined;\n\n~~\n\n2c. Call the new function =>\n\nThere is some existing similar code in SyncRepWaitForLSN(), e.g.\n\nif (!SyncRepRequested() ||\n!((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\nreturn;\n\nNow that you have a new function you maybe can call it from here, e.g.\n\nif (!SyncRepEnabled())\nreturn;\n\n------\n\n3. src/backend/replication/walsender.c - whitespace\n\n+ if (send_keep_alive)\n+ force_keep_alive_syncrep = true;\n+\n+\n\n=>\n\nExtra blank line?\n\n------\n\n4. 
src/backend/replication/walsender.c - call keepalive\n\n if (MyWalSnd->flush < sentPtr &&\n MyWalSnd->write < sentPtr &&\n !waiting_for_ping_response)\n+ {\n WalSndKeepalive(false);\n+ }\n+ else\n+ {\n+ if (force_keep_alive_syncrep && SyncRepEnabled())\n+ WalSndKeepalive(false);\n+ }\n\n\n4a. Move the SyncRepEnabled() call =>\n\nI think it is not necessary to call the SyncRepEnabled() here. Instead,\nit might be better if this is called back when you assign the\nforce_keep_alive_syncrep flag. So change the WalSndUpdateProgress,\ne.g.\n\nBEFORE\nif (send_keep_alive)\n force_keep_alive_syncrep = true;\nAFTER\nforce_keep_alive_syncrep = send_keep_alive && SyncRepEnabled();\n\nNote: That assignment also deserves a big comment to say what it is doing.\n\n~~\n\n4b. Change the if/else =>\n\nIf you make the change for 4a. then perhaps the keepalive if/else is\noverkill and could be changed, e.g.\n\nif (force_keep_alive_syncrep ||\n MyWalSnd->flush < sentPtr &&\n MyWalSnd->write < sentPtr &&\n !waiting_for_ping_response)\n WalSndKeepalive(false);\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 16 Aug 2021 16:43:51 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Aug 16, 2021 at 4:44 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> I have reviewed the v13-0001 patch.\n>\n> Apply / build / test was all OK\n>\n> Below are my code review comments.\n>\n> //////////\n>\n> Comments for v13-0001\n> =====================\n>\n> 1. Patch comment\n>\n> =>\n>\n> Probably this comment should include some description for the new\n> \"keepalive\" logic as well.\n\nAdded.\n\n>\n> ------\n>\n> 2. 
src/backend/replication/syncrep.c - new function\n>\n> @@ -330,6 +330,18 @@ SyncRepWaitForLSN(XLogRecPtr lsn, bool commit)\n> }\n>\n> /*\n> + * Check if Sync Rep is enabled\n> + */\n> +bool\n> +SyncRepEnabled(void)\n> +{\n> + if (SyncRepRequested() && ((volatile WalSndCtlData *)\n> WalSndCtl)->sync_standbys_defined)\n> + return true;\n> + else\n> + return false;\n> +}\n> +\n>\n> 2a. Function comment =>\n>\n> Why abbreviations in the comment? Why not say \"synchronous\n> replication\" instead of \"Sync Rep\".\n>\n\nChanged.\n\n> ~~\n>\n> 2b. if/else =>\n>\n> Remove the if/else. e.g.\n>\n> return SyncRepRequested() && ((volatile WalSndCtlData *)\n> WalSndCtl)->sync_standbys_defined;\n>\n> ~~\n\nChanged.\n\n>\n> 2c. Call the new function =>\n>\n> There is some existing similar code in SyncRepWaitForLSN(), e.g.\n>\n> if (!SyncRepRequested() ||\n> !((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n> return;\n>\n> Now that you have a new function you maybe can call it from here, e.g.\n>\n> if (!SyncRepEnabled())\n> return;\n>\n\nUpdated.\n\n> ------\n>\n> 3. src/backend/replication/walsender.c - whitespace\n>\n> + if (send_keep_alive)\n> + force_keep_alive_syncrep = true;\n> +\n> +\n>\n> =>\n>\n> Extra blank line?\n\nRemoved.\n\n>\n> ------\n>\n> 4. src/backend/replication/walsender.c - call keepalive\n>\n> if (MyWalSnd->flush < sentPtr &&\n> MyWalSnd->write < sentPtr &&\n> !waiting_for_ping_response)\n> + {\n> WalSndKeepalive(false);\n> + }\n> + else\n> + {\n> + if (force_keep_alive_syncrep && SyncRepEnabled())\n> + WalSndKeepalive(false);\n> + }\n>\n>\n> 4a. Move the SynRepEnabled() call =>\n>\n> I think it is not necessary to call the SynRepEnabled() here. Instead,\n> it might be better if this is called back when you assign the\n> force_keep_alive_syncrep flag. 
So change the WalSndUpdateProgress,\n> e.g.\n>\n> BEFORE\n> if (send_keep_alive)\n> force_keep_alive_syncrep = true;\n> AFTER\n> force_keep_alive_syncrep = send_keep_alive && SyncRepEnabled();\n>\n> Note: Also, that assignment also deserves a big comment to say what it is doing.\n>\n> ~~\n\nchanged.\n\n>\n> 4b. Change the if/else =>\n>\n> If you make the change for 4a. then perhaps the keepalive if/else is\n> overkill and could be changed.e.g.\n>\n> if (force_keep_alive_syncrep ||\n> MyWalSnd->flush < sentPtr &&\n> MyWalSnd->write < sentPtr &&\n> !waiting_for_ping_response)\n> WalSndKeepalive(false);\n>\n\nChanged.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 17 Aug 2021 22:58:42 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "I reviewed the v14-0001 patch.\n\nAll my previous comments have been addressed.\n\nApply / build / test was all OK.\n\n------\n\nMore review comments:\n\n1. Params names in the function declarations should match the rest of the code.\n\n1a. src/include/replication/logical.h\n\n@@ -26,7 +26,8 @@ typedef LogicalOutputPluginWriterWrite\nLogicalOutputPluginWriterPrepareWrite;\n\n typedef void (*LogicalOutputPluginWriterUpdateProgress) (struct\nLogicalDecodingContext *lr,\n XLogRecPtr Ptr,\n- TransactionId xid\n+ TransactionId xid,\n+ bool send_keep_alive\n\n=>\nChange \"send_keep_alive\" --> \"send_keepalive\"\n\n~~\n\n1b. 
src/include/replication/output_plugin.h\n\n@@ -243,6 +243,6 @@ typedef struct OutputPluginCallbacks\n /* Functions in replication/logical/logical.c */\n extern void OutputPluginPrepareWrite(struct LogicalDecodingContext\n*ctx, bool last_write);\n extern void OutputPluginWrite(struct LogicalDecodingContext *ctx,\nbool last_write);\n-extern void OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx);\n+extern void OutputPluginUpdateProgress(struct LogicalDecodingContext\n*ctx, bool send_keep_alive);\n\n=>\nChange \"send_keep_alive\" --> \"send_keepalive\"\n\n------\n\n2. Comment should be capitalized - src/backend/replication/walsender.c\n\n@@ -170,6 +170,9 @@ static TimestampTz last_reply_timestamp = 0;\n /* Have we sent a heartbeat message asking for reply, since last reply? */\n static bool waiting_for_ping_response = false;\n\n+/* force keep alive when skipping transactions in synchronous\nreplication mode */\n+static bool force_keepalive_syncrep = false;\n\n=>\n\"force\" --> \"Force\"\n\n------\n\nOtherwise, v14-0001 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 25 Aug 2021 17:15:00 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Aug 25, 2021 at 5:15 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I reviewed the v14-0001 patch.\n>\n> All my previous comments have been addressed.\n>\n> Apply / build / test was all OK.\n>\n> ------\n>\n> More review comments:\n>\n> 1. Params names in the function declarations should match the rest of the code.\n>\n> 1a. 
src/include/replication/logical.h\n>\n> @@ -26,7 +26,8 @@ typedef LogicalOutputPluginWriterWrite\n> LogicalOutputPluginWriterPrepareWrite;\n>\n> typedef void (*LogicalOutputPluginWriterUpdateProgress) (struct\n> LogicalDecodingContext *lr,\n> XLogRecPtr Ptr,\n> - TransactionId xid\n> + TransactionId xid,\n> + bool send_keep_alive\n>\n> =>\n> Change \"send_keep_alive\" --> \"send_keepalive\"\n>\n> ~~\n>\n> 1b. src/include/replication/output_plugin.h\n>\n> @@ -243,6 +243,6 @@ typedef struct OutputPluginCallbacks\n> /* Functions in replication/logical/logical.c */\n> extern void OutputPluginPrepareWrite(struct LogicalDecodingContext\n> *ctx, bool last_write);\n> extern void OutputPluginWrite(struct LogicalDecodingContext *ctx,\n> bool last_write);\n> -extern void OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx);\n> +extern void OutputPluginUpdateProgress(struct LogicalDecodingContext\n> *ctx, bool send_keep_alive);\n>\n> =>\n> Change \"send_keep_alive\" --> \"send_keepalive\"\n>\n> ------\n>\n> 2. Comment should be capitalized - src/backend/replication/walsender.c\n>\n> @@ -170,6 +170,9 @@ static TimestampTz last_reply_timestamp = 0;\n> /* Have we sent a heartbeat message asking for reply, since last reply? */\n> static bool waiting_for_ping_response = false;\n>\n> +/* force keep alive when skipping transactions in synchronous\n> replication mode */\n> +static bool force_keepalive_syncrep = false;\n>\n> =>\n> \"force\" --> \"Force\"\n>\n> ------\n>\n> Otherwise, v14-0001 LGTM.\n>\n\nThanks for the comments. Addressed them in the attached patch.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Wed, 1 Sep 2021 20:57:39 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Sep 1, 2021 at 8:57 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Thanks for the comments. 
Addressed them in the attached patch.\n>\n> regards,\n> Ajin Cherian\n> Fujitsu Australia\n\nMinor update to rebase the patch so that it applies clean on HEAD.\n\nregards,\nAjin Cherian", "msg_date": "Tue, 11 Jan 2022 20:43:23 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tuesday, January 11, 2022 6:43 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> Minor update to rebase the patch so that it applies clean on HEAD.\r\nHi, thanks for your rebase.\r\n\r\nSeveral comments.\r\n\r\n(1) the commit message\r\n\r\n\"\r\ntransactions, keepalive messages are sent to keep the LSN locations updated\r\non the standby.\r\nThis patch does not skip empty transactions that are \"streaming\" or \"two-phase\".\r\n\"\r\n\r\nI suggest that one blank line might be needed before the last paragraph.\r\n\r\n(2) Could you please remove one pair of curly brackets for one sentence below ?\r\n\r\n@@ -1546,10 +1557,13 @@ WalSndWaitForWal(XLogRecPtr loc)\r\n * otherwise idle, this keepalive will trigger a reply. 
Processing the\r\n * reply will update these MyWalSnd locations.\r\n */\r\n- if (MyWalSnd->flush < sentPtr &&\r\n+ if (force_keepalive_syncrep ||\r\n+ (MyWalSnd->flush < sentPtr &&\r\n MyWalSnd->write < sentPtr &&\r\n- !waiting_for_ping_response)\r\n+ !waiting_for_ping_response))\r\n+ {\r\n WalSndKeepalive(false);\r\n+ }\r\n\r\n\r\n(3) Is this patch's responsibility to initialize the data in pgoutput_begin_prepare_txn ?\r\n\r\n@@ -433,6 +487,8 @@ static void\r\n pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\r\n {\r\n bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\r\n+ PGOutputTxnData *txndata = MemoryContextAllocZero(ctx->context,\r\n+ sizeof(PGOutputTxnData));\r\n\r\n OutputPluginPrepareWrite(ctx, !send_replication_origin);\r\n logicalrep_write_begin_prepare(ctx->out, txn);\r\n\r\n\r\nEven if we need this initialization for either non streaming case\r\nor non two_phase case, there can be another issue.\r\nWe don't free the allocated memory for this data, right ?\r\nThere's only one place to use free in the entire patch,\r\nwhich is in the pgoutput_commit_txn(). 
So,\r\ncorresponding free of memory looked necessary\r\nin the two phase commit functions.\r\n\r\n(4) SyncRepEnabled's better alignment.\r\n\r\nIIUC, SyncRepEnabled is called not only by the walsender but also by other backends\r\nvia CommitTransaction -> RecordTransactionCommit -> SyncRepWaitForLSN.\r\nThen, the place to add the prototype function for SyncRepEnabled seems not appropriate,\r\nstrictly speaking or requires a comment like /* called by wal sender or other backends */.\r\n\r\n@@ -90,6 +90,7 @@ extern void SyncRepCleanupAtProcExit(void);\r\n /* called by wal sender */\r\n extern void SyncRepInitConfig(void);\r\n extern void SyncRepReleaseWaiters(void);\r\n+extern bool SyncRepEnabled(void);\r\n\r\nEven if we intend it is only used by the walsender, the current code place\r\nof SyncRepEnabled in the syncrep.c might not be perfect.\r\nIn this file, seemingly we have a section for functions for wal sender processes\r\nand the place where you wrote it is not here.\r\n\r\nat src/backend/replication/syncrep.c, find a comment below.\r\n/*\r\n * ===========================================================\r\n * Synchronous Replication functions for wal sender processes\r\n * ===========================================================\r\n */\r\n\r\n(5) minor alignment for expressing a couple of messages.\r\n\r\n@@ -777,6 +846,9 @@ pgoutput_truncate(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n Oid *relids;\r\n TransactionId xid = InvalidTransactionId;\r\n\r\n+ /* If not streaming, should have setup txndata as part of BEGIN/BEGIN PREPARE */\r\n+ Assert(in_streaming || txndata);\r\n\r\n\r\nIn the commit message, the way you write is below.\r\n...\r\nskip BEGIN / COMMIT messages for transactions that are empty. 
The patch\r\n...\r\n\r\nIn this case, we have spaces back and forth for \"BEGIN / COMMIT\".\r\nThen, I suggest to unify all of those to show better alignment.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 26 Jan 2022 09:32:54 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Tuesday, January 11, 2022 6:43 PM From: Ajin Cherian <itsajin@gmail.com> wrote:\r\n> Minor update to rebase the patch so that it applies clean on HEAD.\r\nHi, let me share some additional comments on v16.\r\n\r\n\r\n(1) comment of pgoutput_change\r\n\r\n@@ -630,11 +688,15 @@ pgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n Relation relation, ReorderBufferChange *change)\r\n {\r\n PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\r\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n MemoryContext old;\r\n RelationSyncEntry *relentry;\r\n TransactionId xid = InvalidTransactionId;\r\n Relation ancestor = NULL;\r\n\r\n+ /* If not streaming, should have setup txndata as part of BEGIN/BEGIN PREPARE */\r\n+ Assert(in_streaming || txndata);\r\n+\r\n\r\nIn my humble opinion, the comment should not touch BEGIN PREPARE,\r\nbecause this patch's scope doesn't include two phase commit.\r\n(We could add this in another patch to extend the scope after the commit ?)\r\n\r\nThis applies to pgoutput_truncate's comment.\r\n\r\n(2) \"keep alive\" should be \"keepalive\" in WalSndUpdateProgress\r\n\r\n /*\r\n+ * When skipping empty transactions in synchronous replication, we need\r\n+ * to send a keep alive to keep the MyWalSnd locations updated.\r\n+ */\r\n+ force_keepalive_syncrep = send_keepalive && SyncRepEnabled();\r\n+\r\n\r\nAlso, this applies to the comment for force_keepalive_syncrep.\r\n\r\n(3) Should finish the second sentence with period in the comment of 
pgoutput_message.\r\n\r\n@@ -845,6 +923,19 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n if (in_streaming)\r\n xid = txn->xid;\r\n\r\n+ /*\r\n+ * Output BEGIN if we haven't yet.\r\n+ * Avoid for streaming and non-transactional messages\r\n\r\n(4) \"begin\" can be changed to \"BEGIN\" in the comment of PGOutputTxnData definition.\r\n\r\nIn the entire patch, when we express BEGIN message,\r\nwe use capital letters \"BEGIN\" except for one place.\r\nWe can apply the same to this place as well.\r\n\r\n+typedef struct PGOutputTxnData\r\n+{\r\n+ bool sent_begin_txn; /* flag indicating whether begin has been sent */\r\n+} PGOutputTxnData;\r\n+\r\n\r\n(5) inconsistent way to write Assert statements with blank lines\r\n\r\nIn the below case, it'd be better to insert one blank line\r\nafter the Assert();\r\n\r\n+static void\r\n+pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\r\n+{\r\n bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\r\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n\r\n+ Assert(txndata);\r\n OutputPluginPrepareWrite(ctx, !send_replication_origin);\r\n\r\n\r\n(6) new codes in the pgoutput_commit_txn looks messy slightly\r\n\r\n@@ -419,7 +455,25 @@ static void\r\n pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n XLogRecPtr commit_lsn)\r\n {\r\n- OutputPluginUpdateProgress(ctx);\r\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n+ bool skip;\r\n+\r\n+ Assert(txndata);\r\n+\r\n+ /*\r\n+ * If a BEGIN message was not yet sent, then it means there were no relevant\r\n+ * changes encountered, so we can skip the COMMIT message too.\r\n+ */\r\n+ skip = !txndata->sent_begin_txn;\r\n+ pfree(txndata);\r\n+ txn->output_plugin_private = NULL;\r\n+ OutputPluginUpdateProgress(ctx, skip);\r\n\r\nCould we conduct a refactoring for this new part ?\r\nIMO, writing codes to free the data structure at the top\r\nof 
function seems weird.\r\n\r\nOne idea is to export some part there\r\nand write a new function, something like below.\r\n\r\nstatic bool\r\ntxn_sent_begin(ReorderBufferTXN *txn)\r\n{\r\n PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n bool needs_skip;\r\n\r\n Assert(txndata);\r\n\r\n needs_skip = !txndata->sent_begin_txn;\r\n\r\n pfree(txndata);\r\n txn->output_plugin_private = NULL;\r\n\r\n return needs_skip;\r\n}\r\n\r\nFYI, I had a look at the v12-0002-Skip-empty-prepared-transactions-for-logical-rep.patch\r\nfor reference of pgoutput_rollback_prepared_txn and pgoutput_commit_prepared_txn.\r\nLooks this kind of function might work for future extensions as well.\r\nWhat did you think ?\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 26 Jan 2022 13:16:45 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Wed, Jan 26, 2022 at 8:33 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, January 11, 2022 6:43 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > Minor update to rebase the patch so that it applies clean on HEAD.\n> Hi, thanks for you rebase.\n>\n> Several comments.\n>\n> (1) the commit message\n>\n> \"\n> transactions, keepalive messages are sent to keep the LSN locations updated\n> on the standby.\n> This patch does not skip empty transactions that are \"streaming\" or \"two-phase\".\n> \"\n>\n> I suggest that one blank line might be needed before the last paragraph.\n\nChanged.\n\n>\n> (2) Could you please remove one pair of curly brackets for one sentence below ?\n>\n> @@ -1546,10 +1557,13 @@ WalSndWaitForWal(XLogRecPtr loc)\n> * otherwise idle, this keepalive will trigger a reply. 
Processing the\n> * reply will update these MyWalSnd locations.\n> */\n> - if (MyWalSnd->flush < sentPtr &&\n> + if (force_keepalive_syncrep ||\n> + (MyWalSnd->flush < sentPtr &&\n> MyWalSnd->write < sentPtr &&\n> - !waiting_for_ping_response)\n> + !waiting_for_ping_response))\n> + {\n> WalSndKeepalive(false);\n> + }\n>\n>\n\nChanged.\n\n> (3) Is this patch's reponsibility to intialize the data in pgoutput_begin_prepare_txn ?\n>\n> @@ -433,6 +487,8 @@ static void\n> pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\n> + PGOutputTxnData *txndata = MemoryContextAllocZero(ctx->context,\n> + sizeof(PGOutputTxnData));\n>\n> OutputPluginPrepareWrite(ctx, !send_replication_origin);\n> logicalrep_write_begin_prepare(ctx->out, txn);\n>\n>\n> Even if we need this initialization for either non streaming case\n> or non two_phase case, there can be another issue.\n> We don't free the allocated memory for this data, right ?\n> There's only one place to use free in the entire patch,\n> which is in the pgoutput_commit_txn(). So,\n> corresponding free of memory looked necessary\n> in the two phase commit functions.\n>\n\nActually it is required for begin_prepare to set the data type, so\nthat the checks in the pgoutput_change can make sure that\nthe begin prepare is sent. 
I've also added a free in commit_prepared code.\n\n> (4) SyncRepEnabled's better alignment.\n>\n> IIUC, SyncRepEnabled is called not only by the walsender but also by other backends\n> via CommitTransaction -> RecordTransactionCommit -> SyncRepWaitForLSN.\n> Then, the place to add the prototype function for SyncRepEnabled seems not appropriate,\n> strictly speaking or requires a comment like /* called by wal sender or other backends */.\n>\n> @@ -90,6 +90,7 @@ extern void SyncRepCleanupAtProcExit(void);\n> /* called by wal sender */\n> extern void SyncRepInitConfig(void);\n> extern void SyncRepReleaseWaiters(void);\n> +extern bool SyncRepEnabled(void);\n>\n> Even if we intend it is only used by the walsender, the current code place\n> of SyncRepEnabled in the syncrep.c might not be perfect.\n> In this file, seemingly we have a section for functions for wal sender processes\n> and the place where you wrote it is not here.\n>\n> at src/backend/replication/syncrep.c, find a comment below.\n> /*\n> * ===========================================================\n> * Synchronous Replication functions for wal sender processes\n> * ===========================================================\n> */\n\nChanged.\n>\n> (5) minor alignment for expressing a couple of messages.\n>\n> @@ -777,6 +846,9 @@ pgoutput_truncate(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> Oid *relids;\n> TransactionId xid = InvalidTransactionId;\n>\n> + /* If not streaming, should have setup txndata as part of BEGIN/BEGIN PREPARE */\n> + Assert(in_streaming || txndata);\n>\n>\n> In the commit message, the way you write is below.\n> ...\n> skip BEGIN / COMMIT messages for transactions that are empty. 
The patch\n> ...\n>\n> In this case, we have spaces back and forth for \"BEGIN / COMMIT\".\n> Then, I suggest to unify all of those to show better alignment.\n\nfixed.\n\nregards,\nAjin Cherian", "msg_date": "Thu, 27 Jan 2022 23:56:41 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Jan 27, 2022 at 12:16 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, January 11, 2022 6:43 PM From: Ajin Cherian <itsajin@gmail.com> wrote:\n> > Minor update to rebase the patch so that it applies clean on HEAD.\n> Hi, let me share some additional comments on v16.\n>\n>\n> (1) comment of pgoutput_change\n>\n> @@ -630,11 +688,15 @@ pgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> Relation relation, ReorderBufferChange *change)\n> {\n> PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> MemoryContext old;\n> RelationSyncEntry *relentry;\n> TransactionId xid = InvalidTransactionId;\n> Relation ancestor = NULL;\n>\n> + /* If not streaming, should have setup txndata as part of BEGIN/BEGIN PREPARE */\n> + Assert(in_streaming || txndata);\n> +\n>\n> In my humble opinion, the comment should not touch BEGIN PREPARE,\n> because this patch's scope doesn't include two phase commit.\n> (We could add this in another patch to extend the scope after the commit ?)\n>\n\nWe have to include BEGIN PREPARE as well, as the txndata has to be\nsetup. 
Only difference is that we will not skip empty transaction in\nBEGIN PREPARE\n\n> This applies to pgoutput_truncate's comment.\n>\n> (2) \"keep alive\" should be \"keepalive\" in WalSndUpdateProgress\n>\n> /*\n> + * When skipping empty transactions in synchronous replication, we need\n> + * to send a keep alive to keep the MyWalSnd locations updated.\n> + */\n> + force_keepalive_syncrep = send_keepalive && SyncRepEnabled();\n> +\n>\n> Also, this applies to the comment for force_keepalive_syncrep.\n\nFixed.\n\n>\n> (3) Should finish the second sentence with period in the comment of pgoutput_message.\n>\n> @@ -845,6 +923,19 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> if (in_streaming)\n> xid = txn->xid;\n>\n> + /*\n> + * Output BEGIN if we haven't yet.\n> + * Avoid for streaming and non-transactional messages\n>\n\nFixed.\n\n> (4) \"begin\" can be changed to \"BEGIN\" in the comment of PGOutputTxnData definition.\n>\n> In the entire patch, when we express BEGIN message,\n> we use capital letters \"BEGIN\" except for one place.\n> We can apply the same to this place as well.\n>\n> +typedef struct PGOutputTxnData\n> +{\n> + bool sent_begin_txn; /* flag indicating whether begin has been sent */\n> +} PGOutputTxnData;\n> +\n>\n\nFixed.\n\n> (5) inconsistent way to write Assert statements with blank lines\n>\n> In the below case, it'd be better to insert one blank line\n> after the Assert();\n>\n> +static void\n> +pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> +{\n> bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n>\n> + Assert(txndata);\n> OutputPluginPrepareWrite(ctx, !send_replication_origin);\n>\n>\n\nFixed.\n\n> (6) new codes in the pgoutput_commit_txn looks messy slightly\n>\n> @@ -419,7 +455,25 @@ static void\n> pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> XLogRecPtr commit_lsn)\n> 
{\n> - OutputPluginUpdateProgress(ctx);\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> + bool skip;\n> +\n> + Assert(txndata);\n> +\n> + /*\n> + * If a BEGIN message was not yet sent, then it means there were no relevant\n> + * changes encountered, so we can skip the COMMIT message too.\n> + */\n> + skip = !txndata->sent_begin_txn;\n> + pfree(txndata);\n> + txn->output_plugin_private = NULL;\n> + OutputPluginUpdateProgress(ctx, skip);\n>\n> Could we conduct a refactoring for this new part ?\n> IMO, writing codes to free the data structure at the top\n> of function seems weird.\n>\n> One idea is to export some part there\n> and write a new function, something like below.\n>\n> static bool\n> txn_sent_begin(ReorderBufferTXN *txn)\n> {\n> PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> bool needs_skip;\n>\n> Assert(txndata);\n>\n> needs_skip = !txndata->sent_begin_txn;\n>\n> pfree(txndata);\n> txn->output_plugin_private = NULL;\n>\n> return needs_skip;\n> }\n>\n> FYI, I had a look at the v12-0002-Skip-empty-prepared-transactions-for-logical-rep.patch\n> for reference of pgoutput_rollback_prepared_txn and pgoutput_commit_prepared_txn.\n> Looks this kind of function might work for future extensions as well.\n> What did you think ?\n\nI changed a bit, but I'd hold a comprehensive rewrite when a future\npatch supports skipping\nempty transactions in two-phase transactions and streaming transactions.\n\nregards,\nAjin Cherian\n\n\n", "msg_date": "Fri, 28 Jan 2022 00:04:04 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thursday, January 27, 2022 9:57 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\nHi, thanks for your patch update.\r\n\r\n\r\n> On Wed, Jan 26, 2022 at 8:33 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, January 11, 2022 6:43 PM 
Ajin Cherian <itsajin@gmail.com>\r\n> wrote:\r\n> > (3) Is this patch's reponsibility to intialize the data in\r\n> pgoutput_begin_prepare_txn ?\r\n> >\r\n> > @@ -433,6 +487,8 @@ static void\r\n> > pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,\r\n> > ReorderBufferTXN *txn) {\r\n> > bool send_replication_origin = txn->origin_id !=\r\n> InvalidRepOriginId;\r\n> > + PGOutputTxnData *txndata =\r\n> MemoryContextAllocZero(ctx->context,\r\n> > +\r\n> > + sizeof(PGOutputTxnData));\r\n> >\r\n> > OutputPluginPrepareWrite(ctx, !send_replication_origin);\r\n> > logicalrep_write_begin_prepare(ctx->out, txn);\r\n> >\r\n> >\r\n> > Even if we need this initialization for either non streaming case or\r\n> > non two_phase case, there can be another issue.\r\n> > We don't free the allocated memory for this data, right ?\r\n> > There's only one place to use free in the entire patch, which is in\r\n> > the pgoutput_commit_txn(). So, corresponding free of memory looked\r\n> > necessary in the two phase commit functions.\r\n> >\r\n> \r\n> Actually it is required for begin_prepare to set the data type, so that the checks\r\n> in the pgoutput_change can make sure that the begin prepare is sent. 
I've also\r\n> added a free in commit_prepared code.\r\nOkay, but if we choose the design that this patch takes\r\ncare of the initialization in pgoutput_begin_prepare_txn(),\r\nwe need another free in pgoutput_rollback_prepared_txn().\r\nCould you please add some codes similar to pgoutput_commit_prepared_txn() to the same ?\r\nIf we simply execute rollback prepared for non streaming transaction,\r\nwe don't free it.\r\n\r\n\r\nSome other new minor comments.\r\n\r\n(a) can be \"synchronous replication\", instead of \"Synchronous Replication\"\r\n\r\nWhen we have a look at the syncrep.c, we use the former usually in\r\na normal comment.\r\n\r\n /*\r\n+ * Check if Synchronous Replication is enabled\r\n+ */\r\n\r\n(b) move below pgoutput_truncate two codes to the case where if nrelids > 0.\r\n\r\n@@ -770,6 +850,7 @@ pgoutput_truncate(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n int nrelations, Relation relations[], ReorderBufferChange *change)\r\n {\r\n PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\r\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n MemoryContext old;\r\n RelationSyncEntry *relentry;\r\n int i;\r\n@@ -777,6 +858,9 @@ pgoutput_truncate(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n Oid *relids;\r\n TransactionId xid = InvalidTransactionId;\r\n\r\n+ /* If not streaming, should have setup txndata as part of BEGIN/BEGIN PREPARE */\r\n+ Assert(in_streaming || txndata);\r\n+\r\n\r\n(c) fix indent with spaces (for the one sentence of SyncRepEnabled)\r\n\r\n@@ -539,6 +538,15 @@ SyncRepReleaseWaiters(void)\r\n }\r\n\r\n /*\r\n+ * Check if Synchronous Replication is enabled\r\n+ */\r\n+bool\r\n+SyncRepEnabled(void)\r\n+{\r\n+ return SyncRepRequested() && ((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined;\r\n+}\r\n+\r\n+/*\r\n\r\nThis can be detected by git am.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Sun, 30 Jan 2022 08:04:48 +0000", "msg_from": 
"\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Sun, Jan 30, 2022 at 7:04 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, January 27, 2022 9:57 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> Hi, thanks for your patch update.\n>\n>\n> > On Wed, Jan 26, 2022 at 8:33 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, January 11, 2022 6:43 PM Ajin Cherian <itsajin@gmail.com>\n> > wrote:\n> > > (3) Is this patch's reponsibility to intialize the data in\n> > pgoutput_begin_prepare_txn ?\n> > >\n> > > @@ -433,6 +487,8 @@ static void\n> > > pgoutput_begin_prepare_txn(LogicalDecodingContext *ctx,\n> > > ReorderBufferTXN *txn) {\n> > > bool send_replication_origin = txn->origin_id !=\n> > InvalidRepOriginId;\n> > > + PGOutputTxnData *txndata =\n> > MemoryContextAllocZero(ctx->context,\n> > > +\n> > > + sizeof(PGOutputTxnData));\n> > >\n> > > OutputPluginPrepareWrite(ctx, !send_replication_origin);\n> > > logicalrep_write_begin_prepare(ctx->out, txn);\n> > >\n> > >\n> > > Even if we need this initialization for either non streaming case or\n> > > non two_phase case, there can be another issue.\n> > > We don't free the allocated memory for this data, right ?\n> > > There's only one place to use free in the entire patch, which is in\n> > > the pgoutput_commit_txn(). So, corresponding free of memory looked\n> > > necessary in the two phase commit functions.\n> > >\n> >\n> > Actually it is required for begin_prepare to set the data type, so that the checks\n> > in the pgoutput_change can make sure that the begin prepare is sent. 
I've also\n> > added a free in commit_prepared code.\n> Okay, but if we choose the design that this patch takes\n> care of the initialization in pgoutput_begin_prepare_txn(),\n> we need another free in pgoutput_rollback_prepared_txn().\n> Could you please add some codes similar to pgoutput_commit_prepared_txn() to the same ?\n> If we simply execute rollback prepared for non streaming transaction,\n> we don't free it.\n>\n\nFixed.\n\n>\n> Some other new minor comments.\n>\n> (a) can be \"synchronous replication\", instead of \"Synchronous Replication\"\n>\n> When we have a look at the syncrep.c, we use the former usually in\n> a normal comment.\n>\n> /*\n> + * Check if Synchronous Replication is enabled\n> + */\n\nFixed.\n\n>\n> (b) move below pgoutput_truncate two codes to the case where if nrelids > 0.\n>\n> @@ -770,6 +850,7 @@ pgoutput_truncate(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> int nrelations, Relation relations[], ReorderBufferChange *change)\n> {\n> PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> MemoryContext old;\n> RelationSyncEntry *relentry;\n> int i;\n> @@ -777,6 +858,9 @@ pgoutput_truncate(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> Oid *relids;\n> TransactionId xid = InvalidTransactionId;\n>\n> + /* If not streaming, should have setup txndata as part of BEGIN/BEGIN PREPARE */\n> + Assert(in_streaming || txndata);\n> +\n>\n\nFixed.\n\n> (c) fix indent with spaces (for the one sentence of SyncRepEnabled)\n>\n> @@ -539,6 +538,15 @@ SyncRepReleaseWaiters(void)\n> }\n>\n> /*\n> + * Check if Synchronous Replication is enabled\n> + */\n> +bool\n> +SyncRepEnabled(void)\n> +{\n> + return SyncRepRequested() && ((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined;\n> +}\n> +\n> +/*\n>\n> This can be detected by git am.\n>\n\nFixed.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Mon, 31 Jan 2022 23:48:47 +1100", 
"msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "Hi,\r\n\r\n\r\nThank you for your updating the patch.\r\n\r\nI'll quote one of the past discussions\r\nin order to make this thread go forward or more active.\r\nOn Friday, August 13, 2021 8:01 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, Jul 23, 2021 at 3:39 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> > >\r\n> >\r\n> > Let's first split the patch for prepared and non-prepared cases as\r\n> > that will help to focus on each of them separately. BTW, why haven't\r\n> > you considered implementing point 1b as explained by Andres in his\r\n> > email [1]? I think we can send a keepalive message in case of\r\n> > synchronous replication when we skip an empty transaction, otherwise,\r\n> > it might delay in responding to transactions synchronous_commit mode.\r\n> > I think in the tests done in the thread, it might not have been shown\r\n> > because we are already sending keepalives too frequently. But what if\r\n> > someone disables wal_sender_timeout or kept it to a very large value?\r\n> > See WalSndKeepaliveIfNecessary. The other thing you might want to look\r\n> > at is if the reason for frequent keepalives is the same as described\r\n> > in the email [2].\r\n> >\r\n> \r\n> I have tried to address the comment here by modifying the\r\n> ctx->update_progress callback function (WalSndUpdateProgress) provided\r\n> for plugins. I have added an option\r\n> by which the callback can specify if it wants to send keep_alives. 
And when\r\n> the callback is called with that option set, walsender updates a flag\r\n> force_keep_alive_syncrep.\r\n> The Walsender in the WalSndWaitForWal for loop, checks this flag and if\r\n> synchronous replication is enabled, then sends a keep alive.\r\n> Currently this logic\r\n> is added as an else to the current logic that is already there in\r\n> WalSndWaitForWal, which is probably considered unnecessary and a source of\r\n> the keep alive flood that you talked about. So, I can change that according to\r\n> how that fix shapes up there. I have also added an extern function in syncrep.c\r\n> that makes it possible for walsender to query if synchronous replication is\r\n> turned on.\r\nChanging the timing to send the keepalive to the decoding commit\r\ntiming didn't look impossible to me, although my suggestion\r\ncan be ad-hoc.\r\n\r\nAfter the initialization of sentPtr(by confirmed_flush lsn),\r\nsentPtr is updated from logical_decoding_ctx->reader->EndRecPtr in XLogSendLogical.\r\nIn the XLogSendLogical, we update it after we execute LogicalDecodingProcessRecord.\r\nThis order leads to the current implementation to wait the next iteration\r\nto send a keepalive in WalSndWaitForWal.\r\n\r\nBut, I felt we can utilize end_lsn passed to ReorderBufferCommit for updating\r\nsentPtr. 
The end_lsn is the lsn same as the ctx->reader->EndRecPtr,\r\nwhich means advancing the timing to update the sentPtr for the commit case.\r\nThen if the transaction is empty in synchronous mode,\r\nsend the keepalive in WalSndUpdateProgress directly,\r\ninstead of having the force_keepalive_syncrep flag and having it true.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Mon, 7 Feb 2022 23:57:02 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "Hi\r\n\r\n\r\nI'll quote one other remaining discussion of this thread again\r\nto invoke more attentions from the community.\r\nOn Friday, August 13, 2021 8:01 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > Few other miscellaneous comments:\r\n> > 1.\r\n> > static void\r\n> > pgoutput_commit_prepared_txn(LogicalDecodingContext *ctx,\r\n> > ReorderBufferTXN *txn,\r\n> > - XLogRecPtr commit_lsn)\r\n> > + XLogRecPtr commit_lsn, XLogRecPtr prepare_end_lsn, TimestampTz\r\n> > + prepare_time)\r\n> > {\r\n> > + PGOutputTxnData *txndata = (PGOutputTxnData *)\r\n> txn->output_plugin_private;\r\n> > +\r\n> > OutputPluginUpdateProgress(ctx);\r\n> >\r\n> > + /*\r\n> > + * If the BEGIN PREPARE was not yet sent, then it means there were no\r\n> > + * relevant changes encountered, so we can skip the COMMIT PREPARED\r\n> > + * message too.\r\n> > + */\r\n> > + if (txndata)\r\n> > + {\r\n> > + bool skip = !txndata->sent_begin_txn; pfree(txndata);\r\n> > + txn->output_plugin_private = NULL;\r\n> >\r\n> > How is this supposed to work after the restart when prepared is sent\r\n> > before the restart and we are just sending commit_prepared after\r\n> > restart? Won't this lead to sending commit_prepared even when the\r\n> > corresponding prepare is not sent? 
Can we think of a better way to\r\n> > deal with this?\r\n> >\r\n> \r\n> I have tried to resolve this by adding logic in worker.c to silently ignore spurious\r\n> commit_prepareds. But this change required checking if the prepare exists on\r\n> the subscriber before attempting the commit_prepared but the current API that\r\n> checks this requires prepare time and transaction end_lsn. But for this I had to\r\n> change the protocol of commit_prepared, and I understand that this would\r\n> break backward compatibility between subscriber and publisher (you have\r\n> raised this issue as well).\r\n> I am not sure how else to handle this, let me know if you have any other ideas.\r\nI feel if we don't want to change the protocol of commit_prepared,\r\nwe need to make the publisher solely judge whether the prepare was empty or not,\r\nafter the restart.\r\n\r\nOne idea I thought at the beginning was to utilize and apply\r\nthe existing mechanism to spill ReorderBufferSerializeTXN object to local disk,\r\nby postponing the prepare txn object cleanup and when the walsender exits\r\nand commit prepared didn't come, spilling the transaction's data,\r\nthen restoring it after the restart in the DecodePrepare.\r\nHowever, this idea wasn't crash-safe fundamentally. It means,\r\nif the publisher crashes before spilling the empty prepare transaction,\r\nwe fail to detect the prepare was empty and come down to send the commit_prepared\r\nin the situation where the subscriber didn't get the prepare data again.\r\nSo, I concluded that utilizing the spill mechanism wouldn't work for this purpose.\r\n\r\nAnother idea would be, to create an empty file under the pg_replslot/slotname\r\nwith a prefix different from \"xid\" in the DecodePrepare before the shutdown\r\nif the prepare was empty, and bypass the cleanup of the serialized txns\r\nand check the existence after the restart. 
But, this is pretty ad-hoc and I wasn't sure\r\nwhether addressing the corner case of the restart has strong enough justification\r\nto create this new file format.\r\n\r\nTherefore, in my humble opinion, the idea of protocol change slightly wins,\r\nsince the impact of the protocol change would not be big. We introduced\r\nthe protocol version 3 in the devel version and the number of users should be small.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 16 Feb 2022 03:15:20 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Mon, Jan 31, 2022 at 6:18 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n\nFew comments:\n=============\n1. Is there any particular reason why the patch is not skipping empty xacts\nfor streaming (in-progress) transactions as noted in the commit\nmessage as well?\n\n2.\n+static void\n+pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n+{\n bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n+\n+ Assert(txndata);\n\nI think here you can add an assert for sent_begin_txn to be always false?\n\n3.\n+/*\n+ * Send BEGIN.\n+ * This is where the BEGIN is actually sent. This is called\n+ * while processing the first change of the transaction.\n+ */\n\nHave an empty line between the first two lines to ensure consistency\nwith nearby comments. Also, the formatting of these lines appears\nawkward, either run pgindent or make sure lines are not too short.\n\n4. Do we really need to make any changes in PREPARE\ntransaction-related functions if we can't skip in that case? 
I think you\ncan have a check if the output plugin private variable is not set then\nignore special optimization for sending begin.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Feb 2022 16:12:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Feb 16, 2022 at 8:45 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n\n[ideas to skip empty prepare/commit_prepare ....]\n\n>\n> I feel if we don't want to change the protocol of commit_prepared,\n> we need to make the publisher solely judge whether the prepare was empty or not,\n> after the restart.\n>\n> One idea I thought at the beginning was to utilize and apply\n> the existing mechanism to spill ReorderBufferSerializeTXN object to local disk,\n> by postponing the prepare txn object cleanup and when the walsender exits\n> and commit prepared didn't come, spilling the transaction's data,\n> then restoring it after the restart in the DecodePrepare.\n> However, this idea wasn't crash-safe fundamentally. It means,\n> if the publisher crashes before spilling the empty prepare transaction,\n> we fail to detect the prepare was empty and come down to send the commit_prepared\n> in the situation where the subscriber didn't get the prepare data again.\n> So, I thought to utilize the spill mechanism didn't work for this purpose.\n>\n> Another idea would be, to create an empty file under the the pg_replslot/slotname\n> with a prefix different from \"xid\" in the DecodePrepare before the shutdown\n> if the prepare was empty, and bypass the cleanup of the serialized txns\n> and check the existence after the restart. 
But, this is pretty ad-hoc and I wasn't sure\n> if to address the corner case of the restart has the strong enough justification\n> to create this new file format.\n>\n\nI think for this idea to work you need to create such an empty file\neach time we skip empty prepare as the system might crash after\nprepare and we won't get time to create such a file. I don't think it\nis advisable to do I/O to save the network message.\n\n> Therefore, in my humble opinion, the idea of protocol change slightly wins,\n> since the impact of the protocol change would not be big. We introduced\n> the protocol version 3 in the devel version and the number of users should be little.\n>\n\nThere is also the cost of the additional check (whether prepared xact\nexists) at the time of processing each commit prepared message. I\nthink if we want to go in this direction then it is better to do it\nvia a subscription parameter (say skip_empty_prepare_xact or something\nlike that) so that we can pay the additional cost of such a check\nconditionally when such a parameter is set by the user. I feel for now\nwe can document in comments why we can't skip empty prepared\ntransactions and maybe as an idea(s) worth exploring to implement the\nsame. OTOH, if multiple agree on such a solution we can even try to\nimplement it and see if that works.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Feb 2022 16:53:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Feb 16, 2022 at 2:15 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n> Another idea would be, to create an empty file under the the pg_replslot/slotname\n> with a prefix different from \"xid\" in the DecodePrepare before the shutdown\n> if the prepare was empty, and bypass the cleanup of the serialized txns\n> and check the existence after the restart. 
But, this is pretty ad-hoc and I wasn't sure\n> if to address the corner case of the restart has the strong enough justification\n> to create this new file format.\n>\n\nYes, this doesn't look very efficient.\n\n> Therefore, in my humble opinion, the idea of protocol change slightly wins,\n> since the impact of the protocol change would not be big. We introduced\n> the protocol version 3 in the devel version and the number of users should be little.\n\nYes, but we don't want to break backward compatibility for this small\nadded optimization.\n\nAmit,\n\nI will work on your comments.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Fri, 18 Feb 2022 19:21:49 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Feb 17, 2022 at 4:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 6:18 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n>\n> Few comments:\n> =============\n>\n\nOne more comment:\n@@ -1546,10 +1557,11 @@ WalSndWaitForWal(XLogRecPtr loc)\n * otherwise idle, this keepalive will trigger a reply. Processing the\n * reply will update these MyWalSnd locations.\n */\n- if (MyWalSnd->flush < sentPtr &&\n+ if (force_keepalive_syncrep ||\n+ (MyWalSnd->flush < sentPtr &&\n MyWalSnd->write < sentPtr &&\n- !waiting_for_ping_response)\n- WalSndKeepalive(false);\n+ !waiting_for_ping_response))\n+ WalSndKeepalive(false);\n\nWill this allow syncrep to proceed in case we are skipping the\ntransaction? Won't we need to send a feedback message with\n'requestReply' true in this case as we release syncrep waiters while\nprocessing standby message, see\nProcessStandbyReplyMessage->SyncRepReleaseWaiters. Without\n'requestReply', the subscriber might not send any message and the\nsyncrep won't proceed. Why do you decide to delay sending this message\ntill WalSndWaitForWal()? 
It may not be called for each transaction.\n\nI feel we should try to devise a test case to test this sync\nreplication mechanism such that without this particular change the\nsync rep transaction waits momentarily but with this change it doesn't\nwait. I am not entirely sure whether we can devise an automated test\nas this is a timing related issue but I guess we can at least manually\ntry to produce a case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Feb 2022 14:40:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tue, Feb 8, 2022 at 5:27 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, August 13, 2021 8:01 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> Changing the timing to send the keepalive to the decoding commit\n> timing didn't look impossible to me, although my suggestion\n> can be ad-hoc.\n>\n> After the initialization of sentPtr(by confirmed_flush lsn),\n> sentPtr is updated from logical_decoding_ctx->reader->EndRecPtr in XLogSendLogical.\n> In the XLogSendLogical, we update it after we execute LogicalDecodingProcessRecord.\n> This order leads to the current implementation to wait the next iteration\n> to send a keepalive in WalSndWaitForWal.\n>\n> But, I felt we can utilize end_lsn passed to ReorderBufferCommit for updating\n> sentPtr. 
The end_lsn is the lsn same as the ctx->reader->EndRecPtr,\n> which means advancing the timing to update the sentPtr for the commit case.\n> Then if the transaction is empty in synchronous mode,\n> send the keepalive in WalSndUpdateProgress directly,\n> instead of having the force_keepalive_syncrep flag and having it true.\n>\n\nYou have a point in that we don't need to delay sending this message\ntill next WalSndWaitForWal() but I don't see why we need to change\nanything about update of sentPtr.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Feb 2022 14:47:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Friday, February 18, 2022 6:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Feb 8, 2022 at 5:27 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, August 13, 2021 8:01 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> > > On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > Changing the timing to send the keepalive to the decoding commit\r\n> > timing didn't look impossible to me, although my suggestion can be\r\n> > ad-hoc.\r\n> >\r\n> > After the initialization of sentPtr(by confirmed_flush lsn), sentPtr\r\n> > is updated from logical_decoding_ctx->reader->EndRecPtr in\r\n> XLogSendLogical.\r\n> > In the XLogSendLogical, we update it after we execute\r\n> LogicalDecodingProcessRecord.\r\n> > This order leads to the current implementation to wait the next\r\n> > iteration to send a keepalive in WalSndWaitForWal.\r\n> >\r\n> > But, I felt we can utilize end_lsn passed to ReorderBufferCommit for\r\n> > updating sentPtr. 
The end_lsn is the lsn same as the\r\n> > ctx->reader->EndRecPtr, which means advancing the timing to update the\r\n> sentPtr for the commit case.\r\n> > Then if the transaction is empty in synchronous mode, send the\r\n> > keepalive in WalSndUpdateProgress directly, instead of having the\r\n> > force_keepalive_syncrep flag and having it true.\r\n> >\r\n> \r\n> You have a point in that we don't need to delay sending this message till next\r\n> WalSndWaitForWal() but I don't see why we need to change anything about\r\n> update of sentPtr.\r\nYeah, you're right.\r\nNow I think we don't need the update of sentPtr to send a keepalive.\r\n\r\nI thought we can send a keepalive message\r\nafter its update in XLogSendLogical or any appropriate place for it after the existing update.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 18 Feb 2022 09:36:02 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Fri, Feb 18, 2022 at 3:06 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, February 18, 2022 6:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Feb 8, 2022 at 5:27 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Friday, August 13, 2021 8:01 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > > > On Mon, Aug 2, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com>\n> > > > wrote:\n> > > Changing the timing to send the keepalive to the decoding commit\n> > > timing didn't look impossible to me, although my suggestion can be\n> > > ad-hoc.\n> > >\n> > > After the initialization of sentPtr(by confirmed_flush lsn), sentPtr\n> > > is updated from logical_decoding_ctx->reader->EndRecPtr in\n> > XLogSendLogical.\n> > > In the XLogSendLogical, we update it after we execute\n> > LogicalDecodingProcessRecord.\n> > > This order leads to 
the current implementation to wait the next\n> > > iteration to send a keepalive in WalSndWaitForWal.\n> > >\n> > > But, I felt we can utilize end_lsn passed to ReorderBufferCommit for\n> > > updating sentPtr. The end_lsn is the lsn same as the\n> > > ctx->reader->EndRecPtr, which means advancing the timing to update the\n> > sentPtr for the commit case.\n> > > Then if the transaction is empty in synchronous mode, send the\n> > > keepalive in WalSndUpdateProgress directly, instead of having the\n> > > force_keepalive_syncrep flag and having it true.\n> > >\n> >\n> > You have a point in that we don't need to delay sending this message till next\n> > WalSndWaitForWal() but I don't see why we need to change anything about\n> > update of sentPtr.\n> Yeah, you're right.\n> Now I think we don't need the update of sentPtr to send a keepalive.\n>\n> I thought we can send a keepalive message\n> after its update in XLogSendLogical or any appropriate place for it after the existing update.\n>\n\nYeah, I think there could be multiple ways (a) We can send such a keep\nalive in WalSndUpdateProgress() itself by using ctx->write_location.\nFor this, we need to modify WalSndKeepalive() to take sentPtr as\ninput. 
(b) set some flag in WalSndUpdateProgress() and then do it\nsomewhere in WalSndLoop probably in WalSndKeepaliveIfNecessary, or\nmaybe there is another better way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Feb 2022 15:57:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "FYI - the latest v18 patch no longer applies due to a recent push [1].\n\n------\n[1] https://github.com/postgres/postgres/commit/52e4f0cd472d39d07732b99559989ea3b615be78\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 22 Feb 2022 15:19:01 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Feb 17, 2022 at 9:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 6:18 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n>\n> Few comments:\n> =============\n> 1. Is there any particular why the patch is not skipping empty xacts\n> for streaming (in-progress) transactions as noted in the commit\n> message as well?\n>\n\nI have added support for skipping streaming transaction.\n\n> 2.\n> +static void\n> +pgoutput_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> +{\n> bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> + Assert(txndata);\n>\n> I think here you can add an assert for sent_begin_txn to be always false?\n>\n\nAdded.\n\n> 3.\n> +/*\n> + * Send BEGIN.\n> + * This is where the BEGIN is actually sent. This is called\n> + * while processing the first change of the transaction.\n> + */\n>\n> Have an empty line between the first two lines to ensure consistency\n> with nearby comments. 
Also, the formatting of these lines appears\n> awkward, either run pgindent or make sure lines are not too short.\n>\n\nChanged.\n\n> 4. Do we really need to make any changes in PREPARE\n> transaction-related functions if can't skip in that case? I think you\n> can have a check if the output plugin private variable is not set then\n> ignore special optimization for sending begin.\n>\n\nI have modified this as well.\n\nI have also rebased the patch after it did not apply due to a new commit.\n\nI will next work on testing and improving the keepalive logic while\nskipping transactions.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Wed, 23 Feb 2022 13:58:14 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Feb, Wed 23, 2022 at 10:58 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n>\r\nFew comments to V19-0001:\r\n\r\n1. I think we should adjust the alignment format.\r\ngit am ../v19-0001-Skip-empty-transactions-for-logical-replication.patch\r\n.git/rebase-apply/patch:197: indent with spaces.\r\n * Before we send schema, make sure that STREAM START/BEGIN/BEGIN PREPARE\r\n.git/rebase-apply/patch:198: indent with spaces.\r\n * is sent. If not, send now.\r\n.git/rebase-apply/patch:199: indent with spaces.\r\n */\r\n.git/rebase-apply/patch:201: indent with spaces.\r\n pgoutput_send_stream_start(ctx, toptxn);\r\n.git/rebase-apply/patch:204: indent with spaces.\r\n pgoutput_begin(ctx, toptxn);\r\nwarning: 5 lines add whitespace errors.\r\n\r\n2. 
Structure member initialization.\r\n static void\r\n pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\r\n {\r\n+\tPGOutputTxnData *txndata = MemoryContextAllocZero(ctx->context,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t sizeof(PGOutputTxnData));\r\n+\r\n+\ttxndata->sent_begin_txn = false;\r\n+\ttxn->output_plugin_private = txndata;\r\n+}\r\nDo we need to set sent_stream_start and sent_any_stream to false here?\r\n\r\n3. Maybe we should add Assert(txndata) like function pgoutput_commit_txn in\r\nother functions.\r\n\r\n4. In addition, I think we should keep a unified style.\r\na). log style (maybe first one is better.)\r\nFirst style : \"Skipping replication of an empty transaction in XXX\"\r\nSecond style : \"skipping replication of an empty transaction\"\r\nb) flag name (maybe second one is better.)\r\nFirst style : variable \"sent_begin_txn\" in function pgoutput_stream_*.\r\nSecond style : variable \"skip\" in function pgoutput_commit_txn.\r\n\r\n\r\nRegards,\r\nWang wei\r\n", "msg_date": "Wed, 23 Feb 2022 06:24:28 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "Hi. Here are my review comments for the v19 patch.\n\n======\n\n1. Commit message\n\nThe current logical replication behavior is to send every transaction to\nsubscriber even though the transaction is empty (because it does not\ncontain changes from the selected publications).\n\nSUGGESTION\n\"to subscriber even though\" --> \"to the subscriber even if\"\n\n~~~\n\n2. Commit message\n\nThis patch addresses the above problem by postponing the BEGIN message\nuntil the first change. While processing a COMMIT message,\nif there is no other change for that transaction,\ndo not send COMMIT message. 
It means that pgoutput will\nskip BEGIN/COMMIT messages for transactions that are empty.\n\nSUGGESTION\n\"if there is\" --> \"if there was\"\n\"do not send COMMIT message\" --> \"do not send the COMMIT message\"\n\"It means that pgoutput\" --> \"This means that pgoutput\"\n\n~~~\n\n3. Commit message\n\nShouldn't there be some similar description about using a lazy send\nmechanism for STREAM START?\n\n~~~\n\n4. src/backend/replication/pgoutput/pgoutput.c - typedef struct PGOutputTxnData\n\n+/*\n+ * Maintain a per-transaction level variable to track whether the\n+ * transaction has sent BEGIN. BEGIN is only sent when the first\n+ * change in a transaction is processed. This makes it possible\n+ * to skip transactions that are empty.\n+ */\n+typedef struct PGOutputTxnData\n+{\n+ bool sent_begin_txn; /* flag indicating whether BEGIN has been sent */\n+ bool sent_stream_start; /* flag indicating if stream start has been sent */\n+ bool sent_any_stream; /* flag indicating if any stream has been sent */\n+} PGOutputTxnData;\n+\n\nThe struct comment looks stale because it doesn't mention anything\nabout the similar lazy send mechanism for STREAM_START.\n\n~~~\n\n5. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_txn\n\n static void\n pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n {\n+ PGOutputTxnData *txndata = MemoryContextAllocZero(ctx->context,\n+ sizeof(PGOutputTxnData));\n+\n+ txndata->sent_begin_txn = false;\n+ txn->output_plugin_private = txndata;\n+}\n\nYou don’t need to assign the other members 'sent_stream_start',\n'sent_any_stream' because you are doing MemoryContextAllocZero anyway,\nbut for the same reason you did not really need to assign the\n'sent_begin_txn' flag either.\n\nI guess for consistency maybe it is better to (a) set all of them or\n(b) set none of them. I prefer (b).\n\n~~~\n\n6. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin\n\nI feel the 'pgoutput_begin' function is not well named. 
It makes some\nof the code where they are called look quite confusing.\n\nFor streaming there is:\n1. pgoutput_stream_start (does not send)\n2. pgoutput_send_stream_start (does send)\nso it is very clear.\n\nOTOH there are\n3. pgoutput_begin_txn (does not send)\n4. pgoutput_begin (does send)\n\nFor consistency I think the 'pgoutput_begin' name should be changed to\ninclude \"send\" verb\n1. pgoutput_begin_txn (does not send)\n2. pgoutput_send_begin_txn (does send)\n\n~~~\n\n7. src/backend/replication/pgoutput/pgoutput.c - maybe_send_schema\n\n@@ -594,6 +663,20 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n if (schema_sent)\n return;\n\n+ /* set up txndata */\n+ txndata = toptxn->output_plugin_private;\n+\n+ /*\n+ * Before we send schema, make sure that STREAM START/BEGIN/BEGIN PREPARE\n+ * is sent. If not, send now.\n+ */\n+ if (in_streaming && !txndata->sent_stream_start)\n+ pgoutput_send_stream_start(ctx, toptxn);\n+ else if (txndata && !txndata->sent_begin_txn)\n+ {\n+ pgoutput_begin(ctx, toptxn);\n+ }\n+\n\nHow come the in_streaming case is not checking for a NULL txndata\nbefore referencing it? Even if it is OK to do that, some more comments\nor assertions might help for this piece of code.\n(Stop-Press: see later comments #9, #10)\n\n~~~\n\n8. src/backend/replication/pgoutput/pgoutput.c - maybe_send_schema\n\n@@ -594,6 +663,20 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n if (schema_sent)\n return;\n\n+ /* set up txndata */\n+ txndata = toptxn->output_plugin_private;\n+\n+ /*\n+ * Before we send schema, make sure that STREAM START/BEGIN/BEGIN PREPARE\n+ * is sent. If not, send now.\n+ */\n\nWhat part of this code is doing anything about \"BEGIN PREPARE\" ?\n\n~~~\n\n9. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_change\n\n@@ -1183,6 +1267,15 @@ pgoutput_change(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n Assert(false);\n }\n\n+ /* If streaming, send STREAM START if we haven't yet */\n+ if (in_streaming && (txndata && !txndata->sent_stream_start))\n+ pgoutput_send_stream_start(ctx, txn);\n+ /*\n+ * Output BEGIN if we haven't yet, unless streaming.\n+ */\n+ else if (!in_streaming && (txndata && !txndata->sent_begin_txn))\n+ pgoutput_begin(ctx, txn);\n+\n\nThe above code fragment looks more like what I was expecting should\nbe in 'maybe_send_schema'.\n\nIf you expand it out (and tweak the comments) it can become much less\ncomplex looking, IMO.\n\ne.g.\n\nif (in_streaming)\n{\n/* If streaming, send STREAM START if we haven't yet */\nif (txndata && !txndata->sent_stream_start)\npgoutput_send_stream_start(ctx, txn);\n}\nelse\n{\n/* If not streaming, send BEGIN if we haven't yet */\nif (txndata && !txndata->sent_begin_txn)\npgoutput_begin(ctx, txn);\n}\n\nAlso, IIUC for the 'in_streaming' case you can Assert(txndata); so\nthen the code can be made even simpler.\n\n~~~\n\n10. src/backend/replication/pgoutput/pgoutput.c - pgoutput_truncate\n\n@@ -1397,6 +1491,17 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n\n if (nrelids > 0)\n {\n+ txndata = (PGOutputTxnData *) txn->output_plugin_private;\n+\n+ /* If streaming, send STREAM START if we haven't yet */\n+ if (in_streaming && (txndata && !txndata->sent_stream_start))\n+ pgoutput_send_stream_start(ctx, txn);\n+ /*\n+ * output BEGIN if we haven't yet, unless streaming.\n+ */\n+ else if (!in_streaming && (txndata && !txndata->sent_begin_txn))\n+ pgoutput_begin(ctx, txn);\n\nSo now I have seen almost identical code repeated in 3 places so I am\nbeginning to think these should just be encapsulated in some common\nfunction to call to do the deferred \"send\". Thoughts?\n\n~~~\n\n11. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_message\n\n@@ -1429,6 +1534,24 @@ pgoutput_message(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n if (in_streaming)\n xid = txn->xid;\n\n+ /*\n+ * Output BEGIN if we haven't yet.\n+ * Avoid for streaming and non-transactional messages.\n+ */\n+ if (in_streaming || transactional)\n+ {\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n+\n+ /* If streaming, send STREAM START if we haven't yet */\n+ if (in_streaming && (txndata && !txndata->sent_stream_start))\n+ pgoutput_send_stream_start(ctx, txn);\n+ else if (transactional)\n+ {\n+ if (txndata && !txndata->sent_begin_txn)\n+ pgoutput_begin(ctx, txn);\n+ }\n+ }\n\nDoes that comment at the top of that code fragment accurately match\nthis code? It seemed a bit muddled/stale to me.\n\n~~~\n\n12. src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_start\n\n /*\n+ * Don't actually send stream start here, instead set a flag that indicates\n+ * that stream start hasn't been sent and wait for the first actual change\n+ * for this stream to be sent and then send stream start. This is done\n+ * to avoid sending empty streams without any changes.\n+ */\n+ if (txndata == NULL)\n+ {\n+ txndata =\n+ MemoryContextAllocZero(ctx->context, sizeof(PGOutputTxnData));\n+ txndata->sent_begin_txn = false;\n+ txndata->sent_any_stream = false;\n+ txn->output_plugin_private = txndata;\n+ }\n\nIMO there is no need to set the members – just let the\nMemoryContextAllocZero take care of all that. Then the code is simpler\nand it also saves wondering if anything was accidentally missed.\n\n~~~\n\n13. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_send_stream_start\n\n+pgoutput_send_stream_start(struct LogicalDecodingContext *ctx,\n+ ReorderBufferTXN *txn)\n+{\n+ bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n+\n+\n+ /*\n * If we already sent the first stream for this transaction then don't\n * send the origin id in the subsequent streams.\n */\n- if (rbtxn_is_streamed(txn))\n+ if (txndata->sent_any_stream)\n send_replication_origin = false;\n\nGiven this usage, I wonder if there is a better name for the txndata\nmember - e.g. 'sent_first_stream' ?\n\n~~~\n\n14. src/backend/replication/pgoutput/pgoutput.c - pgoutput_send_stream_start\n\n- /* we're streaming a chunk of transaction now */\n- in_streaming = true;\n+ /*\n+ * Set the flags that indicate that changes were sent as part of\n+ * the transaction and the stream.\n+ */\n+ txndata->sent_begin_txn = txndata->sent_stream_start = true;\n+ txndata->sent_any_stream = true;\n\nWhy is this setting member 'sent_begin_txn' true also? It seems odd to\nsay so because the BEGIN was not actually sent at all, right?\n\n~~~\n\n15. src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_abort\n\n@@ -1572,6 +1740,20 @@ pgoutput_stream_abort(struct LogicalDecodingContext *ctx,\n\n /* determine the toplevel transaction */\n toptxn = (txn->toptxn) ? txn->toptxn : txn;\n+ txndata = toptxn->output_plugin_private;\n+ sent_begin_txn = txndata->sent_begin_txn;\n+\n+ if (txn->toptxn == NULL)\n+ {\n+ pfree(txndata);\n+ txn->output_plugin_private = NULL;\n+ }\n+\n+ if (!sent_begin_txn)\n+ {\n+ elog(DEBUG1, \"Skipping replication of an empty transaction in stream abort\");\n+ return;\n+ }\n\nI didn't really understand why this code is checking the\n'sent_begin_txn' member instead of the 'sent_stream_start' member?\n\n~~~\n\n16. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_commit\n\n@@ -1598,7 +1782,17 @@ pgoutput_stream_commit(struct\nLogicalDecodingContext *ctx,\n Assert(!in_streaming);\n Assert(rbtxn_is_streamed(txn));\n\n- OutputPluginUpdateProgress(ctx);\n+ pfree(txndata);\n+ txn->output_plugin_private = NULL;\n+\n+ /* If no changes were part of this transaction then drop the commit */\n+ if (!sent_begin_txn)\n+ {\n+ elog(DEBUG1, \"Skipping replication of an empty transaction in stream commit\");\n+ return;\n+ }\n\n(Same as previous comment #15). I didn't really understand why this\ncode is checking the 'sent_begin_txn' member instead of the\n'sent_stream_start' member?\n\n~~~\n\n17. src/backend/replication/syncrep.c - SyncRepEnabled\n\n@@ -539,6 +538,15 @@ SyncRepReleaseWaiters(void)\n }\n\n /*\n+ * Check if synchronous replication is enabled\n+ */\n+bool\n+SyncRepEnabled(void)\n+{\n+ return SyncRepRequested() && ((volatile WalSndCtlData *)\nWalSndCtl)->sync_standbys_defined;\n+}\n\nThat code was once inline in 'SyncRepWaitForLSN' before it was turned\ninto a function, and there is a long comment in SyncRepWaitForLSN\ndescribing the risks of this logic. e.g.\n\n<quote>\n... If it's true, we need to check it again\n* later while holding the lock, to check the flag and operate the sync\n* rep queue atomically. This is necessary to avoid the race condition\n* described in SyncRepUpdateSyncStandbysDefined().\n</quote>\n\nThis same function is now called from walsender.c. I think maybe it is\nOK but please confirm it.\n\nAnyway, the point is maybe this SyncRepEnabled function should be\nbetter commented to make some reference about the race concerns of the\noriginal comment. 
Otherwise some future caller of this function may be\nunaware of it and come to grief.\n\n-------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 25 Feb 2022 21:16:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Feb 18, 2022 at 9:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> Yeah, I think there could be multiple ways (a) We can send such a keep\n> alive in WalSndUpdateProgress() itself by using ctx->write_location.\n> For this, we need to modify WalSndKeepalive() to take sentPtr as\n> input. (b) set some flag in WalSndUpdateProgress() and then do it\n> somewhere in WalSndLoop probably in WalSndKeepaliveIfNecessary, or\n> maybe there is another better way.\n>\n\nThanks for the suggestion Amit and Osumi-san, I experimented with both\nthe suggestions but finally decided to use\n (a)Modifying WalSndKeepalive() to take an LSN optionally as input and\npassed in the ctx->write_location.\n\nI also verified that if I block the WalSndKeepalive() in\nWalSndWaitForWal, then my new code sends the keepalive\nwhen skipping transactions and the syncrep gets back feedback..\n\nI will address comments from Peter and Wang in my next patch update.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 25 Feb 2022 22:19:02 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Feb 25, 2022 at 9:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi. Here are my review comments for the v19 patch.\n>\n> ======\n>\n> 1. 
Commit message\n>\n> The current logical replication behavior is to send every transaction to\n> subscriber even though the transaction is empty (because it does not\n> contain changes from the selected publications).\n>\n> SUGGESTION\n> \"to subscriber even though\" --> \"to the subscriber even if\"\n\nFixed.\n\n>\n> ~~~\n>\n> 2. Commit message\n>\n> This patch addresses the above problem by postponing the BEGIN message\n> until the first change. While processing a COMMIT message,\n> if there is no other change for that transaction,\n> do not send COMMIT message. It means that pgoutput will\n> skip BEGIN/COMMIT messages for transactions that are empty.\n>\n> SUGGESTION\n> \"if there is\" --> \"if there was\"\n> \"do not send COMMIT message\" --> \"do not send the COMMIT message\"\n> \"It means that pgoutput\" --> \"This means that pgoutput\"\n>\n> ~~~\n\nFixed.\n\n>\n> 3. Commit message\n>\n> Shouldn't there be some similar description about using a lazy send\n> mechanism for STREAM START?\n>\n> ~~~\n\nAdded.\n\n>\n> 4. src/backend/replication/pgoutput/pgoutput.c - typedef struct PGOutputTxnData\n>\n> +/*\n> + * Maintain a per-transaction level variable to track whether the\n> + * transaction has sent BEGIN. BEGIN is only sent when the first\n> + * change in a transaction is processed. This makes it possible\n> + * to skip transactions that are empty.\n> + */\n> +typedef struct PGOutputTxnData\n> +{\n> + bool sent_begin_txn; /* flag indicating whether BEGIN has been sent */\n> + bool sent_stream_start; /* flag indicating if stream start has been sent */\n> + bool sent_any_stream; /* flag indicating if any stream has been sent */\n> +} PGOutputTxnData;\n> +\n>\n> The struct comment looks stale because it doesn't mention anything\n> about the similar lazy send mechanism for STREAM_START.\n>\n> ~~~\n\nAdded.\n\n>\n> 5. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin_txn\n>\n> static void\n> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> + PGOutputTxnData *txndata = MemoryContextAllocZero(ctx->context,\n> + sizeof(PGOutputTxnData));\n> +\n> + txndata->sent_begin_txn = false;\n> + txn->output_plugin_private = txndata;\n> +}\n>\n> You don’t need to assign the other members 'sent_stream_start',\n> 'sent_any_stream' because you are doing MemoryContextAllocZero anyway,\n> but for the same reason you did not really need to assign the\n> 'sent_begin_txn' flag either.\n>\n> I guess for consistency maybe it is better to (a) set all of them or\n> (b) set none of them. I prefer (b).\n>\n> ~~~\n\nDid (b)\n\n\n>\n> 6. src/backend/replication/pgoutput/pgoutput.c - pgoutput_begin\n>\n> I feel the 'pgoutput_begin' function is not well named. It makes some\n> of the code where they are called look quite confusing.\n>\n> For streaming there is:\n> 1. pgoutput_stream_start (does not send)\n> 2. pgoutput_send_stream_start (does send)\n> so it is very clear.\n>\n> OTOH there are\n> 3. pgoutput_begin_txn (does not send)\n> 4. pgoutput_begin (does send)\n>\n> For consistency I think the 'pgoutput_begin' name should be changed to\n> include \"send\" verb\n> 1. pgoutput_begin_txn (does not send)\n> 2. pgoutput_send_begin_txn (does send)\n>\n> ~~~\n\nChanged as mentioned.\n\n>\n> 7. src/backend/replication/pgoutput/pgoutput.c - maybe_send_schema\n>\n> @@ -594,6 +663,20 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n> if (schema_sent)\n> return;\n>\n> + /* set up txndata */\n> + txndata = toptxn->output_plugin_private;\n> +\n> + /*\n> + * Before we send schema, make sure that STREAM START/BEGIN/BEGIN PREPARE\n> + * is sent. 
If not, send now.\n> + */\n> + if (in_streaming && !txndata->sent_stream_start)\n> + pgoutput_send_stream_start(ctx, toptxn);\n> + else if (txndata && !txndata->sent_begin_txn)\n> + {\n> + pgoutput_begin(ctx, toptxn);\n> + }\n> +\n>\n> How come the in_streaming case is not checking for a NULL txndata\n> before referencing it? Even if it is OK to do that, some more comments\n> or assertions might help for this piece of code.\n> (Stop-Press: see later comments #9, #10)\n>\n> ~~~\n\nUpdated.\n\n>\n> 8. src/backend/replication/pgoutput/pgoutput.c - maybe_send_schema\n>\n> @@ -594,6 +663,20 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n> if (schema_sent)\n> return;\n>\n> + /* set up txndata */\n> + txndata = toptxn->output_plugin_private;\n> +\n> + /*\n> + * Before we send schema, make sure that STREAM START/BEGIN/BEGIN PREPARE\n> + * is sent. If not, send now.\n> + */\n>\n> What part of this code is doing anything about \"BEGIN PREPARE\" ?\n>\n> ~~~\n\nRemoved that reference.\n\n>\n> 9. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_change\n>\n> @@ -1183,6 +1267,15 @@ pgoutput_change(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> Assert(false);\n> }\n>\n> + /* If streaming, send STREAM START if we haven't yet */\n> + if (in_streaming && (txndata && !txndata->sent_stream_start))\n> + pgoutput_send_stream_start(ctx, txn);\n> + /*\n> + * Output BEGIN if we haven't yet, unless streaming.\n> + */\n> + else if (!in_streaming && (txndata && !txndata->sent_begin_txn))\n> + pgoutput_begin(ctx, txn);\n> +\n>\n> The above code fragment looks more like what IU was expecting should\n> be in 'maybe_send_schema',\n>\n> If you expand it out (and tweak the comments) it can become much less\n> complex looking IMO\n>\n> e.g.\n>\n> if (in_streaming)\n> {\n> /* If streaming, send STREAM START if we haven't yet */\n> if (txndata && !txndata->sent_stream_start)\n> pgoutput_send_stream_start(ctx, txn);\n> }\n> else\n> {\n> /* If not streaming, send BEGIN if we haven't yet */\n> if (txndata && !txndata->sent_begin_txn)\n> pgoutput_begin(ctx, txn);\n> }\n>\n> Also, IIUC for the 'in_streaming' case you can Assert(txndata); so\n> then the code can be made even simpler.\n>\n\nChose your example.\n\n> ~~~\n>\n> 10. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_truncate\n>\n> @ -1397,6 +1491,17 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n>\n> if (nrelids > 0)\n> {\n> + txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> + /* If streaming, send STREAM START if we haven't yet */\n> + if (in_streaming && (txndata && !txndata->sent_stream_start))\n> + pgoutput_send_stream_start(ctx, txn);\n> + /*\n> + * output BEGIN if we haven't yet, unless streaming.\n> + */\n> + else if (!in_streaming && (txndata && !txndata->sent_begin_txn))\n> + pgoutput_begin(ctx, txn);\n>\n> So now I have seen almost identical code repeated in 3 places so I am\n> beginning to think these should just be encapsulated in some common\n> function to call to do the deferred \"send\". Thoughts?\n>\n> ~~~\n\nNot sure if we want to add a function call overhead.\n\n>\n> 11. src/backend/replication/pgoutput/pgoutput.c - pgoutput_message\n>\n> @@ -1429,6 +1534,24 @@ pgoutput_message(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> if (in_streaming)\n> xid = txn->xid;\n>\n> + /*\n> + * Output BEGIN if we haven't yet.\n> + * Avoid for streaming and non-transactional messages.\n> + */\n> + if (in_streaming || transactional)\n> + {\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> + /* If streaming, send STREAM START if we haven't yet */\n> + if (in_streaming && (txndata && !txndata->sent_stream_start))\n> + pgoutput_send_stream_start(ctx, txn);\n> + else if (transactional)\n> + {\n> + if (txndata && !txndata->sent_begin_txn)\n> + pgoutput_begin(ctx, txn);\n> + }\n> + }\n>\n> Does that comment at the top of that code fragment accurately match\n> this code? It seemed a bit muddled/stale to me.\n>\n> ~~~\n\nFixed.\n\n>\n> 12. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_start\n>\n> /*\n> + * Don't actually send stream start here, instead set a flag that indicates\n> + * that stream start hasn't been sent and wait for the first actual change\n> + * for this stream to be sent and then send stream start. This is done\n> + * to avoid sending empty streams without any changes.\n> + */\n> + if (txndata == NULL)\n> + {\n> + txndata =\n> + MemoryContextAllocZero(ctx->context, sizeof(PGOutputTxnData));\n> + txndata->sent_begin_txn = false;\n> + txndata->sent_any_stream = false;\n> + txn->output_plugin_private = txndata;\n> + }\n>\n> IMO there is no need to set the members – just let the\n> MemoryContextAllocZero take care of all that. Then the code is simpler\n> and it also saves wondering if anything was accidentally missed.\n>\n\nFixed.\n\n> ~~~\n>\n> 13. src/backend/replication/pgoutput/pgoutput.c - pgoutput_send_stream_start\n>\n> +pgoutput_send_stream_start(struct LogicalDecodingContext *ctx,\n> + ReorderBufferTXN *txn)\n> +{\n> + bool send_replication_origin = txn->origin_id != InvalidRepOriginId;\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> +\n> + /*\n> * If we already sent the first stream for this transaction then don't\n> * send the origin id in the subsequent streams.\n> */\n> - if (rbtxn_is_streamed(txn))\n> + if (txndata->sent_any_stream)\n> send_replication_origin = false;\n>\n> Given this usage, I wonder if there is a better name for the txndata\n> member - e.g. 'sent_first_stream' ?\n>\n> ~~~\n\nChanged.\n\n>\n> 14. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_send_stream_start\n>\n> - /* we're streaming a chunk of transaction now */\n> - in_streaming = true;\n> + /*\n> + * Set the flags that indicate that changes were sent as part of\n> + * the transaction and the stream.\n> + */\n> + txndata->sent_begin_txn = txndata->sent_stream_start = true;\n> + txndata->sent_any_stream = true;\n>\n> Why is this setting member 'sent_begin_txn' true also? It seems odd to\n> say so because the BEGIN was not actually sent at all, right?\n>\n> ~~~\n\nYou can have transactions that are partially streamed and partially\nnot. So if there\nis a transaction that started as streaming, but when it is committed,\nit is replicated\nas part of the commit, then when the changes are decoded, we shouldn't\nbe sending a \"begin\"\nagain.\n\n>\n> 15. src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_abort\n>\n> @@ -1572,6 +1740,20 @@ pgoutput_stream_abort(struct LogicalDecodingContext *ctx,\n>\n> /* determine the toplevel transaction */\n> toptxn = (txn->toptxn) ? txn->toptxn : txn;\n> + txndata = toptxn->output_plugin_private;\n> + sent_begin_txn = txndata->sent_begin_txn;\n> +\n> + if (txn->toptxn == NULL)\n> + {\n> + pfree(txndata);\n> + txn->output_plugin_private = NULL;\n> + }\n> +\n> + if (!sent_begin_txn)\n> + {\n> + elog(DEBUG1, \"Skipping replication of an empty transaction in stream abort\");\n> + return;\n> + }\n>\n> I didn't really understand why this code is checking the\n> 'sent_begin_txn' member instead of the 'sent_stream_start' member?\n>\n\nYes, changed this to check \"sent_first_stream\"\n> ~~~\n>\n> 16. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_stream_commit\n>\n> @@ -1598,7 +1782,17 @@ pgoutput_stream_commit(struct\n> LogicalDecodingContext *ctx,\n> Assert(!in_streaming);\n> Assert(rbtxn_is_streamed(txn));\n>\n> - OutputPluginUpdateProgress(ctx);\n> + pfree(txndata);\n> + txn->output_plugin_private = NULL;\n> +\n> + /* If no changes were part of this transaction then drop the commit */\n> + if (!sent_begin_txn)\n> + {\n> + elog(DEBUG1, \"Skipping replication of an empty transaction in stream commit\");\n> + return;\n> + }\n>\n> (Same as previous comment #15). I didn't really understand why this\n> code is checking the 'sent_begin_txn' member instead of the\n> 'sent_stream_start' member?\n>\n> ~~~\n\nChanged.\n\n>\n> 17. src/backend/replication/syncrep.c - SyncRepEnabled\n>\n> @@ -539,6 +538,15 @@ SyncRepReleaseWaiters(void)\n> }\n>\n> /*\n> + * Check if synchronous replication is enabled\n> + */\n> +bool\n> +SyncRepEnabled(void)\n> +{\n> + return SyncRepRequested() && ((volatile WalSndCtlData *)\n> WalSndCtl)->sync_standbys_defined;\n> +}\n>\n> That code was once inline in 'SyncRepWaitForLSN' before it was turned\n> into a function, and there is a long comment in SyncRepWaitForLSN\n> describing the risks of this logic. e.g.\n>\n> <quote>\n> ... If it's true, we need to check it again\n> * later while holding the lock, to check the flag and operate the sync\n> * rep queue atomically. This is necessary to avoid the race condition\n> * described in SyncRepUpdateSyncStandbysDefined().\n> </quote>\n>\n> This same function is now called from walsender.c. I think maybe it is\n> OK but please confirm it.\n>\n> Anyway, the point is maybe this SyncRepEnabled function should be\n> better commented to make some reference about the race concerns of the\n> original comment. 
Otherwise some future caller of this function may be\n> unaware of it and come to grief.\n>\n\nLeaving this for now, not sure what wording is appropriate to use here.\n\nOn Wed, Feb 23, 2022 at 5:24 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Feb, Wed 23, 2022 at 10:58 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> Few comments to V19-0001:\n>\n> 1. I think we should adjust the alignment format.\n> git am ../v19-0001-Skip-empty-transactions-for-logical-replication.patch\n> .git/rebase-apply/patch:197: indent with spaces.\n> * Before we send schema, make sure that STREAM START/BEGIN/BEGIN PREPARE\n> .git/rebase-apply/patch:198: indent with spaces.\n> * is sent. If not, send now.\n> .git/rebase-apply/patch:199: indent with spaces.\n> */\n> .git/rebase-apply/patch:201: indent with spaces.\n> pgoutput_send_stream_start(ctx, toptxn);\n> .git/rebase-apply/patch:204: indent with spaces.\n> pgoutput_begin(ctx, toptxn);\n> warning: 5 lines add whitespace errors.\n\nFixed.\n\n\n>\n> 2. Structure member initialization.\n> static void\n> pgoutput_begin_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n> {\n> + PGOutputTxnData *txndata = MemoryContextAllocZero(ctx->context,\n> + sizeof(PGOutputTxnData));\n> +\n> + txndata->sent_begin_txn = false;\n> + txn->output_plugin_private = txndata;\n> +}\n> Do we need to set sent_stream_start and sent_any_stream to false here?\n\nFixed\n\n>\n> 3. Maybe we should add Assert(txndata) like function pgoutput_commit_txn in\n> other functions.\n>\n> 4. In addition, I think we should keep a unified style.\n> a). 
log style (maybe first one is better.)\n> First style : \"Skipping replication of an empty transaction in XXX\"\n> Second style : \"skipping replication of an empty transaction\"\n> b) flag name (maybe second one is better.)\n> First style : variable \"sent_begin_txn\" in function pgoutput_stream_*.\n> Second style : variable \"skip\" in function pgoutput_commit_txn.\n>\n\nFixed,\n\nRegards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 1 Mar 2022 16:02:17 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "Hi,\r\n\r\nHere are some comments on the v21 patch.\r\n\r\n1.\r\n+\t\t\tWalSndKeepalive(false, 0);\r\n\r\nMaybe we can use InvalidXLogRecPtr here, instead of 0.\r\n\r\n2.\r\n+\tpq_sendint64(&output_message, writePtr ? writePtr : sentPtr);\r\n\r\nSimilarly, should we use XLogRecPtrIsInvalid()?\r\n\r\n3.\r\n@@ -1183,6 +1269,20 @@ pgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n \t\t\tAssert(false);\r\n \t}\r\n \r\n+ if (in_streaming)\r\n+\t{\r\n+\t\t/* If streaming, send STREAM START if we haven't yet */\r\n+\t\tif (txndata && !txndata->sent_stream_start)\r\n+\t\tpgoutput_send_stream_start(ctx, txn);\r\n+\t}\r\n+\telse\r\n+\t{\r\n+\t\t/* If not streaming, send BEGIN if we haven't yet */\r\n+\t\tif (txndata && !txndata->sent_begin_txn)\r\n+\t\tpgoutput_send_begin(ctx, txn);\r\n+\t}\r\n+\r\n+\r\n \t/* Avoid leaking memory by using and resetting our own context */\r\n \told = MemoryContextSwitchTo(data->context);\r\n\r\n\r\nI am not sure if it is suitable to send begin or stream_start here, because the\r\nrow filter is not checked yet. 
That means, empty transactions caused by row\r\nfilter are not skipped.\r\n\r\n4.\r\n@@ -1617,9 +1829,21 @@ pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,\r\n \t\t\t\t\t\t\tReorderBufferTXN *txn,\r\n \t\t\t\t\t\t\tXLogRecPtr prepare_lsn)\r\n {\r\n+\tPGOutputTxnData *txndata = txn->output_plugin_private;\r\n+\tbool\t\t\tsent_begin_txn = txndata->sent_begin_txn;\r\n+\r\n \tAssert(rbtxn_is_streamed(txn));\r\n \r\n-\tOutputPluginUpdateProgress(ctx);\r\n+\tpfree(txndata);\r\n+\ttxn->output_plugin_private = NULL;\r\n+\r\n+\tif (!sent_begin_txn)\r\n+\t{\r\n+\t\telog(DEBUG1, \"Skipping replication of an empty transaction in stream prepare\");\r\n+\t\treturn;\r\n+\t}\r\n+\r\n+\tOutputPluginUpdateProgress(ctx, false);\r\n \tOutputPluginPrepareWrite(ctx, true);\r\n \tlogicalrep_write_stream_prepare(ctx->out, txn, prepare_lsn);\r\n \tOutputPluginWrite(ctx, true);\r\n\r\nI notice that the patch skips stream prepared transaction, this would cause an\r\nerror on subscriber side when committing this transaction on publisher side, so\r\nI think we'd better not do that.\r\n\r\nFor example:\r\n(set logical_decoding_work_mem = 64kB, max_prepared_transactions = 10 in\r\npostgresql.conf)\r\n\r\n-- publisher\r\ncreate table test (a int, b text, primary key(a));\r\ncreate table test2 (a int, b text, primary key(a));\r\ncreate publication pub for table test;\r\n\r\n-- subscriber \r\ncreate table test (a int, b text, primary key(a));\r\ncreate table test2 (a int, b text, primary key(a));\r\ncreate subscription sub connection 'dbname=postgres port=5432' publication pub with(two_phase=on, streaming=on);\r\n\r\n-- publisher\r\nbegin;\r\nINSERT INTO test2 SELECT i, md5(i::text) FROM generate_series(1, 1000) s(i);\r\nprepare transaction 't';\r\ncommit prepared 't';\r\n\r\nThe error message in subscriber log:\r\nERROR: prepared transaction with identifier \"pg_gid_16391_722\" does not exist\r\n\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Wed, 2 Mar 2022 02:00:55 +0000", "msg_from": 
"\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 2, 2022 at 1:01 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> 4.\n> @@ -1617,9 +1829,21 @@ pgoutput_stream_prepare_txn(LogicalDecodingContext *ctx,\n> ReorderBufferTXN *txn,\n> XLogRecPtr prepare_lsn)\n> {\n> + PGOutputTxnData *txndata = txn->output_plugin_private;\n> + bool sent_begin_txn = txndata->sent_begin_txn;\n> +\n> Assert(rbtxn_is_streamed(txn));\n>\n> - OutputPluginUpdateProgress(ctx);\n> + pfree(txndata);\n> + txn->output_plugin_private = NULL;\n> +\n> + if (!sent_begin_txn)\n> + {\n> + elog(DEBUG1, \"Skipping replication of an empty transaction in stream prepare\");\n> + return;\n> + }\n> +\n> + OutputPluginUpdateProgress(ctx, false);\n> OutputPluginPrepareWrite(ctx, true);\n> logicalrep_write_stream_prepare(ctx->out, txn, prepare_lsn);\n> OutputPluginWrite(ctx, true);\n>\n> I notice that the patch skips stream prepared transaction, this would cause an\n> error on subscriber side when committing this transaction on publisher side, so\n> I think we'd better not do that.\n>\n> For example:\n> (set logical_decoding_work_mem = 64kB, max_prepared_transactions = 10 in\n> postgresql.conf)\n>\n> -- publisher\n> create table test (a int, b text, primary key(a));\n> create table test2 (a int, b text, primary key(a));\n> create publication pub for table test;\n>\n> -- subscriber\n> create table test (a int, b text, primary key(a));\n> create table test2 (a int, b text, primary key(a));\n> create subscription sub connection 'dbname=postgres port=5432' publication pub with(two_phase=on, streaming=on);\n>\n> -- publisher\n> begin;\n> INSERT INTO test2 SELECT i, md5(i::text) FROM generate_series(1, 1000) s(i);\n> prepare transaction 't';\n> commit prepared 't';\n>\n> The error message in subscriber log:\n> ERROR: prepared transaction with identifier \"pg_gid_16391_722\" does not 
exist\n>\n\nThanks for the test. I guess this mixed streaming+two-phase runs into\nthe same problem that\nwas there while skipping two-phased transactions. If the eventual\ncommit prepared comes after a restart,\nthen there is no way of knowing if the original transaction was\nskipped or not and we can't know if the commit prepared\nneeds to be sent. I tried not skipping the \"stream prepare\", but that\ncauses a crash in the apply worker\nas it tries to find the non-existent streamed file. We could add logic\nto silently ignore a spurious \"stream prepare\"\nbut that might not be ideal. Any thoughts on how to address this? Or\nelse, we will need to avoid skipping streamed\ntransactions as well.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Thu, 3 Mar 2022 00:40:00 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 2, 2022 at 1:01 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> Here are some comments on the v21 patch.\n>\n> 1.\n> + WalSndKeepalive(false, 0);\n>\n> Maybe we can use InvalidXLogRecPtr here, instead of 0.\n>\n\nFixed.\n\n> 2.\n> + pq_sendint64(&output_message, writePtr ? 
writePtr : sentPtr);\n>\n> Similarly, should we use XLogRecPtrIsInvalid()?\n\nFixed\n\n>\n> 3.\n> @@ -1183,6 +1269,20 @@ pgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> Assert(false);\n> }\n>\n> + if (in_streaming)\n> + {\n> + /* If streaming, send STREAM START if we haven't yet */\n> + if (txndata && !txndata->sent_stream_start)\n> + pgoutput_send_stream_start(ctx, txn);\n> + }\n> + else\n> + {\n> + /* If not streaming, send BEGIN if we haven't yet */\n> + if (txndata && !txndata->sent_begin_txn)\n> + pgoutput_send_begin(ctx, txn);\n> + }\n> +\n> +\n> /* Avoid leaking memory by using and resetting our own context */\n> old = MemoryContextSwitchTo(data->context);\n>\n>\n> I am not sure if it is suitable to send begin or stream_start here, because the\n> row filter is not checked yet. That means, empty transactions caused by row\n> filter are not skipped.\n>\n\nMoved the check down, so that row_filters are taken into account.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Thu, 3 Mar 2022 14:36:05 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "I have split the patch into two. I have kept the logic of skipping\nstreaming changes in the second patch.\nI will work on the second patch once we can figure out a solution for\nthe COMMIT PREPARED after restart problem.\n\nregards,\nAjin Cherian", "msg_date": "Fri, 4 Mar 2022 12:41:16 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Mar 4, 2022 at 12:41 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> I have split the patch into two. 
I have kept the logic of skipping\n> streaming changes in the second patch.\n> I will work on the second patch once we can figure out a solution for\n> the COMMIT PREPARED after restart problem.\n>\n\nPlease see below my review comments for the first patch only (v23-0001).\n\n======\n\n1. Patch failed to apply cleanly - whitespace warnings.\n\ngit apply ../patches_misc/v23-0001-Skip-empty-transactions-for-logical-replication.patch\n../patches_misc/v23-0001-Skip-empty-transactions-for-logical-replication.patch:68:\ntrailing whitespace.\n * change in a transaction is processed. This makes it possible\nwarning: 1 line adds whitespace errors.\n\n~~~\n\n2. src/backend/replication/pgoutput/pgoutput.c - typedef struct PGOutputTxnData\n\n+/*\n+ * Maintain a per-transaction level variable to track whether the\n+ * transaction has sent BEGIN. BEGIN is only sent when the first\n+ * change in a transaction is processed. This makes it possible\n+ * to skip transactions that are empty.\n+ */\n+typedef struct PGOutputTxnData\n\nI felt that this comment describes details only about the bool\nmember, but I think it should also describe something about the\nstructure itself (because this is the structure comment). E.g. it\nshould mention that it is only allocated by pgoutput_begin_txn()\nand that it is accessible via txn->output_plugin_private. Maybe also\nsay this has subtle implications - e.g. if this is NULL then it means\nthe tx can't be 2PC etc...\n\n~~~\n\n3. src/backend/replication/pgoutput/pgoutput.c - pgoutput_send_begin\n\n+/*\n+ * Send BEGIN.\n+ *\n+ * This is where the BEGIN is actually sent. This is called while processing\n+ * the first change of the transaction.\n+ */\n+static void\n+pgoutput_send_begin(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)\n\nIMO there is no need to repeat \"This is where the BEGIN is actually\nsent.\", because \"Send BEGIN.\" already said the same thing :-)\n\n~~~\n\n4. 
src/backend/replication/pgoutput/pgoutput.c - pgoutput_commit_txn\n\n+ /*\n+ * If a BEGIN message was not yet sent, then it means there were no relevant\n+ * changes encountered, so we can skip the COMMIT message too.\n+ */\n+ sent_begin_txn = txndata->sent_begin_txn;\n+ txn->output_plugin_private = NULL;\n+ OutputPluginUpdateProgress(ctx, !sent_begin_txn);\n+\n+ pfree(txndata);\n\nNot quite sure why this pfree is positioned where it is (after that\nfunction call). I felt this should be a couple of lines up so txndata\nis freed as soon as you had no more use for it (i.e. after you copied\nthe bool from it)\n\n~~~\n\n5. src/backend/replication/pgoutput/pgoutput.c - maybe_send_schema\n\n@@ -594,6 +658,13 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n if (schema_sent)\n return;\n\n+ /* set up txndata */\n+ txndata = toptxn->output_plugin_private;\n\nThe comment doesn't quite feel right. Nothing is \"setting up\" anything.\nReally, all this does is assign a reference to the tx private data.\nProbably better with no comment at all?\n\n~~~\n\n6. src/backend/replication/pgoutput/pgoutput.c - maybe_send_schema\n\nI observed that every call to the maybe_send_schema function also has\nadjacent code that already/always is checking to call\npgoutput_send_begin_tx function.\n\nSo then I am wondering if the added logic in maybe_send_schema is\neven needed at all? It looks a bit redundant. Thoughts?\n\n~~~\n\n7. src/backend/replication/pgoutput/pgoutput.c - pgoutput_change\n\n@@ -1141,6 +1212,7 @@ pgoutput_change(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n Relation relation, ReorderBufferChange *change)\n {\n PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n+ PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n MemoryContext old;\n\nMaybe it is worth deferring this assignment until after the row-filter\ncheck. Otherwise, you are maybe doing it for nothing and IIRC this is\nhot code so the less you do here the better. 
OTOH a single assignment\nprobably amounts to almost nothing.\n\n~~~\n\n8. src/backend/replication/pgoutput/pgoutput.c - pgoutput_change\n\n@@ -1354,6 +1438,7 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n int nrelations, Relation relations[], ReorderBufferChange *change)\n {\n PGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\n+ PGOutputTxnData *txndata;\n MemoryContext old;\n\nThis variable declaration should be done later in the block where it\nis assigned.\n\n~~~\n\n9. src/backend/replication/pgoutput/pgoutput.c - suggestion\n\nI notice there is quite a few places in the patch that look like:\n\n+ txndata = (PGOutputTxnData *) txn->output_plugin_private;\n+\n+ /* Send BEGIN if we haven't yet */\n+ if (txndata && !txndata->sent_begin_txn)\n+ pgoutput_send_begin(ctx, txn);\n+\n\nIt might be worth considering encapsulating all those in a helper function like:\npgoutput_maybe_send_begin(ctx, txn)\n\nIt would certainly be a lot tidier.\n\n~~~\n\n10. src/backend/replication/syncrep.c - SyncRepEnabled\n\n@@ -539,6 +538,15 @@ SyncRepReleaseWaiters(void)\n }\n\n /*\n+ * Check if synchronous replication is enabled\n+ */\n+bool\n+SyncRepEnabled(void)\n\nMissing period for that function comment.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 4 Mar 2022 18:27:25 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Fri, Mar 4, 2022 9:41 AM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> \r\n> I have split the patch into two. 
I have kept the logic of skipping\r\n> streaming changes in the second patch.\r\n> I will work on the second patch once we can figure out a solution for\r\n> the COMMIT PREPARED after restart problem.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nA comment on v23-0001 patch.\r\n\r\n@@ -1429,6 +1520,19 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n \tif (in_streaming)\r\n \t\txid = txn->xid;\r\n \r\n+\t/*\r\n+\t * Output BEGIN if we haven't yet.\r\n+\t * Avoid for non-transactional messages.\r\n+\t */\r\n+\tif (in_streaming || transactional)\r\n+\t{\r\n+\t\tPGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n+\r\n+\t\t/* Send BEGIN if we haven't yet */\r\n+\t\tif (txndata && !txndata->sent_begin_txn)\r\n+\t\t\tpgoutput_send_begin(ctx, txn);\r\n+\t}\r\n+\r\n \tOutputPluginPrepareWrite(ctx, true);\r\n \tlogicalrep_write_message(ctx->out,\r\n \t\t\t\t\t\t\t xid,\r\n\r\nI think we don't need to send BEGIN if in_streaming is true, right? The first\r\npatch doesn't skip streamed transaction, so should we modify\r\n+\tif (in_streaming || transactional)\r\nto\r\n+\tif (!in_streaming && transactional)\r\n?\r\n\r\nRegards,\r\nShi yu\r\n\r\n", "msg_date": "Mon, 7 Mar 2022 08:50:42 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Mon, Mar 7, 2022 at 7:50 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Fri, Mar 4, 2022 9:41 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > I have split the patch into two. 
I have kept the logic of skipping\n> > streaming changes in the second patch.\n> > I will work on the second patch once we can figure out a solution for\n> > the COMMIT PREPARED after restart problem.\n> >\n>\n> Thanks for updating the patch.\n>\n> A comment on v23-0001 patch.\n>\n> @@ -1429,6 +1520,19 @@ pgoutput_message(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> if (in_streaming)\n> xid = txn->xid;\n>\n> + /*\n> + * Output BEGIN if we haven't yet.\n> + * Avoid for non-transactional messages.\n> + */\n> + if (in_streaming || transactional)\n> + {\n> + PGOutputTxnData *txndata = (PGOutputTxnData *) txn->output_plugin_private;\n> +\n> + /* Send BEGIN if we haven't yet */\n> + if (txndata && !txndata->sent_begin_txn)\n> + pgoutput_send_begin(ctx, txn);\n> + }\n> +\n> OutputPluginPrepareWrite(ctx, true);\n> logicalrep_write_message(ctx->out,\n> xid,\n>\n> I think we don't need to send BEGIN if in_streaming is true, right? The first\n> patch doesn't skip streamed transaction, so should we modify\n> + if (in_streaming || transactional)\n> to\n> + if (!in_streaming && transactional)\n> ?\n>\n\nFixed.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Mon, 7 Mar 2022 23:44:14 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Mon, Mar 7, 2022 at 11:44 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Fixed.\n>\n> regards,\n> Ajin Cherian\n> Fujitsu Australia\n\nRebased the patch and fixed some whitespace errors.\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Wed, 16 Mar 2022 18:03:24 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 16, 2022 at 12:33 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Mon, Mar 7, 2022 at 11:44 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > Fixed.\n> >\n\nReview 
comments/suggestions:\n=========================\n1. Isn't it sufficient to call pgoutput_send_begin from\nmaybe_send_schema as that is commonplace for all others and is always\nthe first message we send? If so, I think we can remove it from other\nplaces?\n2. Can we write some comments to explain why we don't skip streaming\nor prepared empty transactions and some possible solutions (the\nprotocol change and additional subscription parameter as discussed\n[1]) as discussed in this thread pgoutput.c?\n3. Can we add a simple test for it in one of the existing test\nfiles(say in 001_rep_changes.pl)?\n4. I think we can drop the skip streaming patch as we can't do that for now.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Mar 2022 17:13:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Thu, Mar 17, 2022 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Review comments/suggestions:\n> =========================\n> 1. Isn't it sufficient to call pgoutput_send_begin from\n> maybe_send_schema as that is commonplace for all others and is always\n> the first message we send? If so, I think we can remove it from other\n> places?\n\nI've done the other way, I've removed it from maybe_send_schema as we\nalways call this\nprior to calling maybe_send_schema.\n\n> 2. Can we write some comments to explain why we don't skip streaming\n> or prepared empty transactions and some possible solutions (the\n> protocol change and additional subscription parameter as discussed\n> [1]) as discussed in this thread pgoutput.c?\n\nI've added comment in the header of pgoutput_begin_prepare_txn() and\npgoutput_stream_start()\n\n> 3. Can we add a simple test for it in one of the existing test\n> files(say in 001_rep_changes.pl)?\n\nadded a simple test.\n\n> 4. 
I think we can drop the skip streaming patch as we can't do that for now.\n\nDropped,\n\nIn addition, I have also added a few more comments explaining why the begin send\nis delayed in pgoutput_change till row_filter is checked and also ran pgindent.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Sat, 19 Mar 2022 14:40:34 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Sat, Mar 19, 2022 at 9:10 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Thu, Mar 17, 2022 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > 3. Can we add a simple test for it in one of the existing test\n> > files(say in 001_rep_changes.pl)?\n>\n> added a simple test.\n>\n\nThis doesn't verify if the transaction is skipped. I think we should\nextend this test to check for a DEBUG message in the Logs (you need to\nprobably set log_min_messages to DEBUG1 for this test). As an example,\nyou can check the patch [1]. Also, it seems by mistake you have added\nwait_for_catchup() twice.\n\nFew other comments:\n=================\n1. Let's keep the parameter name as skipped_empty_xact in\nOutputPluginUpdateProgress so as to not confuse with the other patch's\n[2] keep_alive parameter. I think in this case we must send the\nkeep_alive message so as to not make the syncrep wait whereas in the\nother patch we only need to send it periodically based on\nwal_sender_timeout parameter.\n2. The new function SyncRepEnabled() seems confusing to me as the\ncomments in SyncRepWaitForLSN() clearly state why we need to first\nread the parameter 'sync_standbys_defined' without any lock then read\nit again with a lock if the parameter is true. 
So, I just put that\ncheck back and also added a similar check in WalSndUpdateProgress.\n3.\n@@ -1392,11 +1481,21 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n continue;\n\n relids[nrelids++] = relid;\n+\n+ /* Send BEGIN if we haven't yet */\n+ if (txndata && !txndata->sent_begin_txn)\n+ pgoutput_send_begin(ctx, txn);\n maybe_send_schema(ctx, change, relation, relentry);\n }\n\n if (nrelids > 0)\n {\n+ txndata = (PGOutputTxnData *) txn->output_plugin_private;\n+\n+ /* Send BEGIN if we haven't yet */\n+ if (txndata && !txndata->sent_begin_txn)\n+ pgoutput_send_begin(ctx, txn);\n+\n\nWhy do we need to try sending the begin in the second check? I think\nit should be sufficient to do it in the above loop.\n\nI have made these and a number of other changes in the attached patch.\nDo let me know what you think of the attached?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JbLRj6pSUENfDFsqj0%2BadNob_%3DRPXpnUnWFBskVi5JhA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1LGnaPuWs2M4sDfpd6JQZjoh4DGAsgUvNW%3DOr8i9z6K8w%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 21 Mar 2022 15:31:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Monday, March 21, 2022 6:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Sat, Mar 19, 2022 at 9:10 AM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> >\r\n> > On Thu, Mar 17, 2022 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > > 3. Can we add a simple test for it in one of the existing test\r\n> > > files(say in 001_rep_changes.pl)?\r\n> >\r\n> > added a simple test.\r\n> >\r\n> \r\n> This doesn't verify if the transaction is skipped. I think we should\r\n> extend this test to check for a DEBUG message in the Logs (you need to\r\n> probably set log_min_messages to DEBUG1 for this test). 
As an example,\r\n> you can check the patch [1]. Also, it seems by mistake you have added\r\n> wait_for_catchup() twice.\r\n\r\nI added a testcase to check the DEBUG message.\r\n\r\n> Few other comments:\r\n> =================\r\n> 1. Let's keep the parameter name as skipped_empty_xact in\r\n> OutputPluginUpdateProgress so as to not confuse with the other patch's\r\n> [2] keep_alive parameter. I think in this case we must send the\r\n> keep_alive message so as to not make the syncrep wait whereas in the\r\n> other patch we only need to send it periodically based on\r\n> wal_sender_timeout parameter.\r\n> 2. The new function SyncRepEnabled() seems confusing to me as the\r\n> comments in SyncRepWaitForLSN() clearly state why we need to first\r\n> read the parameter 'sync_standbys_defined' without any lock then read\r\n> it again with a lock if the parameter is true. So, I just put that\r\n> check back and also added a similar check in WalSndUpdateProgress.\r\n> 3.\r\n> @@ -1392,11 +1481,21 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\r\n> ReorderBufferTXN *txn,\r\n> continue;\r\n> \r\n> relids[nrelids++] = relid;\r\n> +\r\n> + /* Send BEGIN if we haven't yet */\r\n> + if (txndata && !txndata->sent_begin_txn)\r\n> + pgoutput_send_begin(ctx, txn);\r\n> maybe_send_schema(ctx, change, relation, relentry);\r\n> }\r\n> \r\n> if (nrelids > 0)\r\n> {\r\n> + txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n> +\r\n> + /* Send BEGIN if we haven't yet */\r\n> + if (txndata && !txndata->sent_begin_txn)\r\n> + pgoutput_send_begin(ctx, txn);\r\n> +\r\n> \r\n> Why do we need to try sending the begin in the second check? 
I think\r\n> it should be sufficient to do it in the above loop.\r\n> \r\n> I have made these and a number of other changes in the attached patch.\r\n> Do let me know what you think of the attached?\r\n\r\nThe changes look good to me.\r\nAnd I did some basic tests for the patch and didn’t find some other problems.\r\n\r\nAttach the new version patch.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 22 Mar 2022 00:48:20 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "> On Monday, March 21, 2022 6:01 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Sat, Mar 19, 2022 at 9:10 AM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> > >\r\n> > > On Thu, Mar 17, 2022 at 10:43 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > > 3. Can we add a simple test for it in one of the existing test\r\n> > > > files(say in 001_rep_changes.pl)?\r\n> > >\r\n> > > added a simple test.\r\n> > >\r\n> >\r\n> > This doesn't verify if the transaction is skipped. I think we should\r\n> > extend this test to check for a DEBUG message in the Logs (you need to\r\n> > probably set log_min_messages to DEBUG1 for this test). As an example,\r\n> > you can check the patch [1]. Also, it seems by mistake you have added\r\n> > wait_for_catchup() twice.\r\n> \r\n> I added a testcase to check the DEBUG message.\r\n> \r\n> > Few other comments:\r\n> > =================\r\n> > 1. Let's keep the parameter name as skipped_empty_xact in\r\n> > OutputPluginUpdateProgress so as to not confuse with the other patch's\r\n> > [2] keep_alive parameter. I think in this case we must send the\r\n> > keep_alive message so as to not make the syncrep wait whereas in the\r\n> > other patch we only need to send it periodically based on\r\n> > wal_sender_timeout parameter.\r\n> > 2. 
The new function SyncRepEnabled() seems confusing to me as the\r\n> > comments in SyncRepWaitForLSN() clearly state why we need to first\r\n> > read the parameter 'sync_standbys_defined' without any lock then read\r\n> > it again with a lock if the parameter is true. So, I just put that\r\n> > check back and also added a similar check in WalSndUpdateProgress.\r\n> > 3.\r\n> > @@ -1392,11 +1481,21 @@ pgoutput_truncate(LogicalDecodingContext *ctx,\r\n> > ReorderBufferTXN *txn,\r\n> > continue;\r\n> >\r\n> > relids[nrelids++] = relid;\r\n> > +\r\n> > + /* Send BEGIN if we haven't yet */\r\n> > + if (txndata && !txndata->sent_begin_txn) pgoutput_send_begin(ctx,\r\n> > + txn);\r\n> > maybe_send_schema(ctx, change, relation, relentry);\r\n> > }\r\n> >\r\n> > if (nrelids > 0)\r\n> > {\r\n> > + txndata = (PGOutputTxnData *) txn->output_plugin_private;\r\n> > +\r\n> > + /* Send BEGIN if we haven't yet */\r\n> > + if (txndata && !txndata->sent_begin_txn) pgoutput_send_begin(ctx,\r\n> > + txn);\r\n> > +\r\n> >\r\n> > Why do we need to try sending the begin in the second check? 
I think\r\n> > it should be sufficient to do it in the above loop.\r\n> >\r\n> > I have made these and a number of other changes in the attached patch.\r\n> > Do let me know what you think of the attached?\r\n> \r\n> The changes look good to me.\r\n> And I did some basic tests for the patch and didn’t find some other problems.\r\n> \r\n> Attach the new version patch.\r\n\r\nOh, sorry, I posted the wrong patch, here is the correct one.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 22 Mar 2022 01:55:31 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Tue, Mar 22, 2022 at 7:25 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > On Monday, March 21, 2022 6:01 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n>\n> Oh, sorry, I posted the wrong patch, here is the correct one.\n>\n\nThe test change looks good to me. I think additionally we can verify\nthat the record is not reflected in the subscriber table. Apart from\nthat, I had made minor changes mostly in the comments in the attached\npatch. If those look okay to you, please include those in the next\nversion.\n\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 22 Mar 2022 17:20:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tuesday, March 22, 2022 7:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Mar 22, 2022 at 7:25 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > > On Monday, March 21, 2022 6:01 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> >\r\n> > Oh, sorry, I posted the wrong patch, here is the correct one.\r\n> >\r\n> \r\n> The test change looks good to me. I think additionally we can verify that the\r\n> record is not reflected in the subscriber table. 
Apart from that, I had made\r\n> minor changes mostly in the comments in the attached patch. If those look\r\n> okay to you, please include those in the next version.\r\n\r\nThanks, the changes look good to me, I merged the diff patch.\r\n\r\nAttach the new version patch which include the following changes:\r\n\r\n- Fix a typo\r\n- Change the requestreply flag of the newly added WalSndKeepalive to false,\r\n because the subscriber can judge whether it's necessary to post a reply based\r\n on the received LSN.\r\n- Add a testcase to make sure there is no data in subscriber side when the\r\n transaction is skipped.\r\n- Change the name of flag skipped_empty_xact to skipped_xact which seems more\r\n understandable.\r\n- Merge Amit's suggested changes.\r\n\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Thu, 24 Mar 2022 03:19:18 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Thursday, March 24, 2022 11:19 AM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> Attach the new version patch which include the following changes:\r\n> \r\n> - Fix a typo\r\n> - Change the requestreply flag of the newly added WalSndKeepalive to false,\r\n> because the subscriber can judge whether it's necessary to post a reply\r\n> based\r\n> on the received LSN.\r\n> - Add a testcase to make sure there is no data in subscriber side when the\r\n> transaction is skipped.\r\n> - Change the name of flag skipped_empty_xact to skipped_xact which seems\r\n> more\r\n> understandable.\r\n> - Merge Amit's suggested changes.\r\n> \r\n\r\nHi,\r\n\r\nThis patch skips sending BEGIN/COMMIT messages for empty transactions and saves\r\nnetwork bandwidth. So I tried to do a test to see how does it affect bandwidth.\r\n\r\nThis test refers to the previous test by Peter[1]. 
I temporarily modified the\r\ncode in worker.c to log the length of the data received by the subscriber (after\r\ncalling walrcv_receive()). At the conclusion of the test run, the logs are\r\nprocessed to extract the numbers.\r\n\r\n[1] https://www.postgresql.org/message-id/CAHut%2BPuyqcDJO0X2BxY%2B9ycF%2Bew3x77FiCbTJQGnLDbNmMASZQ%40mail.gmail.com\r\n\r\nThe number of transactions is fixed (1000), and I tested different mixes of\r\nempty and not-empty transactions sent - 0%, 25%, 50%, 100%. The patch will send\r\nkeepalive message when skipping empty transaction in synchronous replication\r\nmode, so I tested both synchronous replication and asynchronous replication.\r\n\r\nThe results are as follows, and attach the bar chart.\r\n\r\nSync replication - size of sending data\r\n--------------------------------------------------------------------\r\n 0% 25% 50% 75% 100%\r\nHEAD 335211 281655 223661 170271 115108\r\npatched 335217 256617 173878 98095 18108\r\n\r\nAsync replication - size of sending data\r\n--------------------------------------------------------------------\r\n 0% 25% 50% 75% 100%\r\nHEAD 339379 285835 236343 184227 115000\r\npatched 335077 260953 180022 113333 18126\r\n\r\n\r\nThe details of the test is also attached.\r\n\r\nSummary of result:\r\nIn both synchronous replication mode and asynchronous replication mode, as more\r\nempty transactions, the improvement is more obvious. 
Even when there is no\r\nempty transaction, I can't see any overhead.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Thu, 24 Mar 2022 03:33:23 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Thursday, March 24, 2022 11:19 AM houzj.fnst@fujitsu.com wrote:\r\n> On Tuesday, March 22, 2022 7:50 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Tue, Mar 22, 2022 at 7:25 AM houzj.fnst@fujitsu.com\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > > On Monday, March 21, 2022 6:01 PM Amit Kapila\r\n> > > > <amit.kapila16@gmail.com>\r\n> > > > wrote:\r\n> > >\r\n> > > Oh, sorry, I posted the wrong patch, here is the correct one.\r\n> > >\r\n> >\r\n> > The test change looks good to me. I think additionally we can verify\r\n> > that the record is not reflected in the subscriber table. Apart from\r\n> > that, I had made minor changes mostly in the comments in the attached\r\n> > patch. 
If those look okay to you, please include those in the next version.\r\n> \r\n> Thanks, the changes look good to me, I merged the diff patch.\r\n> \r\n> Attach the new version patch which include the following changes:\r\n> \r\n> - Fix a typo\r\n> - Change the requestreply flag of the newly added WalSndKeepalive to false,\r\n> because the subscriber can judge whether it's necessary to post a reply\r\n> based\r\n> on the received LSN.\r\n> - Add a testcase to make sure there is no data in subscriber side when the\r\n> transaction is skipped.\r\n> - Change the name of flag skipped_empty_xact to skipped_xact which seems\r\n> more\r\n> understandable.\r\n> - Merge Amit's suggested changes.\r\n> \r\n\r\nI did some more review for the newly added keepalive message and confirmed that\r\nit's necessary to send this in sync mode.\r\n\r\n+\tif (skipped_xact &&\r\n+\t\tSyncRepRequested() &&\r\n+\t\t((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\r\n+\t\tWalSndKeepalive(false, ctx->write_location);\r\n\r\nBecause in sync replication, the publisher needs to get the reply from\r\nsubscriber to release the waiter. After applying the patch, we don't send empty\r\ntransaction to subscriber, so we won't get a reply without this keepalive\r\nmessage. Although the walsender usually invokes WalSndWaitForWal() which will\r\nalso send a keepalive message to subscriber, and we could get a reply and\r\nrelease the wait. 
But WalSndWaitForWal() is not always invoked for each record.\r\nWhen reading the page, we won't invoke WalSndWaitForWal() if we already have\r\nthe record in our buffer[1].\r\n\r\n[1] ReadPageInternal(\r\n...\r\n\t/* check whether we have all the requested data already */\r\n\tif (targetSegNo == state->seg.ws_segno &&\r\n\t\ttargetPageOff == state->segoff && reqLen <= state->readLen)\r\n\t\treturn state->readLen;\r\n...\r\n\r\nBased on above, if we don't have the newly added keepalive message in the\r\npatch, the transaction could wait for a bit more time to finish.\r\n\r\nFor example, I did some experiments to confirm:\r\n1. Set LOG_SNAPSHOT_INTERVAL_MS and checkpoint_timeout to a bigger value to\r\n make sure it doesn't generate extra WAL which could affect the test.\r\n2. Use debugger to attach the walsender and let it stop in the WalSndWaitForWal()\r\n3. Start two clients and modify un-published table\r\npostgres1 # INSERT INTO not_rep VALUES(1);\r\n---- waiting\r\npostgres2 # INSERT INTO not_rep VALUES(1);\r\n---- waiting\r\n4. Release the walsender, and we can see it won't send a keepalive to\r\n subscriber until it has handled all the above two transactions, which means\r\n the two transaction will wait until all of them has been decoded. 
This\r\n behavior doesn't look good and is inconsistent with the current\r\n behavior (the transaction will finish after decoding it or after sending it\r\n to sub if necessary).\r\n\r\nSo, I think the newly added keepalive message makes sense.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 25 Mar 2022 00:30:30 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Friday, March 25, 2022 8:31 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> On Thursday, March 24, 2022 11:19 AM houzj.fnst@fujitsu.com wrote:\r\n> > On Tuesday, March 22, 2022 7:50 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > > On Tue, Mar 22, 2022 at 7:25 AM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > > On Monday, March 21, 2022 6:01 PM Amit Kapila\r\n> > > > > <amit.kapila16@gmail.com>\r\n> > > > > wrote:\r\n> > > >\r\n> > > > Oh, sorry, I posted the wrong patch, here is the correct one.\r\n> > > >\r\n> > >\r\n> > > The test change looks good to me. I think additionally we can verify\r\n> > > that the record is not reflected in the subscriber table. Apart from\r\n> > > that, I had made minor changes mostly in the comments in the attached\r\n> > > patch. 
If those look okay to you, please include those in the next version.\r\n> >\r\n> > Thanks, the changes look good to me, I merged the diff patch.\r\n> >\r\n> > Attach the new version patch which include the following changes:\r\n> >\r\n> > - Fix a typo\r\n> > - Change the requestreply flag of the newly added WalSndKeepalive to false,\r\n> > because the subscriber can judge whether it's necessary to post a reply\r\n> > based\r\n> > on the received LSN.\r\n> > - Add a testcase to make sure there is no data in subscriber side when the\r\n> > transaction is skipped.\r\n> > - Change the name of flag skipped_empty_xact to skipped_xact which seems\r\n> > more\r\n> > understandable.\r\n> > - Merge Amit's suggested changes.\r\n> >\r\n> \r\n> I did some more review for the newly added keepalive message and confirmed\r\n> that it's necessary to send this in sync mode.\r\n\r\nSince commit 75b1521 added decoding of sequence to logical \r\nreplication, this patch needs to have send begin message in\r\npgoutput_sequence if necessary.\r\n\r\nAttach the new version patch with this change.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Fri, 25 Mar 2022 07:20:38 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Fri, Mar 25, 2022 at 12:50 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch with this change.\n>\n\nFew comments:\n=================\n1. I think we can move the keep_alive check after the tracklag record\ncheck to keep it consistent with another patch [1].\n2. Add the comment about the new parameter skipped_xact atop\nWalSndUpdateProgress.\n3. 
I think we need to call pq_flush_if_writable after sending a\nkeepalive message to avoid delaying sync transactions.\n\n[1]: https://www.postgresql.org/message-id/OS3PR01MB6275C64F264662E84D2FB7AE9E1D9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Mar 2022 12:37:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Monday, March 28, 2022 3:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Mar 25, 2022 at 12:50 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new version patch with this change.\r\n> >\r\n> \r\n> Few comments:\r\n\r\nThanks for the comments.\r\n\r\n> =================\r\n> 1. I think we can move the keep_alive check after the tracklag record\r\n> check to keep it consistent with another patch [1].\r\n\r\nChanged.\r\n\r\n> 2. Add the comment about the new parameter skipped_xact atop\r\n> WalSndUpdateProgress.\r\n\r\nAdded.\r\n\r\n> 3. I think we need to call pq_flush_if_writable after sending a\r\n> keepalive message to avoid delaying sync transactions.\r\n\r\nAgreed.\r\nIf we don’t flush the data, we might flush the keepalive later than before. 
And\r\nwe could get the reply later as well and then the release of syncwait could be\r\ndelayed.\r\n\r\nAttach the new version patch which addressed the above comments.\r\nThe patch also adds a loop after the newly added keepalive message\r\nto make sure the message is actually flushed to the client like what\r\ndid in WalSndWriteData.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 28 Mar 2022 12:21:42 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Mon, Mar 28, 2022 at 9:22 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, March 28, 2022 3:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Mar 25, 2022 at 12:50 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > Attach the new version patch with this change.\n> > >\n> >\n> > Few comments:\n>\n> Thanks for the comments.\n>\n> > =================\n> > 1. I think we can move the keep_alive check after the tracklag record\n> > check to keep it consistent with another patch [1].\n>\n> Changed.\n>\n> > 2. Add the comment about the new parameter skipped_xact atop\n> > WalSndUpdateProgress.\n>\n> Added.\n>\n> > 3. I think we need to call pq_flush_if_writable after sending a\n> > keepalive message to avoid delaying sync transactions.\n>\n> Agreed.\n> If we don’t flush the data, we might flush the keepalive later than before. 
And\n> we could get the reply later as well and then the release of syncwait could be\n> delayed.\n>\n> Attach the new version patch which addressed the above comments.\n> The patch also adds a loop after the newly added keepalive message\n> to make sure the message is actually flushed to the client like what\n> did in WalSndWriteData.\n>\n\nThank you for updating the patch!\n\nSome comments:\n\n+ if (skipped_xact &&\n+ SyncRepRequested() &&\n+ ((volatile WalSndCtlData *) WalSndCtl)->sync_standbys_defined)\n+ {\n+ WalSndKeepalive(false, ctx->write_location);\n\nI think we can use 'lsn' since it is actually ctx->write_location.\n\n---\n+ if (!sent_begin_txn)\n+ {\n+ elog(DEBUG1, \"Skipped replication of an empty\ntransaction with XID: %u\", txn->xid);\n+ return;\n+ }\n\nThe log message should start with lowercase.\n\n---\n+# Note that the current location of the log file is not grabbed immediately\n+# after reloading the configuration, but after sending one SQL command to\n+# the node so as we are sure that the reloading has taken effect.\n+$log_location = -s $node_subscriber->logfile;\n+\n+$node_publisher->safe_psql('postgres', \"INSERT INTO tab_notrep VALUES (11)\");\n+\n+$node_publisher->wait_for_catchup('tap_sub');\n+\n+$logfile = slurp_file($node_publisher->logfile, $log_location);\n\nI think we should get the log location of the publisher node, not\nsubscriber node.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 29 Mar 2022 16:20:15 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tuesday, March 29, 2022 3:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Some comments:\r\n\r\nThanks for the comments!\r\n\r\n> \r\n> + if (skipped_xact &&\r\n> + SyncRepRequested() &&\r\n> + ((volatile WalSndCtlData *)\r\n> WalSndCtl)->sync_standbys_defined)\r\n> + {\r\n> + WalSndKeepalive(false, 
ctx->write_location);\r\n> \r\n> I think we can use 'lsn' since it is actually ctx->write_location.\r\n\r\nAgreed, and changed.\r\n\r\n> ---\r\n> + if (!sent_begin_txn)\r\n> + {\r\n> + elog(DEBUG1, \"Skipped replication of an empty\r\n> transaction with XID: %u\", txn->xid);\r\n> + return;\r\n> + }\r\n> \r\n> The log message should start with lowercase.\r\n\r\nChanged.\r\n\r\n> ---\r\n> +# Note that the current location of the log file is not grabbed\r\n> +immediately # after reloading the configuration, but after sending one\r\n> +SQL command to # the node so as we are sure that the reloading has taken\r\n> effect.\r\n> +$log_location = -s $node_subscriber->logfile;\r\n> +\r\n> +$node_publisher->safe_psql('postgres', \"INSERT INTO tab_notrep VALUES\r\n> +(11)\");\r\n> +\r\n> +$node_publisher->wait_for_catchup('tap_sub');\r\n> +\r\n> +$logfile = slurp_file($node_publisher->logfile, $log_location);\r\n> \r\n> I think we should get the log location of the publisher node, not subscriber\r\n> node.\r\n\r\nChanged.\r\n\r\nAttach the new version patch which addressed the\r\nabove comments and slightly adjusted some code comments.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 29 Mar 2022 08:35:11 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Tue, Mar 29, 2022 at 2:05 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch which addressed the\n> above comments and slightly adjusted some code comments.\n>\n\nThe patch looks good to me. 
One minor suggestion is to change the\nfunction name ProcessPendingWritesAndTimeOut() to\nProcessPendingWrites().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 29 Mar 2022 14:41:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Tuesday, March 29, 2022 5:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Mar 29, 2022 at 2:05 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new version patch which addressed the above comments and\r\n> > slightly adjusted some code comments.\r\n> >\r\n> \r\n> The patch looks good to me. One minor suggestion is to change the function\r\n> name ProcessPendingWritesAndTimeOut() to ProcessPendingWrites().\r\n\r\nThanks for the comment.\r\nAttach the new version patch with this change.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 29 Mar 2022 09:15:05 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Tue, Mar 29, 2022 5:15 PM Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> Thanks for the comment.\r\n> Attach the new version patch with this change.\r\n> \r\n\r\nHi,\r\n\r\nI did a performance test for this patch to see if it affects performance when\r\npublishing empty transactions, based on the v32 patch.\r\n\r\nIn this test, I use synchronous logical replication, and publish a table with no\r\noperations on it. The test uses pgbench, each run takes 15 minutes, and I take\r\nmedian of 3 runs. Drop and recreate db after each run.\r\n\r\nThe results are as follows, and attach the bar chart. 
The details of the test is\r\nalso attached.\r\n\r\nTPS - publishing empty transactions (scale factor 1)\r\n--------------------------------------------------------------------\r\n 4 threads 8 threads 16 threads\r\nHEAD 4818.2837 4353.6243 3888.5995\r\npatched 5111.2936 4555.1629 4024.4286\r\n\r\n\r\nTPS - publishing empty transactions (scale factor 100)\r\n--------------------------------------------------------------------\r\n 4 threads 8 threads 16 threads\r\nHEAD 9066.6465 16118.0453 21485.1207\r\npatched 9357.3361 16638.6409 24503.6829\r\n\r\nThere is an improvement of more than 3% after applying this patch, and in the\r\nbest case, it improves by 14%, which looks good to me.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Tue, 29 Mar 2022 09:35:28 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logical replication empty transactions" }, { "msg_contents": "On Tue, Mar 29, 2022 at 6:15 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, March 29, 2022 5:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 29, 2022 at 2:05 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > Attach the new version patch which addressed the above comments and\n> > > slightly adjusted some code comments.\n> > >\n> >\n> > The patch looks good to me. One minor suggestion is to change the function\n> > name ProcessPendingWritesAndTimeOut() to ProcessPendingWrites().\n>\n> Thanks for the comment.\n> Attach the new version patch with this change.\n>\n\nThank you for updating the patch. 
Looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 30 Mar 2022 10:44:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" }, { "msg_contents": "On Wed, Mar 30, 2022 at 7:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Mar 29, 2022 at 6:15 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Thanks for the comment.\n> > Attach the new version patch with this change.\n> >\n>\n> Thank you for updating the patch. Looks good to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 30 Mar 2022 14:00:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication empty transactions" } ]
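Editor's note: the mechanism this thread converges on — defer sending BEGIN until the first published change, and when a transaction turns out to be empty, send only a flushed keepalive carrying the commit LSN so synchronous standbys do not stall — can be sketched independently of the C code. The following Python model is illustrative only: the class and message names (`EmptyTxnFilter`, `KEEPALIVE`) are invented for this sketch and do not correspond to the actual pgoutput/walsender implementation.

```python
class EmptyTxnFilter:
    """Toy model of the sender-side 'skip empty transactions' strategy.

    BEGIN is deferred until the first real change; an entirely empty
    transaction produces at most a keepalive (needed so that synchronous
    replication can still advance the standby's reported flush position).
    """

    def __init__(self, sync_rep_requested):
        self.sync_rep_requested = sync_rep_requested
        self.sent_begin = False
        self.output = []  # messages that would go to the subscriber

    def on_begin(self):
        # Do not emit BEGIN yet; just note that one is pending.
        self.sent_begin = False

    def on_change(self, change):
        # The first published change forces the deferred BEGIN out.
        if not self.sent_begin:
            self.output.append("BEGIN")
            self.sent_begin = True
        self.output.append(change)

    def on_commit(self, lsn):
        if not self.sent_begin:
            # Empty transaction: skip BEGIN/COMMIT entirely, but keep
            # synchronous standbys from waiting by sending a keepalive
            # with the commit LSN (the real code also flushes it).
            if self.sync_rep_requested:
                self.output.append(f"KEEPALIVE {lsn}")
            return
        self.output.append(f"COMMIT {lsn}")
```

For example, an empty transaction under synchronous replication yields only `["KEEPALIVE <lsn>"]`, while one with a change yields the usual `BEGIN`/change/`COMMIT` sequence.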
[ { "msg_contents": "Hello,\n\nI ran across an EXPLAIN plan and had some questions about some of its\ndetails. The BUFFERS docs say\n\n>The number of blocks shown for an upper-level node includes those used by\nall its child nodes.\n\nI initially assumed this would be cumulative, but I realized it's probably\nnot because some of the blocks affected by each child will actually\noverlap. But this particular plan has a Shared Hit Blocks at the root (an\nAggregate) that is smaller than some of its children (three ModifyTables\nand a CTE Scan). This seems to contradict the documentation (since if\nchildren overlap fully in their buffers usage, the parent should still have\na cost equal to the costliest child)--any idea what's up? I can send the\nwhole plan (attached? inline? it's ~15kb) if that helps.\n\nI also noticed the I/O Read Time (from track_io_timing) of two children in\nthis plan is equal to the I/O Read Time in the root. Is I/O time\npotentially fully parallelized across children? There are no parallel\nworkers according to the plan, so I'm surprised at this and would like to\nunderstand better.\n\nAlso, a tangential question: why is the top-level structure of a JSON plan\nan array? I've only ever seen one root node with a Plan key there.\n\nThanks,\nMaciek", "msg_date": "Mon, 21 Oct 2019 23:18:32 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "EXPLAIN BUFFERS and I/O timing accounting questions" },
{ "msg_contents": "Also, I noticed that in this plan, the root (again, an Aggregate) has 0\nTemp Read Blocks, but two of its children (two of the ModifyTable nodes)\nhave non-zero Temp Read Blocks. Again, this contradicts the documentation,\nas these costs are stated to be cumulative. Any ideas?\n\nThanks,\nMaciek", "msg_date": "Wed, 23 Oct 2019 09:13:08 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN BUFFERS and I/O timing accounting questions" },
{ "msg_contents": "Hi,\n\nOn 2019-10-21 23:18:32 -0700, Maciek Sakrejda wrote:\n> I ran across an EXPLAIN plan and had some questions about some of its\n> details. 
The BUFFERS docs say\n> \n> >The number of blocks shown for an upper-level node includes those used by\n> all its child nodes.\n> \n> I initially assumed this would be cumulative, but I realized it's probably\n> not because some of the blocks affected by each child will actually\n> overlap.\n\nNote that the buffer access stats do *not* count the number of distinct\nbuffers accessed, but that they purely the number of buffer\naccesses.\n\nIt'd be really expensive to count the number of distinct buffers\naccessed, although I guess one could make it only expensive by using\nsomething like hyperloglog (although that will still be hard, due to\nbuffer replacement etc).\n\n\n> But this particular plan has a Shared Hit Blocks at the root (an\n> Aggregate) that is smaller than some of its children (three ModifyTables\n> and a CTE Scan).\n\nDo you have an example? I assume what's going on is that the cost of\nthe CTE is actually attributed (in equal parts or something like that)\nto all places using the CTE. Do the numbers add up if you just exclude\nthe CTE?\n\n\n> This seems to contradict the documentation (since if\n> children overlap fully in their buffers usage, the parent should still have\n> a cost equal to the costliest child)--any idea what's up? I can send the\n> whole plan (attached? inline? it's ~15kb) if that helps.\n\nOr just relevant top-level excerpts.\n\n\n> Also, a tangential question: why is the top-level structure of a JSON plan\n> an array? I've only ever seen one root node with a Plan key there.\n\nIIRC one can get multiple plans when there's a DO ALSO rule. 
There might\nbe other ways to get there too.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 14:25:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN BUFFERS and I/O timing accounting questions" }, { "msg_contents": "On Thu, Oct 24, 2019 at 2:25 PM Andres Freund <andres@anarazel.de> wrote:\n> Note that the buffer access stats do *not* count the number of distinct\n> buffers accessed, but that they purely the number of buffer\n> accesses.\n\nYou mean, even within a single node? That is, if a node hits a block ten\ntimes, that counts as ten blocks hit? And if it reads a block and then\nneeds it three more times, that's one read plus three hit?\n\n> Do you have an example?\n\nSure, here's the \"abridged\" plan:\n\n[{ \"Plan\": {\n \"Node Type\": \"Aggregate\",\n \"Plan Rows\": 1,\n \"Plan Width\": 8,\n \"Total Cost\": 26761745.14,\n \"Actual Rows\": 1,\n \"I/O Read Time\": 234129.299,\n \"I/O Write Time\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Shared Hit Blocks\": 4847762,\n \"Shared Read Blocks\": 1626312,\n \"Shared Dirtied Blocks\": 541014,\n \"Shared Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 4786,\n \"Plans\": [\n {\n \"Node Type\": \"ModifyTable\",\n \"Operation\": \"Delete\",\n \"Parent Relationship\": \"InitPlan\",\n \"Plan Rows\": 13943446,\n \"Plan Width\": 6,\n \"Total Cost\": 25774594.63,\n \"Actual Rows\": 2178416,\n \"I/O Read Time\": 234129.299,\n \"I/O Write Time\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Shared Hit Blocks\": 4847762,\n \"Shared Read Blocks\": 1626312,\n \"Shared Dirtied Blocks\": 541014,\n \"Shared Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"Plans\": \"<elided>\"\n },\n {\n \"Node Type\": \"ModifyTable\",\n 
\"Operation\": \"Delete\",\n \"Parent Relationship\": \"InitPlan\",\n \"Plan Rows\": 63897788,\n \"Plan Width\": 38,\n \"Total Cost\": 315448.53,\n \"Actual Rows\": 0,\n \"I/O Read Time\": 30529.231,\n \"I/O Write Time\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Shared Hit Blocks\": 12964205,\n \"Shared Read Blocks\": 83260,\n \"Shared Dirtied Blocks\": 48256,\n \"Shared Written Blocks\": 0,\n \"Temp Read Blocks\": 4788,\n \"Temp Written Blocks\": 0,\n \"Plans\": \"<elided>\"\n },\n {\n \"Node Type\": \"ModifyTable\",\n \"Operation\": \"Delete\",\n \"Parent Relationship\": \"InitPlan\",\n \"Plan Rows\": 45657680,\n \"Plan Width\": 38,\n \"Total Cost\": 357974.43,\n \"Actual Rows\": 0,\n \"I/O Read Time\": 24260.512,\n \"I/O Write Time\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Shared Hit Blocks\": 10521264,\n \"Shared Read Blocks\": 64450,\n \"Shared Dirtied Blocks\": 36822,\n \"Shared Written Blocks\": 0,\n \"Temp Read Blocks\": 4788,\n \"Temp Written Blocks\": 1,\n \"Plans\": \"<elided>\"\n },\n {\n \"Node Type\": \"CTE Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Plan Rows\": 13943446,\n \"Plan Width\": 8,\n \"Total Cost\": 278868.92,\n \"Actual Rows\": 2178416,\n \"I/O Read Time\": 234129.299,\n \"I/O Write Time\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Shared Hit Blocks\": 4847762,\n \"Shared Read Blocks\": 1626312,\n \"Shared Dirtied Blocks\": 541014,\n \"Shared Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 4786\n }\n ]\n}}]\n\nLet me know if I removed anything I shouldn't have and I can follow up with\nextra info.\n\n> I assume what's going on is that the cost of\n> the CTE is actually attributed (in equal parts or something like that)\n> to all places using the CTE. 
Do the numbers add up if you just exclude\n> the CTE?\n\nNot really--it looks like the full Shared Blocks Hit cost in the root is\nthe same as the CTE by itself. This is playing around with the plan in a\nnode console:\n\n> p[0].Plan['Shared Hit Blocks']\n4847762\n> p[0].Plan.Plans.map(p => p['Node Type'])\n[ 'ModifyTable', 'ModifyTable', 'ModifyTable', 'CTE Scan' ]\n> p[0].Plan.Plans.map(p => p['Shared Hit Blocks'])\n[ 4847762, 12964205, 10521264, 4847762 ]\n\n> IIRC one can get multiple plans when there's a DO ALSO rule. There might\n> be other ways to get there too.\n\nThanks, good to know.", "msg_date": "Thu, 24 Oct 2019 16:31:39 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN BUFFERS and I/O timing accounting questions" },
{ "msg_contents": "Hi,\n\nOn 2019-10-24 16:31:39 -0700, Maciek Sakrejda wrote:\n> On Thu, Oct 24, 2019 at 2:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > Note that the buffer access stats do *not* count the number of distinct\n> > buffers accessed, but that they purely the number of buffer\n> > accesses.\n> \n> You mean, even within a single node? That is, if a node hits a block ten\n> times, that counts as ten blocks hit? And if it reads a block and then\n> needs it three more times, that's one read plus three hit?\n\nCorrect. It's basically the number of lookups in the buffer\npool. There's some nodes that will kind repeatedly use the same buffer,\nwithout increasing the count. E.g. 
a seqscan will keep the current page\npinned until all the tuples on the page have been returned.\n\nConsider e.g. an nested loop indexscan - how would we determine whether\nwe've previously looked at a buffer within the indexscan, without\ndrastically increasing the resources?\n\n\n> > Do you have an example?\n> \n> Sure, here's the \"abridged\" plan:\n> \n> [{ \"Plan\": {\n> \"Node Type\": \"Aggregate\",\n> \"Plan Rows\": 1,\n> \"Plan Width\": 8,\n> \"Total Cost\": 26761745.14,\n> \"Actual Rows\": 1,\n> \"I/O Read Time\": 234129.299,\n> \"I/O Write Time\": 0,\n> \"Local Hit Blocks\": 0,\n> \"Local Read Blocks\": 0,\n> \"Local Dirtied Blocks\": 0,\n> \"Local Written Blocks\": 0,\n> \"Shared Hit Blocks\": 4847762,\n> \"Shared Read Blocks\": 1626312,\n> \"Shared Dirtied Blocks\": 541014,\n> \"Shared Written Blocks\": 0,\n> \"Temp Read Blocks\": 0,\n> \"Temp Written Blocks\": 4786,\n> \"Plans\": [\n> {\n> \"Node Type\": \"ModifyTable\",\n> \"Operation\": \"Delete\",\n> \"Parent Relationship\": \"InitPlan\",\n> \"Plan Rows\": 13943446,\n> \"Plan Width\": 6,\n> \"Total Cost\": 25774594.63,\n> \"Actual Rows\": 2178416,\n> \"I/O Read Time\": 234129.299,\n> \"I/O Write Time\": 0,\n> \"Local Hit Blocks\": 0,\n> \"Local Read Blocks\": 0,\n> \"Local Dirtied Blocks\": 0,\n> \"Local Written Blocks\": 0,\n> \"Shared Hit Blocks\": 4847762,\n> \"Shared Read Blocks\": 1626312,\n> \"Shared Dirtied Blocks\": 541014,\n> \"Shared Written Blocks\": 0,\n> \"Temp Read Blocks\": 0,\n> \"Temp Written Blocks\": 0,\n> \"Plans\": \"<elided>\"\n> },\n...\n\nI think this may be partially confusing due to the way the json output\nlooks. Which is so bad that it's imo fair to call it a bug. 
Here's text\noutput to a similar-ish query:\n\n\nAggregate (cost=112.50..112.51 rows=1 width=8) (actual time=35.893..35.894 rows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=6015 dirtied=15\n CTE foo\n -> Delete on public.p (cost=0.00..45.00 rows=3000 width=6) (actual time=0.235..28.239 rows=3000 loops=1)\n Output: p.data\n Delete on public.p\n Delete on public.c1\n Delete on public.c2\n Buffers: shared hit=6015 dirtied=15\n -> Seq Scan on public.p (cost=0.00..15.00 rows=1000 width=6) (actual time=0.161..1.375 rows=1000 loops=1)\n Output: p.ctid\n Buffers: shared hit=5 dirtied=5\n -> Seq Scan on public.c1 (cost=0.00..15.00 rows=1000 width=6) (actual time=0.147..1.314 rows=1000 loops=1)\n Output: c1.ctid\n Buffers: shared hit=5 dirtied=5\n -> Seq Scan on public.c2 (cost=0.00..15.00 rows=1000 width=6) (actual time=0.145..1.170 rows=1000 loops=1)\n Output: c2.ctid\n Buffers: shared hit=5 dirtied=5\n -> CTE Scan on foo (cost=0.00..60.00 rows=3000 width=0) (actual time=0.243..34.083 rows=3000 loops=1)\n Output: foo.data\n Buffers: shared hit=6015 dirtied=15\nPlanning Time: 0.508 ms\nExecution Time: 36.512 ms\n\nNote that the node below the Aggregate is actually the CTE, and that\nthat the DELETEs are below that. 
But the json, slightly abbreviated,\nlooks like:\n\n[\n {\n \"Plan\": {\n \"Node Type\": \"Aggregate\",\n \"Strategy\": \"Plain\",\n \"Shared Hit Blocks\": 6015,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 15,\n \"Shared Written Blocks\": 0,\n \"Plans\": [\n {\n \"Node Type\": \"ModifyTable\",\n \"Operation\": \"Delete\",\n \"Parent Relationship\": \"InitPlan\",\n \"Subplan Name\": \"CTE foo\",\n \"Output\": [\"p.data\"],\n \"Target Tables\": [\n {\n \"Relation Name\": \"p\",\n \"Schema\": \"public\",\n \"Alias\": \"p\"\n },\n {\n \"Relation Name\": \"c1\",\n \"Schema\": \"public\",\n \"Alias\": \"c1\"\n },\n {\n \"Relation Name\": \"c2\",\n \"Schema\": \"public\",\n \"Alias\": \"c2\"\n }\n ],\n \"Shared Hit Blocks\": 6015,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 15,\n \"Shared Written Blocks\": 0,\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Member\",\n \"Output\": [\"p.ctid\"],\n \"Shared Hit Blocks\": 5,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 5,\n },\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Member\",\n \"Parallel Aware\": false,\n \"Relation Name\": \"c1\",\n \"Schema\": \"public\",\n \"Shared Hit Blocks\": 5,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 5,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n },\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Member\",\n \"Shared Hit Blocks\": 5,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 5,\n \"Shared Written Blocks\": 0,\n }\n ]\n },\n {\n \"Node Type\": \"CTE Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"CTE Name\": \"foo\",\n \"Alias\": \"foo\",\n \"Startup Cost\": 0.00,\n \"Total Cost\": 60.00,\n \"Plan Rows\": 3000,\n \"Plan Width\": 0,\n \"Actual Startup Time\": 0.258,\n \"Actual Total Time\": 12.737,\n \"Actual Rows\": 3000,\n \"Actual Loops\": 1,\n \"Output\": [\"foo.data\"],\n \"Shared Hit Blocks\": 6015,\n 
\"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 15,\n \"Shared Written Blocks\": 0,\n }\n ]\n\nBut I still don't quite get how the IO adds up in your case.\n\nPerhaps you could send me the full plan and query privately? And, if you\nhave access to that, the plain text explain?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 17:38:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN BUFFERS and I/O timing accounting questions" }, { "msg_contents": "Hi,\n\nMigrating to -hackers, this seems clearly suboptimal. and confusing.\n\nThe original thread is at\nhttps://www.postgresql.org/message-id/20191025003834.2rswu7smheaddag3%40alap3.anarazel.de\n\nOn 2019-10-24 17:38:34 -0700, Andres Freund wrote:\n> Perhaps you could send me the full plan and query privately? And, if you\n> have access to that, the plain text explain?\n\nMaciek did, and I think I see what's going on...\n\n\n(I asked whether it's ok to include the query)\nOn 2019-10-24 18:58:27 -0700, Maciek Sakrejda wrote:\n> Thanks for your help. Here is the query text:\n> \n> WITH q AS (\n> DELETE FROM queries WHERE last_occurred_at < now() - '60 days'::interval\n> RETURNING queries.id\n> ),\n> t1 AS (DELETE FROM query_fingerprints WHERE query_id IN (SELECT id FROM q)),\n> t2 as (DELETE FROM query_table_associations WHERE query_id IN\n> (SELECT id FROM q))\n> SELECT COUNT(id) FROM q\n\nNote that t1 and t2 CTEs are not referenced in the query\nitself. Normally that would mean that they're simply not evaluated - but\nfor CTE that include DML we force evaluation, as the result would\notherwise be inconsistent.\n\nBut that forcing happens not from within the normal query (the SELECT\nCOUNT(id) FROM q), but from the main executor. 
As the attribution of\nchild executor nodes to the layer above happens when the execution of a\nnode ends (see ExecProcNodeInstr()), the work of completing wCTEs is not\nassociated to any single node.\n\nvoid\nstandard_ExecutorFinish(QueryDesc *queryDesc)\n...\n\t/* Run ModifyTable nodes to completion */\n\tExecPostprocessPlan(estate);\n\n\nstatic void\nExecPostprocessPlan(EState *estate)\n..\n\t/*\n\t * Run any secondary ModifyTable nodes to completion, in case the main\n\t * query did not fetch all rows from them. (We do this to ensure that\n\t * such nodes have predictable results.)\n\t */\n\tforeach(lc, estate->es_auxmodifytables)\n...\n\t\t\tslot = ExecProcNode(ps);\n\nin contrast to e.g. AFTER triggers, which also get executed at the end\nof the query, we do not associate the cost of running this post\nprocessing work with any superior node.\n\nWhich is quite confusing, because the DML nodes do look like\nthey're subsidiary to another node.\n\nI think we ought to either attribute the cost of ExecPostprocessPlan()\nto a separate instrumentation that we display at the end of the explain\nplan when not zero, or at least associate the cost with the parent node.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 19:46:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN BUFFERS and I/O timing accounting questions" } ]
[ { "msg_contents": "Hello,\n\nWhile developing pgbench to allow partitioned tabled, I reproduced the \nstring management style used in the corresponding functions, but was \npretty unhappy with that kind of pattern:\n\n \tsnprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), ...)\n\nHowever adding a feature is not the place for refactoring.\n\nThis patch refactors initialization functions so as to use PQExpBuffer \nwhere appropriate to simplify and clarify the code. SQL commands are \ngenerated by accumulating parts into a buffer in order, before executing \nit. I also added a more generic function to execute a statement and fail \nif the result is unexpected.\n\n-- \nFabien.", "msg_date": "Tue, 22 Oct 2019 08:32:45 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "pgbench - refactor init functions with buffers" }, { "msg_contents": "On Tue, Oct 22, 2019 at 12:03 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello,\n>\n> While developing pgbench to allow partitioned tabled, I reproduced the\n> string management style used in the corresponding functions, but was\n> pretty unhappy with that kind of pattern:\n>\n> snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), ...)\n>\n> However adding a feature is not the place for refactoring.\n>\n> This patch refactors initialization functions so as to use PQExpBuffer\n> where appropriate to simplify and clarify the code. SQL commands are\n> generated by accumulating parts into a buffer in order, before executing\n> it. I also added a more generic function to execute a statement and fail\n> if the result is unexpected.\n>\n\n- for (i = 0; i < nbranches * scale; i++)\n+ for (int i = 0; i < nbranches * scale; i++)\n ...\n- for (i = 0; i < ntellers * scale; i++)\n+ for (int i = 0; i < ntellers * scale; i++)\n {\n\nI haven't read the complete patch. 
But, I have noticed that many\nplaces you changed the variable declaration from c to c++ style (i.e\nmoved the declaration in the for loop). IMHO, generally in PG, we\ndon't follow this convention. Is there any specific reason to do\nthis?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Oct 2019 12:49:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": ">\n>\n> I haven't read the complete patch. But, I have noticed that many\n> places you changed the variable declaration from c to c++ style (i.e\n> moved the declaration in the for loop). IMHO, generally in PG, we\n> don't follow this convention. Is there any specific reason to do\n> this?\n>\n\n+1.\n\nThe patch does not apply on master, needs rebase.\nAlso, I got some whitespace errors.\n\nI think you can also refactor the function tryExecuteStatement(), and\ncall your newly added function executeStatementExpect() by passing\nan additional flag something like \"errorOK\".\n\nRegards,\nJeevan Ladhe
", "msg_date": "Tue, 22 Oct 2019 14:51:47 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "\nHello Dilip,\n\n> - for (i = 0; i < nbranches * scale; i++)\n> + for (int i = 0; i < nbranches * scale; i++)\n> ...\n> - for (i = 0; i < ntellers * scale; i++)\n> + for (int i = 0; i < ntellers * scale; i++)\n> {\n>\n> I haven't read the complete patch. But, I have noticed that many\n> places you changed the variable declaration from c to c++ style (i.e\n> moved the declaration in the for loop). IMHO, generally in PG, we\n> don't follow this convention. Is there any specific reason to do\n> this?\n\nThere are many places where it is used now in pg (120 occurrences in \nmaster, 7 in pgbench). I had a bug recently because of a stupidly reused \nindex variable, so I tend to use this now it is admissible, moreover here \nI'm actually doing a refactoring patch, so it seems ok to include that.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 22 Oct 2019 12:00:13 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Hello Jeevan,\n\n>> I haven't read the complete patch. But, I have noticed that many\n>> places you changed the variable declaration from c to c++ style (i.e\n>> moved the declaration in the for loop). IMHO, generally in PG, we\n>> don't follow this convention.
Is there any specific reason to do\n>> this?\n>\n> +1.\n\nAs I said, this C99 feature is already used extensively in pg sources, so \nit makes sense to use it when refactoring something and if appropriate, \nwhich IMO is the case here.\n\n> The patch does not apply on master, needs rebase.\n\nHmmm. \"git apply pgbench-buffer-1.patch\" works for me on current master.\n\n> Also, I got some whitespace errors.\n\nIt possible, but I cannot see any. Could you be more specific?\n\nMany mailers do not conform to MIME and mess-up newlines when attachements \nare typed text/*, because MIME requires the mailer to convert those to \ncrnl eol when sending and back to system eol when receiving, but few \nactually do it. Maybe the issue is really there.\n\n> I think you can also refactor the function tryExecuteStatement(), and\n> call your newly added function executeStatementExpect() by passing\n> an additional flag something like \"errorOK\".\n\nIndeed, good point.\n\n-- \nFabien.", "msg_date": "Tue, 22 Oct 2019 13:06:20 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On Tue, Oct 22, 2019 at 3:30 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Dilip,\n>\n> > - for (i = 0; i < nbranches * scale; i++)\n> > + for (int i = 0; i < nbranches * scale; i++)\n> > ...\n> > - for (i = 0; i < ntellers * scale; i++)\n> > + for (int i = 0; i < ntellers * scale; i++)\n> > {\n> >\n> > I haven't read the complete patch. But, I have noticed that many\n> > places you changed the variable declaration from c to c++ style (i.e\n> > moved the declaration in the for loop). IMHO, generally in PG, we\n> > don't follow this convention. Is there any specific reason to do\n> > this?\n>\n> There are many places where it is used now in pg (120 occurrences in\n> master, 7 in pgbench). 
I had a bug recently because of a stupidly reused\n> index variable, so I tend to use this now it is admissible, moreover here\n> I'm actually doing a refactoring patch, so it seems ok to include that.\n>\nI see. I was under impression that we don't use this style in PG.\nBut, since we are already using this style other places so no\nobjection from my side for this particular point.\nSorry for the noise.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 22 Oct 2019 16:57:58 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On Tue, Oct 22, 2019 at 4:36 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Jeevan,\n>\n> >> I haven't read the complete patch. But, I have noticed that many\n> >> places you changed the variable declaration from c to c++ style (i.e\n> >> moved the declaration in the for loop). IMHO, generally in PG, we\n> >> don't follow this convention. Is there any specific reason to do\n> >> this?\n> >\n> > +1.\n>\n> As I said, this C99 feature is already used extensively in pg sources, so\n> it makes sense to use it when refactoring something and if appropriate,\n> which IMO is the case here.\n\n\nOk, no problem.\n\n\n>\n>\n> The patch does not apply on master, needs rebase.\n>\n> Hmmm. \"git apply pgbench-buffer-1.patch\" works for me on current master.\n>\n> > Also, I got some whitespace errors.\n>\n> It possible, but I cannot see any. 
Could you be more specific?\n>\n\nFor me it failing, see below:\n\n$ git log -1\ncommit ad4b7aeb84434c958e2df76fa69b68493a889e4a\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Tue Oct 22 10:35:54 2019 +0200\n\n Make command order in test more sensible\n\n Through several updates, the CREATE USER command has been separated\n from where the user is actually used in the test.\n\n$ git apply pgbench-buffer-1.patch\npgbench-buffer-1.patch:10: trailing whitespace.\nstatic void append_fillfactor(PQExpBuffer query);\npgbench-buffer-1.patch:18: trailing whitespace.\nexecuteStatementExpect(PGconn *con, const char *sql, const ExecStatusType\nexpected)\npgbench-buffer-1.patch:19: trailing whitespace.\n{\npgbench-buffer-1.patch:20: trailing whitespace.\n PGresult *res;\npgbench-buffer-1.patch:21: trailing whitespace.\n\nerror: patch failed: src/bin/pgbench/pgbench.c:599\nerror: src/bin/pgbench/pgbench.c: patch does not apply\n\n$\n\nRegards,\nJeevan Ladhe
", "msg_date": "Tue, 22 Oct 2019 17:03:30 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "\n>> The patch does not apply on master, needs rebase.\n>>\n>> Hmmm. \"git apply pgbench-buffer-1.patch\" works for me on current master.\n>>\n>>> Also, I got some whitespace errors.\n>>\n>> It possible, but I cannot see any. Could you be more specific?\n>\n> For me it failing, see below:\n>\n> $ git log -1\n> commit ad4b7aeb84434c958e2df76fa69b68493a889e4a\n\nSame for me, but it works:\n\n Switched to a new branch 'test'\n sh> git apply ~/pgbench-buffer-2.patch\n sh> git st\n On branch test\n Changes not staged for commit: ...\n modified: src/bin/pgbench/pgbench.c\n\n sh> file ~/pgbench-buffer-2.patch\n .../pgbench-buffer-2.patch: unified diff output, ASCII text\n\n sh> sha1sum ~/pgbench-buffer-2.patch\n eab8167ef3ec5eca814c44b30e07ee5631914f07 ...\n\nI suspect that your mailer did or did not do something with the \nattachment.
Maybe try with \"patch -p1 < foo.patch\" at the root.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 22 Oct 2019 17:27:02 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "I am able to apply the v2 patch with \"patch -p1 \"\n\n-----\n\n+static void\n+executeStatementExpect(PGconn *con, const char *sql, const ExecStatusType\nexpected, bool errorOK)\n+{\n\nI think some instances like this need 80 column alignment?\n\n-----\n\nin initCreatePKeys():\n+ for (int i = 0; i < lengthof(DDLINDEXes); i++)\n+ {\n+ resetPQExpBuffer(&query);\n+ appendPQExpBufferStr(&query, DDLINDEXes[i]);\n\nI think you can simply use printfPQExpBuffer() for the first append,\nsimilar to\nwhat you have used in createPartitions(), which is a combination of both\nreset\nand append.\n\n-----\n\nThe pgbench tap tests are also running fine.\n\nRegards,\nJeevan Ladhe\n\nOn Tue, Oct 22, 2019 at 8:57 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> >> The patch does not apply on master, needs rebase.\n> >>\n> >> Hmmm. \"git apply pgbench-buffer-1.patch\" works for me on current master.\n> >>\n> >>> Also, I got some whitespace errors.\n> >>\n> >> It possible, but I cannot see any. Could you be more specific?\n> >\n> > For me it failing, see below:\n> >\n> > $ git log -1\n> > commit ad4b7aeb84434c958e2df76fa69b68493a889e4a\n>\n> Same for me, but it works:\n>\n> Switched to a new branch 'test'\n> sh> git apply ~/pgbench-buffer-2.patch\n> sh> git st\n> On branch test\n> Changes not staged for commit: ...\n> modified: src/bin/pgbench/pgbench.c\n>\n> sh> file ~/pgbench-buffer-2.patch\n> .../pgbench-buffer-2.patch: unified diff output, ASCII text\n>\n> sh> sha1sum ~/pgbench-buffer-2.patch\n> eab8167ef3ec5eca814c44b30e07ee5631914f07 ...\n>\n> I suspect that your mailer did or did not do something with the\n> attachment. 
Maybe try with \"patch -p1 < foo.patch\" at the root.\n>\n> --\n> Fabien.\n>\n\nI am able to apply the v2 patch with \"patch -p1 \"-----+static void+executeStatementExpect(PGconn *con, const char *sql, const ExecStatusType expected, bool errorOK)+{I think some instances like this need 80 column alignment?-----in initCreatePKeys():+ for (int i = 0; i < lengthof(DDLINDEXes); i++)+ {+ resetPQExpBuffer(&query);+ appendPQExpBufferStr(&query, DDLINDEXes[i]); I think you can simply use printfPQExpBuffer() for the first append, similar towhat you have used in createPartitions(), which is a combination of both resetand append.-----The pgbench tap tests are also running fine.Regards,Jeevan LadheOn Tue, Oct 22, 2019 at 8:57 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> The patch does not apply on master, needs rebase.\n>>\n>> Hmmm. \"git apply pgbench-buffer-1.patch\" works for me on current master.\n>>\n>>> Also, I got some whitespace errors.\n>>\n>> It possible, but I cannot see any. Could you be more specific?\n>\n> For me it failing, see below:\n>\n> $ git log -1\n> commit ad4b7aeb84434c958e2df76fa69b68493a889e4a\n\nSame for me, but it works:\n\n   Switched to a new branch 'test'\n   sh> git apply ~/pgbench-buffer-2.patch\n   sh> git st\n    On branch test\n    Changes not staged for commit: ...\n         modified:   src/bin/pgbench/pgbench.c\n\n   sh> file ~/pgbench-buffer-2.patch\n   .../pgbench-buffer-2.patch: unified diff output, ASCII text\n\n   sh> sha1sum ~/pgbench-buffer-2.patch\n   eab8167ef3ec5eca814c44b30e07ee5631914f07 ...\n\nI suspect that your mailer did or did not do something with the \nattachment. 
Maybe try with \"patch -p1 < foo.patch\" at the root.\n\n-- \nFabien.", "msg_date": "Wed, 23 Oct 2019 19:37:11 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Hello Jeevan,\n\n> +static void\n> +executeStatementExpect(PGconn *con, const char *sql, const ExecStatusType\n> expected, bool errorOK)\n> +{\n>\n> I think some instances like this need 80 column alignment?\n\nYep. Applying the pgindent is kind-of a pain, so I tend to do a reasonable \njob by hand and rely on the next global pgindent to fix such things. I \nshorten the line anyway.\n\n> + resetPQExpBuffer(&query);\n> + appendPQExpBufferStr(&query, DDLINDEXes[i]);\n>\n> I think you can simply use printfPQExpBuffer() for the first append, \n> similar to what you have used in createPartitions(), which is a \n> combination of both reset and append.\n\nIt could, but it would mean switching to using a format which is not very \nuseful here as it uses the simpler append*Str variant.\n\nWhile looking at it, I noticed the repeated tablespace addition just \nafterwards, so I factored it out as well in a function.\n\nAttached v3 shorten some lines and adds \"append_tablespace\".\n\n-- \nFabien.", "msg_date": "Thu, 24 Oct 2019 08:33:06 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Hi,\n\nOn 2019-10-24 08:33:06 +0200, Fabien COELHO wrote:\n> Attached v3 shorten some lines and adds \"append_tablespace\".\n\nI'd prefer not to expand the use of pqexpbuffer in more places, and\ninstead rather see this use StringInfo, now that's also available to\nfrontend programs.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Nov 2019 18:37:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init 
functions with buffers" }, { "msg_contents": "Hello Andres,\n\n>> Attached v3 shorten some lines and adds \"append_tablespace\".\n\nA v4 which just extends the patch to newly added 'G'.\n\n> I'd prefer not to expand the use of pqexpbuffer in more places, and\n> instead rather see this use StringInfo, now that's also available to\n> frontend programs.\n\nFranckly, one or the other does not matter much to me.\n\nHowever, pgbench already uses PQExpBuffer, it uses PsqlScanState which \nalso uses PQExpBuffer, and it intrinsically depends on libpq which \nprovides PQExpBuffer: ISTM that it makes sense to keep going there, unless \nPQExpBuffer support is to be dropped.\n\nSwitching all usages would involve a significant effort and having both \nPQExpBuffer and string_info used in the same file for the same purpose \nwould be confusing.\n\n-- \nFabien.", "msg_date": "Wed, 6 Nov 2019 06:48:14 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": ">>> Attached v3 shorten some lines and adds \"append_tablespace\".\n>\n> A v4 which just extends the patch to newly added 'G'.\n\nv5 is a rebase after 30a3e772.\n\n-- \nFabien.", "msg_date": "Thu, 9 Jan 2020 17:00:23 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On 11/6/19 12:48 AM, Fabien COELHO wrote:\n> \n> Hello Andres,\n> \n>>> Attached v3 shorten some lines and adds \"append_tablespace\".\n> \n> A v4 which just extends the patch to newly added 'G'.\n> \n>> I'd prefer not to expand the use of pqexpbuffer in more places, and\n>> instead rather see this use StringInfo, now that's also available to\n>> frontend programs.\n> \n> Franckly, one or the other does not matter much to me.\n\nFWIW, I agree with Andres with regard to using StringInfo.\n\nAlso, the changes to 
executeStatementExpect() and adding \nexecuteStatement() do not seem to fit in with the purpose of this patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 27 Mar 2020 12:23:26 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Hello David,\n\n>>> I'd prefer not to expand the use of pqexpbuffer in more places, and \n>>> instead rather see this use StringInfo, now that's also available to \n>>> frontend programs.\n>> \n>> Franckly, one or the other does not matter much to me.\n>\n> FWIW, I agree with Andres with regard to using StringInfo.\n\nOk. I find it strange to mix PQExpBuffer & StringInfo in the same file.\n\n> Also, the changes to executeStatementExpect() and adding executeStatement() \n> do not seem to fit in with the purpose of this patch.\n\nYep, that was in passing.\n\nAttached a v6 which uses StringInfo, and the small refactoring as a \nseparate patch.\n\n-- \nFabien.", "msg_date": "Fri, 27 Mar 2020 23:13:32 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On 3/27/20 6:13 PM, Fabien COELHO wrote:\n> \n> Hello David,\n> \n>>>> I'd prefer not to expand the use of pqexpbuffer in more places, and \n>>>> instead rather see this use StringInfo, now that's also available to \n>>>> frontend programs.\n>>>\n>>> Franckly, one or the other does not matter much to me.\n>>\n>> FWIW, I agree with Andres with regard to using StringInfo.\n> \n> Ok. I find it strange to mix PQExpBuffer & StringInfo in the same file.\n\nAgreed, but we'd rather use StringInfo going forward. 
However, I don't \nthink that puts you on the hook for updating all the PQExpBuffer references.\n\nUnless you want to...\n\n>> Also, the changes to executeStatementExpect() and adding \n>> executeStatement() do not seem to fit in with the purpose of this patch.\n> \n> Yep, that was in passing.\n> \n> Attached a v6 which uses StringInfo, and the small refactoring as a \n> separate patch.\n\nI think that's better, thanks.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 27 Mar 2020 18:26:32 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "\n>> Ok. I find it strange to mix PQExpBuffer & StringInfo in the same file.\n>\n> Agreed, but we'd rather use StringInfo going forward. However, I don't think \n> that puts you on the hook for updating all the PQExpBuffer references.\n>\n> Unless you want to...\n\nI cannot say that I \"want\" to fix something which already works the same \nway, because it is against my coding principles.\n\nHowever there may be some fun in writing a little script to replace one \nwith the other automatically. I counted nearly 3500 calls under src/bin.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 27 Mar 2020 23:59:24 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>> Ok. I find it strange to mix PQExpBuffer & StringInfo in the same file.\n\n>> Agreed, but we'd rather use StringInfo going forward. 
However, I don't think \n>> that puts you on the hook for updating all the PQExpBuffer references.\n>> Unless you want to...\n\n> I cannot say that I \"want\" to fix something which already works the same \n> way, because it is against my coding principles.\n> However there may be some fun in writing a little script to replace one \n> with the other automatically. I counted nearly 3500 calls under src/bin.\n\nYeah, that's the problem. If someone does come forward with a patch to do\nthat, I think it'd be summarily rejected, at least in high-traffic code\nlike pg_dump. The pain it'd cause for back-patching would outweigh the\nvalue.\n\nThat being the case, I'd think a better design principle is \"make your\nnew code look like the code around it\", which would tend to weigh against\nintroducing StringInfo uses into pgbench when there's none there now and\na bunch of PQExpBuffer instead. So I can't help thinking the advice\nyou're being given here is suspect.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 19:57:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On 2020-Mar-27, Tom Lane wrote:\n\n> That being the case, I'd think a better design principle is \"make your\n> new code look like the code around it\", which would tend to weigh against\n> introducing StringInfo uses into pgbench when there's none there now and\n> a bunch of PQExpBuffer instead. 
So I can't help thinking the advice\n> you're being given here is suspect.\n\n+1 for keeping it PQExpBuffer-only, until such a time when you need a\nStringInfo feature that's not in PQExpBuffer -- and even at that point,\nI think you'd switch just that one thing to StringInfo, not the whole\nprogram.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 27 Mar 2020 22:52:26 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "\nHello Tom,\n\n>> I cannot say that I \"want\" to fix something which already works the same\n>> way, because it is against my coding principles. [...]\n>> I counted nearly 3500 calls under src/bin.\n>\n> Yeah, that's the problem. If someone does come forward with a patch to do\n> that, I think it'd be summarily rejected, at least in high-traffic code\n> like pg_dump. The pain it'd cause for back-patching would outweigh the\n> value.\n\nWhat about \"typedef StringInfoData PQExpBufferData\" and replacing \nPQExpBuffer by StringInfo internally, just keeping the old interface \naround because it is there?
That would remove a few hundreds clocs.\n\nISTM that with inline and varargs macro the substition can be managed \nreasonably lightly, depending on what level of compatibility is required \nfor libpq: should it be linkability, or requiring a recompilation is ok?\n\nA clear benefit is that there are quite a few utils for PQExpBuffer in \n\"fe_utils/string_utils.c\" which would become available for StringInfo, \nwhich would help using StringInfo without duplicating them.\n\n> That being the case, I'd think a better design principle is \"make your\n> new code look like the code around it\",\n\nYep.\n\n> which would tend to weigh against introducing StringInfo uses into \n> pgbench when there's none there now and a bunch of PQExpBuffer instead.\n> So I can't help thinking the advice you're being given here is suspect.\n\nWell, that is what I was saying, but at 2 against 1, I fold.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 28 Mar 2020 10:46:02 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On 3/27/20 9:52 PM, Alvaro Herrera wrote:\n> On 2020-Mar-27, Tom Lane wrote:\n> \n>> That being the case, I'd think a better design principle is \"make your\n>> new code look like the code around it\", which would tend to weigh against\n>> introducing StringInfo uses into pgbench when there's none there now and\n>> a bunch of PQExpBuffer instead. So I can't help thinking the advice\n>> you're being given here is suspect.\n> \n> +1 for keeping it PQExpBuffer-only, until such a time when you need a\n> StringInfo feature that's not in PQExpBuffer -- and even at that point,\n> I think you'd switch just that one thing to StringInfo, not the whole\n> program.\n\nI think I need to be careful what I joke about. 
It wasn't my intention \nto advocate changing all the existing *PQExpBuffer() calls in bin.\n\nBut, the only prior committer to look at this patch expressed a \npreference for StringInfo so in the absence of any other input I thought \nit might move the patch forward if I reinforced that. Now it seems the \nconsensus has moved in favor of *PQExpBuffer().\n\nFabien has provided a patch in each flavor, so I guess the question is: \nis it committable either way?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sat, 28 Mar 2020 10:36:58 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Hi,\n\nOn 2020-03-27 19:57:12 -0400, Tom Lane wrote:\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> >>> Ok. I find it strange to mix PQExpBuffer & StringInfo in the same file.\n> \n> >> Agreed, but we'd rather use StringInfo going forward. However, I don't think \n> >> that puts you on the hook for updating all the PQExpBuffer references.\n> >> Unless you want to...\n> \n> > I cannot say that I \"want\" to fix something which already works the same \n> > way, because it is against my coding principles.\n> > However there may be some fun in writing a little script to replace one \n> > with the other automatically. I counted nearly 3500 calls under src/bin.\n> \n> Yeah, that's the problem. If someone does come forward with a patch to do\n> that, I think it'd be summarily rejected, at least in high-traffic code\n> like pg_dump. The pain it'd cause for back-patching would outweigh the\n> value.\n\nSure, but that's not at all what was proposed.\n\n\n> That being the case, I'd think a better design principle is \"make your\n> new code look like the code around it\", which would tend to weigh against\n> introducing StringInfo uses into pgbench when there's none there now and\n> a bunch of PQExpBuffer instead. 
So I can't help thinking the advice\n> you're being given here is suspect.\n\nI don't agree with this. This is a \"fresh\" usage of StringInfo. That's\ndifferent to adding one new printed line among others built with\npqexpbuffer. If we continue adding large numbers of new uses of both\npieces of infrastructure, we're just making things more confusing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 28 Mar 2020 11:34:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-27 19:57:12 -0400, Tom Lane wrote:\n>> That being the case, I'd think a better design principle is \"make your\n>> new code look like the code around it\", which would tend to weigh against\n>> introducing StringInfo uses into pgbench when there's none there now and\n>> a bunch of PQExpBuffer instead. So I can't help thinking the advice\n>> you're being given here is suspect.\n\n> I don't agree with this. This is a \"fresh\" usage of StringInfo. That's\n> different to adding one new printed line among others built with\n> pqexpbuffer. If we continue adding large numbers of new uses of both\n> pieces of infrastructure, we're just making things more confusing.\n\nWhy? I'm not aware of any intention to deprecate/remove PQExpBuffer,\nand I doubt it'd be a good thing to try. 
It does some things that\nStringInfo won't, notably cope with OOM without crashing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Mar 2020 14:49:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On 2020-03-28 14:49:31 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-03-27 19:57:12 -0400, Tom Lane wrote:\n> >> That being the case, I'd think a better design principle is \"make your\n> >> new code look like the code around it\", which would tend to weigh against\n> >> introducing StringInfo uses into pgbench when there's none there now and\n> >> a bunch of PQExpBuffer instead. So I can't help thinking the advice\n> >> you're being given here is suspect.\n> \n> > I don't agree with this. This is a \"fresh\" usage of StringInfo. That's\n> > different to adding one new printed line among others built with\n> > pqexpbuffer. If we continue adding large numbers of new uses of both\n> > pieces of infrastructure, we're just making things more confusing.\n> \n> Why? I'm not aware of any intention to deprecate/remove PQExpBuffer,\n> and I doubt it'd be a good thing to try. 
It does some things that\n> StringInfo won't, notably cope with OOM without crashing.\n\n- code using it cannot easily be shared between frontend/backend (no\n memory context integration etc)\n- most code does *not* want to deal with the potential for OOM without\n erroring out\n- it's naming is even more confusing than StringInfo\n- it introduces dependencies to libpq even when not needed\n- both stringinfo and pqexpbuffer are performance relevant in some uses,\n needing to optimize both is wasted effort\n- we shouldn't expose everyone to both APIs except where needed - it's\n stuff one has to learn\n\n\n", "msg_date": "Sat, 28 Mar 2020 12:04:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-28 14:49:31 -0400, Tom Lane wrote:\n>> Why? I'm not aware of any intention to deprecate/remove PQExpBuffer,\n>> and I doubt it'd be a good thing to try. It does some things that\n>> StringInfo won't, notably cope with OOM without crashing.\n\n> - code using it cannot easily be shared between frontend/backend (no\n> memory context integration etc)\n\nTrue, but also pretty irrelevant for pgbench and similar code.\n\n> - most code does *not* want to deal with the potential for OOM without\n> erroring out\n\nFair point.\n\n> - it's naming is even more confusing than StringInfo\n\nEye of the beholder ...\n\n> - it introduces dependencies to libpq even when not needed\n\nMost of our FE programs do include libpq, and pgbench certainly does,\nso this seems like a pretty irrelevant objection as well.\n\n> - both stringinfo and pqexpbuffer are performance relevant in some uses,\n> needing to optimize both is wasted effort\n\nI'm not aware that anybody is trying to micro-optimize either. 
Even\nif someone is, it doesn't imply that they need to change both.\n\n> - we shouldn't expose everyone to both APIs except where needed - it's\n> stuff one has to learn\n\nThat situation is unlikely to change in the foreseeable future.\nMoreover, using both APIs in one program, where we were not before,\nmakes it worse not better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Mar 2020 15:16:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On 2020-03-28 15:16:21 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > - both stringinfo and pqexpbuffer are performance relevant in some uses,\n> > needing to optimize both is wasted effort\n> \n> I'm not aware that anybody is trying to micro-optimize either.\n\nhttps://postgr.es/m/5450.1578797036%40sss.pgh.pa.us\n\n\n", "msg_date": "Sat, 28 Mar 2020 12:34:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "Hello Andres,\n\n>> That being the case, I'd think a better design principle is \"make your\n>> new code look like the code around it\", which would tend to weigh against\n>> introducing StringInfo uses into pgbench when there's none there now and\n>> a bunch of PQExpBuffer instead. So I can't help thinking the advice\n>> you're being given here is suspect.\n>\n> I don't agree with this. This is a \"fresh\" usage of StringInfo. That's\n> different to adding one new printed line among others built with\n> pqexpbuffer. 
If we continue adding large numbers of new uses of both\n> pieces of infrastructure, we're just making things more confusing.\n\nMy 0.02€ :\n\n - I'm in favor of having one tool for one purpose, so a fe/be common\nStringInfo interface is fine with me;\n\n - I prefer to avoid using both PQExpBuffer & StringInfo in the same file, \nbecause they do the exact same thing and it is locally confusing;\n\n - I'd be fine with switching all of pgbench to StringInfo, as there are \nonly 31 uses;\n\n - But, pgbench relies on psql scanner, which uses PQExpBuffer in \nPsqlScanState, so mixing is unavoidable, unless PQExpBuffer & StringInfo\nare the same thing (i.e. typedef + cpp/inline/function wrappers);\n\n - There are 1260 uses of PQExpBuffer in psql that, although they are \ntrivial, I'm in no hurry to update.\n\n-- \nFabien.", "msg_date": "Sun, 29 Mar 2020 07:44:31 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "> in favor of *PQExpBuffer().\n\nAttached v7 is rebased v5 which uses PQExpBuffer, per cfbot.\n\n-- \nFabien.", "msg_date": "Thu, 9 Jul 2020 09:05:27 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "On 09/07/2020 10:05, Fabien COELHO wrote:\n>> in favor of *PQExpBuffer().\n> \n> Attached v7 is rebased v5 which uses PQExpBuffer, per cfbot.\n\nThanks! I pushed this with small changes:\n\n- I left out the changes to executeStatement(). I'm not quite convinced \nit's a good idea or worth it, and it's unrelated to the main part of \nthis patch, so let's handle that separately.\n\n- I also left out changes to use the C99-style \"for (int i = 0; ...)\" \nconstruct. 
I think that's a good change for readability, but again \nunrelated to this and hardly worth changing existing code for.\n\n- I inlined the append_tablespace() function back to the callers. And I \ndid the same to the append_fillfactor() function, too. It seems more \nreadable to just call appendPQExpBuffer() directly, than encapsulate the \nsingle appendPQExpBuffer() call in a helper function.\n\n> @@ -3880,15 +3868,16 @@ initGenerateDataClientSide(PGconn *con)\n> \n> \tINSTR_TIME_SET_CURRENT(start);\n> \n> +\t/* printf overheads should probably be avoided... */\n> \tfor (k = 0; k < (int64) naccounts * scale; k++)\n> \t{\n> \t\tint64\t\tj = k + 1;\n> \n> \t\t/* \"filler\" column defaults to blank padded empty string */\n> -\t\tsnprintf(sql, sizeof(sql),\n> -\t\t\t\t INT64_FORMAT \"\\t\" INT64_FORMAT \"\\t%d\\t\\n\",\n> -\t\t\t\t j, k / naccounts + 1, 0);\n> -\t\tif (PQputline(con, sql))\n> +\t\tprintfPQExpBuffer(&sql,\n> +\t\t\t\t\t\t INT64_FORMAT \"\\t\" INT64_FORMAT \"\\t%d\\t\\n\",\n> +\t\t\t\t\t\t j, k / naccounts + 1, 0);\n> +\t\tif (PQputline(con, sql.data))\n> \t\t{\n> \t\t\tpg_log_fatal(\"PQputline failed\");\n> \t\t\texit(1);\n\nCan you elaborate what you meant by the new \"print overheads should \nprobably be avoided\" comment? I left that out since it seems unrelated \nto switching to PQExpBuffer.\n\n- Heikki\n\n\n", "msg_date": "Wed, 30 Sep 2020 10:59:30 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: pgbench - refactor init functions with buffers" }, { "msg_contents": "\n> Can you elaborate what you meant by the new \"print overheads should probably \n> be avoided\" comment?\n\nBecause printf is slow and this is on the critical path of data \ngeneration. 
Printf has to interpret the format each time just to print \nthree ints; specialized functions could be used, which would allow skipping \nthe repeated format parsing.\n\n> I left that out since it seems unrelated to switching to PQExpBuffer.\n\nYep.\n\nThanks for the commit. Getting rid of most snprintf is a relief.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 2 Oct 2020 10:55:40 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench - refactor init functions with buffers" } ]
[ { "msg_contents": "Hi,\n\nWhen I run pg_basebackup in v12 against v11, standby server fails to connect to\nprimary with the following error:\n\n2019-10-22 09:28:23.673 UTC [2375] FATAL: could not connect to the primary\nserver: invalid connection option \"gssencmode\"\n\nWhen I remove this from recovery.conf, it works fine. Looks like a bug to me\n(we need to preserve backward compatibility). Comments?\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Tue, 22 Oct 2019 12:32:53 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "v12 pg_basebackup fails against older servers (take two)" }, { "msg_contents": "On Tue, Oct 22, 2019 at 12:32:53PM +0300, Devrim Gündüz wrote:\n> When I run pg_basebackup in v12 against v11, standby server fails to connect to\n> primary with the following error:\n> \n> 2019-10-22 09:28:23.673 UTC [2375] FATAL: could not connect to the primary\n> server: invalid connection option \"gssencmode\"\n> \n> When I remove this from recovery.conf, it works fine. Looks like a bug to me\n> (we need to preserve backward compatibility). Comments?\n\nYou are referring to the connection string generated in\nprimary_conninfo here, right? It would be nice to be more compatible\nhere. This can be simply fixed by having an extra filter in\nGenerateRecoveryConfig() (different file between\nHEAD and\nREL_12_STABLE). I also think that there is more. 
On HEAD,\nchannel_binding gets added to the connection string generated which\nwould equally cause a failure with pg_basebackup from HEAD used for a\nv12 or older server.\n--\nMichael", "msg_date": "Tue, 22 Oct 2019 19:16:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12 pg_basebackup fails against older servers (take two)" }, { "msg_contents": "Hi,\n\nOn Tue, 2019-10-22 at 19:16 +0900, Michael Paquier wrote:\n> You are referring to the connection string generated in\n> primary_conninfo here, right?\n\nRight.\n\n> It would be nice to be more compatible here. This can be simply fixed by\n> having an extra filter in GenerateRecoveryConfig() (different file between\n> HEAD and REL_12_STABLE). I also think that there is more. On HEAD,\n> channel_binding gets added to the connection string generated which\n> would equally cause a failure with pg_basebackup from HEAD used for a\n> v12 or older server.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=beeb8e2e0717065296dc7b32daba2d66f0f931dd\n\nhad a similar approach in backwards compatibility, so I also agree on fixing\nwhatever breaks it.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Tue, 22 Oct 2019 13:37:18 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "Re: v12 pg_basebackup fails against older servers (take two)" }, { "msg_contents": "Greetings,\n\n* Devrim Gündüz (devrim@gunduz.org) wrote:\n> On Tue, 2019-10-22 at 19:16 +0900, Michael Paquier wrote:\n> > You are referring to the connection string generated in\n> > primary_conninfo here, right?\n> \n> Right.\n\nI'm awful suspicious that there's other similar cases beyond this\nparticular one...\n\n> > It would be nice to be more compatible here. 
This can be simply fixed by\n> > having an extra filter in GenerateRecoveryConfig() (different file between\n> > HEAD and REL_12_STABLE). I also think that there is more. On HEAD,\n> > channel_binding gets added to the connection string generated which\n> > would equally cause a failure with pg_basebackup from HEAD used for a\n> > v12 or older server.\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=beeb8e2e0717065296dc7b32daba2d66f0f931dd\n> \n> had a similar approach in backwards compatibility, so I also agree on fixing\n> whatever breaks it.\n\nYeah, we clearly do want newer versions of pg_basebackup to work with\nolder versions of PG and therefore we should address this.\n\nHere's just a quick rough-up of a patch (it compiles, I haven't tried it\nout more than that) that adds in a check to skip gssencmode on older\nversions. If it seems like a reasonable approach then I can test it out\nand deal with back-patching it and such.\n\nThoughts?\n\nThanks,\n\nStephen", "msg_date": "Tue, 22 Oct 2019 09:06:03 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 pg_basebackup fails against older servers (take two)" }, { "msg_contents": "On Tue, Oct 22, 2019 at 09:06:03AM -0400, Stephen Frost wrote:\n> Here's just a quick rough-up of a patch (it compiles, I haven't tried it\n> out more than that) that adds in a check to skip gssencmode on older\n> versions. If it seems like a reasonable approach then I can test it out\n> and deal with back-patching it and such.\n> \n> Thoughts?\n\nHere is a thought. We could tackle the problem at its source and\ntrack in internalPQconninfoOption the minimum version supported by a\nparameter. 
This way, we could make sure that libpq routines similar\nto PQconninfo() never return an option which is not compatible with a\nlive connection, and we won't forget that if the problem shows up\nagain because creating a new parameter would require to add a new\nversion number. There is an advantage here: internalPQconninfoOption\nis an internal structure, so this should be back-patchable.\n--\nMichael", "msg_date": "Tue, 22 Oct 2019 22:35:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12 pg_basebackup fails against older servers (take two)" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Tue, Oct 22, 2019 at 09:06:03AM -0400, Stephen Frost wrote:\n> > Here's just a quick rough-up of a patch (it compiles, I haven't tried it\n> > out more than that) that adds in a check to skip gssencmode on older\n> > versions. If it seems like a reasonable approach then I can test it out\n> > and deal with back-patching it and such.\n> \n> Here is a thought. We could tackle the problem at its source and\n> track in internalPQconninfoOption the minimum version supported by a\n> parameter. This way, we could make sure that libpq routines similar\n> to PQconninfo() never return an option which is not compatible with a\n> live connection, and we won't forget that if the problem shows up\n> again because creating a new parameter would require to add a new\n> version number. There is an advantage here: internalPQconninfoOption\n> is an internal structure, so this should be back-patchable.
Would it make sene to have a\nminimum and a maximum (and a \"currently live\" or some such indicator, so\nwe aren't changing the max every release)?\n\nThe other thought I had was if we should, perhaps, be skipping settings\nwhose values haven't been changed from the default value. Currently, we\nend up with a bunch of stuff that, in my experience anyway, just ends up\nbeing confusing to people, without any particular benefit, like\n'sslcompression=0' when SSL wasn't used, or 'krbsrvname=postgres' when\nKerberos/GSSAPI wasn't used...\n\nThanks,\n\nStephen", "msg_date": "Tue, 22 Oct 2019 09:53:45 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 pg_basebackup fails against older servers (take two)" }, { "msg_contents": "On Tue, Oct 22, 2019 at 09:53:45AM -0400, Stephen Frost wrote:\n> Yeah.. Something along those lines definitely seems like it'd be better\n> as that would force anyone adding new options to explicitly say which\n> server version the option makes sense for. Would it make sense to have a\n> minimum and a maximum (and a \"currently live\" or some such indicator, so\n> we aren't changing the max every release)?\n\nYeah. A maximum may help to handle properly the cycling of deprecated\noptions in connstrs, so I see your point. Not sure that this\n\"currently-live\" indicator is something to care about if we know\nalready the range of versions supported by a parameter and the\nversion of the backend for a live connection. My take is that it\nwould be more consistent to have a PG_MAJORVERSION_NUM for this\npurpose in pg_config.h as well (I honestly don't like much the\nexisting tweaks for the major version numbers like \"PG_VERSION_NUM / \n100\" in pg_basebackup.c & co for example). If we were to have a\nmaximum, couldn't there also be issues when it comes to link a binary\nwith a version of libpq which has been compiled with a version of\nPostgres older than the version of the binary? 
For example, imagine a\nversion of libpq compiled with v11, used to link to a pg_basebackup\nfrom v12.. (@_@)\n\n> The other thought I had was if we should, perhaps, be skipping settings\n> whose values haven't been changed from the default value. Currently, we\n> end up with a bunch of stuff that, in my experience anyway, just ends up\n> being confusing to people, without any particular benefit, like\n> 'sslcompression=0' when SSL wasn't used, or 'krbsrvname=postgres' when\n> Kerberos/GSSAPI wasn't used...\n\nCouldn't this become a problem if we were to change the default for\nsome parameters? There has been a lot of talks for example about how\nbad sslmode's default it for one, even if nobody has actually pulled\nthe trigger to change it.\n--\nMichael", "msg_date": "Wed, 23 Oct 2019 15:37:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: v12 pg_basebackup fails against older servers (take two)" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Tue, Oct 22, 2019 at 09:53:45AM -0400, Stephen Frost wrote:\n> > Yeah.. Something along those lines definitely seems like it'd be better\n> > as that would force anyone adding new options to explicitly say which\n> > server version the option makes sense for. Would it make sense to have a\n> > minimum and a maximum (and a \"currently live\" or some such indicator, so\n> > we aren't changing the max every release)?\n> \n> Yeah. A maximum may help to handle properly the cycling of deprecated\n> options in connstrs, so I see your point. Not sure that this\n> \"currently-live\" indicator is something to care about if we know\n> already the range of versions supported by a parameter and the\n> version of the backend for a live connection. 
My take is that it\n> would be more consistent to have a PG_MAJORVERSION_NUM for this\n> purpose in pg_config.h as well (I honestly don't like much the\n> existing tweaks for the major version numbers like \"PG_VERSION_NUM / \n> 100\" in pg_basebackup.c & co for example). If we were to have a\n> maximum, couldn't there also be issues when it comes to link a binary\n> with a version of libpq which has been compiled with a version of\n> Postgres older than the version of the binary? For example, imagine a\n> version of libpq compiled with v11, used to link to a pg_basebackup\n> from v12.. (@_@)\n\nErm, your last concern is exactly why I was saying we'd have a\n'currently live' indicator- so that it wouldn't be an issue to have an\nolder library connecting from a new application to a newer database.\n\n> > The other thought I had was if we should, perhaps, be skipping settings\n> > whose values haven't been changed from the default value. Currently, we\n> > end up with a bunch of stuff that, in my experience anyway, just ends up\n> > being confusing to people, without any particular benefit, like\n> > 'sslcompression=0' when SSL wasn't used, or 'krbsrvname=postgres' when\n> > Kerberos/GSSAPI wasn't used...\n> \n> Couldn't this become a problem if we were to change the default for\n> some parameters? There has been a lot of talks for example about how\n> bad sslmode's default it for one, even if nobody has actually pulled\n> the trigger to change it.\n\nThat really depends on if we think that users will expect the\nnew-default behavior to be used, or the old-default to be. If the user\ndidn't set anything explicitly when they ran the command in the first\nplace, then it would seem like they intended and expected the defaults\nto be used. 
Perhaps that's an even better answer- just only put into\nthe recovery.conf file what the user actually set instead of a bunch of\nother stuff...\n\nThanks,\n\nStephen", "msg_date": "Wed, 23 Oct 2019 10:07:41 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: v12 pg_basebackup fails against older servers (take two)" } ]
[ { "msg_contents": "Hello,\n\nI originally reported this in pgsql-bugs [0], but there wasn't much\nfeedback there, so moving the discussion here. When using JSON, YAML, or\nXML-format EXPLAIN on a plan that uses a parallelized sort, the Sort nodes\nlist two different entries for \"Workers\", one for the sort-related info,\nand one for general worker info. This is what this looks like in JSON (some\ndetails elided):\n\n{\n \"Node Type\": \"Sort\",\n ...\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Sort Method\": \"external merge\",\n \"Sort Space Used\": 20128,\n \"Sort Space Type\": \"Disk\"\n },\n {\n \"Worker Number\": 1,\n \"Sort Method\": \"external merge\",\n \"Sort Space Used\": 20128,\n \"Sort Space Type\": \"Disk\"\n }\n ],\n ...\n \"Workers\": [\n {\n \"Worker Number\": 0,\n \"Actual Startup Time\": 309.726,\n \"Actual Total Time\": 310.179,\n \"Actual Rows\": 4128,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 2872,\n \"Shared Read Blocks\": 7584,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 490,\n \"Temp Written Blocks\": 2529\n },\n {\n \"Worker Number\": 1,\n \"Actual Startup Time\": 306.523,\n \"Actual Total Time\": 307.001,\n \"Actual Rows\": 4128,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 3356,\n \"Shared Read Blocks\": 7100,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 490,\n \"Temp Written Blocks\": 2529\n }\n ],\n \"Plans:\" ...\n}\n\nThis is technically valid JSON, but it's extremely difficult to work with,\nsince default JSON parsing in Ruby, node, Python, Go, and even Postgres'\nown jsonb only keep the latest key--the sort information is discarded\n(other languages probably don't fare much better; this is what I had 
on\nhand). As Tom Lane pointed out in my pgsql-bugs thread, this has been\nreported before [1] and in that earlier thread, Andrew Dunstan suggested\nthat perhaps the simplest solution is to just rename the sort-related\nWorkers node. Tom expressed some concerns about a breaking change here,\nthough I think the current behavior means vanishingly few users are parsing\nthis data correctly. Thoughts?\n\nThanks,\nMaciek\n\n[0]:\nhttps://www.postgresql.org/message-id/CADXhmgSr807j2Pc9aUjW2JOzOBe3FeYnQBe_f9U%2B-Mm4b1HRUw%40mail.gmail.com\n[1]:\nhttps://www.postgresql.org/message-id/flat/41ee53a5-a36e-cc8f-1bee-63f6565bb1ee@dalibo.com
", "msg_date": "Tue, 22 Oct 2019 11:58:35 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Hi,\n\nOn 2019-10-22 11:58:35 -0700, Maciek Sakrejda wrote:\n> I originally reported this in pgsql-bugs [0], but there wasn't much\n> feedback there, so moving the discussion here. When using JSON, YAML, or\n> XML-format EXPLAIN on a plan that uses a parallelized sort, the Sort nodes\n> list two different entries for \"Workers\", one for the sort-related info,\n> and one for general worker info. This is what this looks like in JSON (some\n> details elided):\n>\n> {\n> \"Node Type\": \"Sort\",\n> ...\n> \"Workers\": [\n> {\n> \"Worker Number\": 0,\n> \"Sort Method\": \"external merge\",\n> \"Sort Space Used\": 20128,\n> \"Sort Space Type\": \"Disk\"\n> },\n> {\n> \"Worker Number\": 1,\n> \"Sort Method\": \"external merge\",\n> \"Sort Space Used\": 20128,\n> \"Sort Space Type\": \"Disk\"\n> }\n> ],\n> ...\n> \"Workers\": [\n> {\n\n> This is technically valid JSON, but it's extremely difficult to work with,\n> since default JSON parsing in Ruby, node, Python, Go, and even Postgres'\n> own jsonb only keep the latest key\n\nIt's also quite confusing.\n\n\n> As Tom Lane pointed out in my pgsql-bugs thread, this has been\n> reported before [1] and in that earlier thread, Andrew Dunstan suggested\n> that perhaps the simplest solution is to just rename the sort-related\n> Workers node. Thoughts?\n\nYea, I think we should fix this. 
The current output basically makes no\nsense.\n\n\n> Tom expressed some concerns about a breaking change here,\n> though I think the current behavior means vanishingly few users are parsing\n> this data correctly.\n\nWell, in a lot of the cases there's no parallel output for the sort, and\nin other cases BUFFERS is not specified. In either case the 'duplicate\nkey' problem won't exist then.\n\n\nWhile Tom said:\n\nOn 2019-10-16 09:16:56 +0200, Tom Lane wrote:\n> I think the text-mode output is intentional, but the other formats\n> need more work.\n\n Sort Method: external merge Disk: 4920kB\n Worker 0: Sort Method: external merge Disk: 5880kB\n Worker 1: Sort Method: external merge Disk: 5920kB\n Buffers: shared hit=682 read=10188, temp read=1415 written=2101\n Worker 0: actual time=130.058..130.324 rows=1324 loops=1\n Buffers: shared hit=337 read=3489, temp read=505 written=739\n Worker 1: actual time=130.273..130.512 rows=1297 loops=1\n Buffers: shared hit=345 read=3507, temp read=505 written=744\n\nI don't think this is close to being good enough to be worth\npreserving. I think it's worth avoiding unnecessary breakage of explain\noutput, but we also shouldn't endlessly carry forward confusing output,\njust because of that.\n\nIt clearly seems like it'd be better if this instead were\n\n Sort Method: external merge Disk: 4920kB\n Buffers: shared hit=682 read=10188, temp read=1415 written=2101\n Worker 0: actual time=130.058..130.324 rows=1324 loops=1\n Sort Method: external merge Disk: 5880kB\n Buffers: shared hit=337 read=3489, temp read=505 written=739\n Worker 1: actual time=130.273..130.512 rows=1297 loops=1\n Buffers: shared hit=345 read=3507, temp read=505 written=744\n Sort Method: external merge Disk: 5920kB\n\nI think the way this information was added in bf11e7ee2e36 and\n33001fd7a707, contrasting to the output added in b287df70e408, is just\nnot right. 
If we add similar instrumentation reporting to more nodes,\nwe'll end up with duplicated information all over. Additionally the\nper-worker part of show_sort_info() basically just duplicated the rest\nof the function. I then also did something similar (although luckily\nwith a different key...), with the ExplainPrintJIT() call for Gather\nnodes.\n\nUnfortunately I think the fix isn't all that trivial, due to the way we\noutput the per-worker information at the end of ExplainNode(), by just\ndumping things into a string. It seems to me that a step in the right\ndirection would be for ExplainNode() to create\nplanstate->worker_instrument StringInfos, which can be handed to\nroutines like show_sort_info(), which would print the per-node\ninformation into that, rather than directly dumping into\nes->output. Most of the current \"Show worker detail\" would be done\nearlier in ExplainNode(), at the place where we current display the\n\"actual rows\" bit.\n\nISTM that should include removing the duplication fo the the contents of\nshow_sort_info(), and probably also for the Gather, GatherMerge blocks\n(I've apparently skipped adding the JIT information to the latter, not\nsure if we ought to fix that in the stable branches).\n\nAny chance you want to take a stab at that?\n\nI don't think we'll fix it soon, but damn, all this string appending\naround just isn't a great way to reliably build nested data formats.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 18:48:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "On Thu, Oct 24, 2019 at 6:48 PM Andres Freund <andres@anarazel.de> wrote:\n> Unfortunately I think the fix isn't all that trivial, due to the way we\n> output the per-worker information at the end of ExplainNode(), by just\n> dumping things into a string. 
It seems to me that a step in the right\n> direction would be for ExplainNode() to create\n> planstate->worker_instrument StringInfos, which can be handed to\n> routines like show_sort_info(), which would print the per-node\n> information into that, rather than directly dumping into\n> es->output. Most of the current \"Show worker detail\" would be done\n> earlier in ExplainNode(), at the place where we current display the\n> \"actual rows\" bit.\n>\n> ISTM that should include removing the duplication fo the the contents of\n> show_sort_info(), and probably also for the Gather, GatherMerge blocks\n> (I've apparently skipped adding the JIT information to the latter, not\n> sure if we ought to fix that in the stable branches).\n>\n> Any chance you want to take a stab at that?\n\nIt took me a while, but I did take a stab at it (thanks for your\noff-list help). Attached is my patch that changes the structured\nformats to merge sort worker output in with costs/timing/buffers\nworker output. I have not touched any other worker output yet, since\nit's not under a Workers group as far as I can tell (so it does not\nexhibit the problem I originally reported). I can try to make further\nchanges here if the approach is deemed sound. 
I also have not touched\ntext output; above you had proposed\n\n> Sort Method: external merge Disk: 4920kB\n> Buffers: shared hit=682 read=10188, temp read=1415 written=2101\n> Worker 0: actual time=130.058..130.324 rows=1324 loops=1\n> Sort Method: external merge Disk: 5880kB\n> Buffers: shared hit=337 read=3489, temp read=505 written=739\n> Worker 1: actual time=130.273..130.512 rows=1297 loops=1\n> Buffers: shared hit=345 read=3507, temp read=505 written=744\n> Sort Method: external merge Disk: 5920kB\n\nwhich makes sense to me, but I'd like to confirm this is the approach\nwe want before I add it to the patch.\n\nThis is my first C in close to a decade (and I was never much of a C\nprogrammer to begin with), so please be gentle.\n\nAs Andres suggested off-list, I also changed the worker output to\norder fields that also occur in the parent node in the same way as the\nparent node.\n\nI've also added a test for the patch, and because this is really an\nEXPLAIN issue rather than a query feature issue, I added a\nsrc/test/regress/sql/explain.sql for the test. I added a couple of\nutility functions for munging json-formatted EXPLAIN plans into\nsomething we can repeatably verify in regression tests (the functions\nuse json rather than jsonb to preserve field order). I have not added\nthis for YAML or XML (even though they should behave the same way),\nsince I'm not familiar with the the functions to manipulate those data\ntypes in a similar way (if they exist). My hunch is due to the\nsimilarity of structured formats, just testing JSON is enough, but I\ncan expand/adjust tests as necessary.", "msg_date": "Mon, 18 Nov 2019 15:39:33 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "I wanted to follow up on this patch since I received no feedback. 
What\nshould my next steps be (besides rebasing, though I want to confirm there's\ninterest before I do that)?", "msg_date": "Thu, 26 Dec 2019 15:31:16 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "On Fri, Dec 27, 2019 at 12:31 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> I wanted to follow up on this patch since I received no feedback. What should my next steps be (besides rebasing, though I want to confirm there's interest before I do that)?\n\nGiven Andres' answer I'd say that there's interest in this patch. You\nshould register this patch in the next commitfest\n(https://commitfest.postgresql.org/26/) to make sure that it's not\nforgotten, which unfortunately is probably what happened here .\n\n\n", "msg_date": "Fri, 27 Dec 2019 07:29:48 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Done! Thanks!", "msg_date": "Fri, 27 Dec 2019 17:11:38 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThis is a high level review only. 
However, seeing that there is a conflict and the patch does not merge cleanly after commit <b925a00f4ef>, I return it to the author.\r\n\r\nTo be fair, the resolution seems quite straightforward, and I took the liberty of applying the necessary changes to include the new element of ExplainState introduced in the above commit, namely hide_workers. However, since the author might have a different idea on how to incorporate this change, I leave it up to him.\r\n\r\nAnother very high level comment is the introduction of a new test file, namely explain. Seeing `explain.sql` in the test suite, I would personally have expected the whole spectrum of the EXPLAIN properties to be tested, though this is very much opinion based. However, only a slight fraction of the functionality is tested. Since this is a bit more of a personal opinion, I don't expect any changes unless the author happens to agree.\r\n\r\nOther than these minor nitpicks, the code seems clear and concise, and does what it says on the box. It follows the comments in the discussion thread(s) and solves a real issue.\r\n\r\nPlease have a look at how commit <b925a00f4ef> affects this patch and rebase.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 14 Jan 2020 14:44:51 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Thanks for the review! I looked at <b925a00f4ef> and rebased the patch\non current master, ac5bdf6.\n\nI introduced a new test file because this bug is specifically about\nEXPLAIN output (as opposed to query execution or planning\nfunctionality), and it didn't seem like a test would fit in any of the\nother files. I focused on testing just the behavior around this\nspecific bug (and fix). 
I think eventually we should probably test\nother more fundamental EXPLAIN features (and I'm happy to contribute\nto that) in that file, but that seems outside of the scope of this\npatch.\n\nAny thoughts on what we should do with text mode output (which is\nuntouched right now)? The output Andres proposed above makes sense to\nme, but I'd like to get more input.", "msg_date": "Tue, 14 Jan 2020 23:22:04 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThe current version of the patch (v2) applies cleanly and does what it says on the box.\r\n\r\n> Any thoughts on what we should do with text mode output (which is\r\nuntouched right now)? The output Andres proposed above makes sense to\r\nme, but I'd like to get more input.\r\n\r\nNothing to add beyond what is stated in the thread.\r\n\r\n> Sort Method: external merge Disk: 4920kB\r\n> Buffers: shared hit=682 read=10188, temp read=1415 written=2101\r\n> Worker 0: actual time=130.058..130.324 rows=1324 loops=1\r\n> Sort Method: external merge Disk: 5880kB\r\n> Buffers: shared hit=337 read=3489, temp read=505 written=739\r\n> Worker 1: actual time=130.273..130.512 rows=1297 loops=1\r\n> Buffers: shared hit=345 read=3507, temp read=505 written=744\r\n> Sort Method: external merge Disk: 5920kB\r\n\r\nThis proposal seems like a fitting approach. Awaiting for v3 which\r\nwill include the text version. 
May I suggest a YAML-format test case?\r\nJust to make certain that no regressions occur in the future.\r\n\r\nThanks,", "msg_date": "Wed, 15 Jan 2020 10:11:43 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Sounds good, I'll try that format. Any idea how to test YAML? With the\nJSON format, I was able to rely on Postgres' own JSON-manipulating\nfunctions to strip or canonicalize fields that can vary across\nexecutions--I can't really do that with YAML. Or should I run EXPLAIN\nwith COSTS OFF, TIMING OFF, SUMMARY OFF and assume that for simple\nqueries the BUFFERS output (and other fields I can't turn off like\nSort Space Used) *is* going to be stable?\n\n\n", "msg_date": "Wed, 15 Jan 2020 09:12:04 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "> Sounds good, I'll try that format. Any idea how to test YAML? With the\r\n> JSON format, I was able to rely on Postgres' own JSON-manipulating\r\n> functions to strip or canonicalize fields that can vary across\r\n> executions--I can't really do that with YAML. \r\n\r\nYes, this approach was clear in the patch and works great with JSON. Also\r\nyou are correct, this cannot be done with YAML. I spent a bit of time to\r\nlook around and I could not find really any tests of the YAML format.\r\n\r\n> Or should I run EXPLAIN\r\n> with COSTS OFF, TIMING OFF, SUMMARY OFF and assume that for simple\r\n> queries the BUFFERS output (and other fields I can't turn off like\r\n> Sort Space Used) *is* going to be stable?\r\n\r\nI have to admit that with the current diff tool used in pg_regress, this is not possible.\r\nI am pretty certain that it *is not* going to be stable. 
Not for long anyways.\r\nI withdraw my suggestion for YAML and currently awaiting for TEXT format only.", "msg_date": "Thu, 16 Jan 2020 14:07:36 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "TEXT format was tricky due to its inconsistencies, but I think I have\nsomething working reasonably well. I added a simple test for TEXT\nformat output as well, using a similar approach as the JSON format\ntest, and liberally regexp_replacing away any volatile output. I\nsuppose in theory we could do this for YAML, too, but I think it's\ngross enough not to be worth it, especially given the high similarity\nof all the structured outputs.", "msg_date": "Tue, 21 Jan 2020 00:48:35 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\n> TEXT format was tricky due to its inconsistencies, but I think I have\r\n> something working reasonably well. I added a simple test for TEXT\r\n> format output as well, using a similar approach as the JSON format\r\n\r\nGreat!\r\n\r\n> test, and liberally regexp_replacing away any volatile output. I\r\n> suppose in theory we could do this for YAML, too, but I think it's\r\n> gross enough not to be worth it, especially given the high similarity\r\n> of all the structured outputs.\r\n\r\nAgreed, what is in the patch suffices. 
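As an aside for the archives, the regexp_replace normalization quoted above can be illustrated outside SQL. A rough Python sketch over sample plan text from this thread (not the patch's actual test functions) shows the idea of masking volatile numbers so TEXT-format plans compare stably:

```python
import re

# Illustration only: mask run-to-run numbers (timings, row counts, kB sizes)
# in TEXT-format EXPLAIN output so repeated runs produce identical text.
plan = """Sort Method: external merge  Disk: 4920kB
Worker 0:  actual time=130.058..130.324 rows=1324 loops=1
  Sort Method: external merge  Disk: 5880kB"""

normalized = re.sub(r"actual time=\d+\.\d+\.\.\d+\.\d+", "actual time=N..N", plan)
normalized = re.sub(r"rows=\d+", "rows=N", normalized)
normalized = re.sub(r"Disk: \d+kB", "Disk: NkB", normalized)
print(normalized)
```

The structure (which worker reported which fields, in which order) survives, which is exactly the part this patch changes, while everything volatile is reduced to placeholders.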
Overall great work, a couple of\r\nminor nitpicks if you allow me.\r\n\r\n+ /* Prepare per-worker output */\r\n+ if (es->analyze && planstate->worker_instrument) {\r\n\r\nStyle, parenthesis on its own line.\r\n\r\n+ int num_workers = planstate->worker_instrument->num_workers;\r\n+ int n;\r\n+ worker_strs = (StringInfo *) palloc0(num_workers * sizeof(StringInfo));\r\n+ for (n = 0; n < num_workers; n++) {\r\n\r\nI think C99 would be better here. Also no parenthesis needed.\r\n\r\n+ worker_strs[n] = makeStringInfo();\r\n+ }\r\n+ }\r\n\r\n@@ -1357,6 +1369,58 @@ ExplainNode(PlanState *planstate, List *ancestors,\r\n ExplainPropertyBool(\"Parallel Aware\", plan->parallel_aware, es);\r\n }\r\n\r\n+ /* Prepare worker general execution details */\r\n+ if (es->analyze && es->verbose && planstate->worker_instrument)\r\n+ {\r\n+ WorkerInstrumentation *w = planstate->worker_instrument;\r\n+ int n;\r\n+\r\n+ for (n = 0; n < w->num_workers; ++n)\r\n\r\nI think C99 would be better here.\r\n\r\n+ {\r\n+ Instrumentation *instrument = &w->instrument[n];\r\n+ double nloops = instrument->nloops;\r\n\r\n- appendStringInfoSpaces(es->str, es->indent * 2);\r\n- if (n > 0 || !es->hide_workers)\r\n- appendStringInfo(es->str, \"Worker %d: \", n);\r\n+ if (indent)\r\n+ {\r\n+ appendStringInfoSpaces(es->str, es->indent * 2);\r\n+ }\r\n\r\nStyle: No parenthesis needed\r\n\r\n\r\n- if (opened_group)\r\n- ExplainCloseGroup(\"Workers\", \"Workers\", false, es);\r\n+ /* Show worker detail */\r\n+ if (planstate->worker_instrument) {\r\n+ ExplainFlushWorkers(worker_strs, planstate->worker_instrument->num_workers, es);\r\n }\r\n\r\nStyle: No parenthesis needed\r\n\r\n\r\n+ * just indent once, to add worker info on the next worker line.\r\n+ */\r\n+ if (es->str == es->root_str)\r\n+ {\r\n+ es->indent += es->format == EXPLAIN_FORMAT_TEXT ? 
1 : 2;\r\n+ }\r\n+\r\n\r\nStyle: No parenthesis needed\r\n\r\n+ ExplainCloseGroup(\"Workers\", \"Workers\", false, es);\r\n+ // do we have any other cleanup to do?\r\n\r\nThis comment does not really explain anything. Either remove\r\nor rephrase. Also C style comments.\r\n\r\n+ es->print_workers = false;\r\n+}\r\n\r\n int indent; /* current indentation level */\r\n List *grouping_stack; /* format-specific grouping state */\r\n+ bool print_workers; /* whether current node has worker metadata */\r\n\r\nHmm.. commit <b925a00f4ef> introduced `hide_workers` in the struct. Having both\r\nnames in the struct so far apart even seems a bit confusing and sloppy. Do you\r\nthink it would be possible to combine or rename?\r\n\r\n\r\n+extern void ExplainOpenWorker(StringInfo worker_str, ExplainState *es);\r\n+extern void ExplainCloseWorker(ExplainState *es);\r\n+extern void ExplainFlushWorkers(StringInfo *worker_strs, int num_workers, ExplainState *es);\r\n\r\nNo need to expose those, is there? I feel there should be static.\r\n\r\nAwaiting for answer or resolution of these comments to change the status.\r\n\r\n//Georgios", "msg_date": "Wed, 22 Jan 2020 12:54:07 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Thanks! I'll fix the brace issues. Re: the other items:\n\n> + int num_workers = planstate->worker_instrument->num_workers;\n> + int n;\n> + worker_strs = (StringInfo *) palloc0(num_workers * sizeof(StringInfo));\n> + for (n = 0; n < num_workers; n++) {\n>\n> I think C99 would be better here. 
Also no parenthesis needed.\n\nPardon my C illiteracy, but what am I doing that's not valid C99 here?\n\n> + /* Prepare worker general execution details */\n> + if (es->analyze && es->verbose && planstate->worker_instrument)\n> + {\n> + WorkerInstrumentation *w = planstate->worker_instrument;\n> + int n;\n> +\n> + for (n = 0; n < w->num_workers; ++n)\n>\n> I think C99 would be better here.\n\nAnd here (if it's not the same problem)?\n\n> + ExplainCloseGroup(\"Workers\", \"Workers\", false, es);\n> + // do we have any other cleanup to do?\n>\n> This comment does not really explain anything. Either remove\n> or rephrase. Also C style comments.\n\nGood catch, thanks--I had put this in to remind myself (and reviewers)\nabout cleanup, but I don't think there's anything else to do, so I'll\njust drop it.\n\n> int indent; /* current indentation level */\n> List *grouping_stack; /* format-specific grouping state */\n> + bool print_workers; /* whether current node has worker metadata */\n>\n> Hmm.. commit <b925a00f4ef> introduced `hide_workers` in the struct. Having both\n> names in the struct so far apart even seems a bit confusing and sloppy. Do you\n> think it would be possible to combine or rename?\n\nI noticed that. I was thinking about combining them, but\n\"hide_workers\" seems to be about \"pretend there is no worker output\neven if there is\" and \"print_workers\" is \"keep track of whether or not\nthere is worker output to print\". Maybe I'll rename to\n\"has_worker_output\"?\n\n> +extern void ExplainOpenWorker(StringInfo worker_str, ExplainState *es);\n> +extern void ExplainCloseWorker(ExplainState *es);\n> +extern void ExplainFlushWorkers(StringInfo *worker_strs, int num_workers, ExplainState *es);\n>\n> No need to expose those, is there? 
I feel there should be static.\n\nGood call, I'll update.\n\n\n", "msg_date": "Wed, 22 Jan 2020 08:54:40 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": ">> + int num_workers = planstate->worker_instrument->num_workers;\r\n>> + int n;\r\n>> + worker_strs = (StringInfo *) palloc0(num_workers * sizeof(StringInfo));\r\n>> + for (n = 0; n < num_workers; n++) {\r\n>>\r\n>> I think C99 would be better here. Also no parenthesis needed.\r\n>\r\n>\r\n> Pardon my C illiteracy, but what am I doing that's not valid C99 here?\r\n\r\nMy bad, I should have been more clear. I meant that it is preferable to use\r\nthe C99 standard which calls for declaring variables in the scope that you\r\nneed them. In this case, 'n' is needed only in the for loop, so something like\r\n\r\nfor (int n = 0; n < num_workers; n++) \r\n\r\nis preferable. To be clear, your code was perfectly valid. It was only the\r\nstyle I was referring to.\r\n\r\n>> + for (n = 0; n < w->num_workers; ++n)\r\n>>\r\n>> I think C99 would be better here.\r\n>\r\n>\r\n> And here (if it's not the same problem)?\r\n\r\nExactly the same as above. \r\n\r\n>> int indent; /* current indentation level */\r\n>> List *grouping_stack; /* format-specific grouping state */\r\n>> + bool print_workers; /* whether current node has worker metadata */\r\n>>\r\n>> Hmm.. commit <b925a00f4ef> introduced `hide_workers` in the struct. Having both\r\n>> names in the struct so far apart even seems a bit confusing and sloppy. Do you\r\n>> think it would be possible to combine or rename?\r\n>\r\n>\r\n> I noticed that. I was thinking about combining them, but\r\n> \"hide_workers\" seems to be about \"pretend there is no worker output\r\n> even if there is\" and \"print_workers\" is \"keep track of whether or not\r\n> there is worker output to print\". 
Maybe I'll rename to\r\n> \"has_worker_output\"?\r\n\r\nThe rename sounds a bit better in my humble opinion. Thanks.", "msg_date": "Wed, 22 Jan 2020 17:36:20 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "On Wed, Jan 22, 2020 at 9:37 AM Georgios Kokolatos <gkokolatos@pm.me> wrote:\n> My bad, I should have been more clear. I meant that it is preferable to use\n> the C99 standard which calls for declaring variables in the scope that you\n> need them.\n\nAh, I see. I think I got that from ExplainPrintSettings. I've\ncorrected my usage--thanks for pointing it out. I appreciate the\neffort to maintain a consistent style.\n\n>\n> >> int indent; /* current indentation level */\n> >> List *grouping_stack; /* format-specific grouping state */\n> >> + bool print_workers; /* whether current node has worker metadata */\n> >>\n> >> Hmm.. commit <b925a00f4ef> introduced `hide_workers` in the struct. Having both\n> >> names in the struct so far apart even seems a bit confusing and sloppy. Do you\n> >> think it would be possible to combine or rename?\n> >\n> >\n> > I noticed that. I was thinking about combining them, but\n> > \"hide_workers\" seems to be about \"pretend there is no worker output\n> > even if there is\" and \"print_workers\" is \"keep track of whether or not\n> > there is worker output to print\". Maybe I'll rename to\n> > \"has_worker_output\"?\n>\n> The rename sounds a bit better in my humble opinion. 
Thanks.\n\nAlso, reviewing my code again, I noticed that when I moved the general\nworker output earlier, I missed part of the merge conflict: I had\nreplaced\n\n- /* Show worker detail */\n- if (es->analyze && es->verbose && !es->hide_workers &&\n- planstate->worker_instrument)\n\nwith\n\n+ /* Prepare worker general execution details */\n+ if (es->analyze && es->verbose && planstate->worker_instrument)\n\nwhich ignores the es->hide_workers flag (it did not fail the tests,\nbut the intent is pretty clear). I've corrected this in the current\npatch.\n\nI also noticed that we can now handle worker buffer output more\nconsistently across TEXT and structured formats, so I made that small\nchange too:\n\ndiff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c\nindex 140d0be426..b23b015594 100644\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -1401,8 +1401,6 @@ ExplainNode(PlanState *planstate, List *ancestors,\n appendStringInfo(es->str,\n\n \"actual rows=%.0f loops=%.0f\\n\",\n\n rows, nloops);\n- if (es->buffers)\n- show_buffer_usage(es,\n&instrument->bufusage);\n }\n else\n {\n@@ -1951,7 +1949,7 @@ ExplainNode(PlanState *planstate, List *ancestors,\n\n /* Prepare worker buffer usage */\n if (es->buffers && es->analyze && es->verbose && !es->hide_workers\n- && planstate->worker_instrument && es->format !=\nEXPLAIN_FORMAT_TEXT)\n+ && planstate->worker_instrument)\n {\n WorkerInstrumentation *w = planstate->worker_instrument;\n int n;\ndiff --git a/src/test/regress/expected/explain.out\nb/src/test/regress/expected/explain.out\nindex 8034a4e0db..a4eed3067f 100644\n--- a/src/test/regress/expected/explain.out\n+++ b/src/test/regress/expected/explain.out\n@@ -103,8 +103,8 @@ $$, 'verbose', 'analyze', 'buffers', 'timing off',\n'costs off', 'summary off'),\n Sort Key: (ROW(\"*VALUES*\".column1)) +\n Buffers: shared hit=114 +\n Worker 0: actual rows=2 loops=1 +\n- Buffers: shared hit=114 +\n Sort Method: xxx +\n+ Buffers: 
shared hit=114 +\n -> Values Scan on \"*VALUES*\" (actual rows=2 loops=1) +\n Output: \"*VALUES*\".column1, ROW(\"*VALUES*\".column1)+\n Worker 0: actual rows=2 loops=1 +\n\n\nI think the \"producing plan output for a worker\" process is easier to\nreason about now, and while it changes TEXT format worker output\norder, the other changes in this patch are more drastic so this\nprobably does not matter.\n\nI've also addressed the other feedback above, and reworded a couple of\ncomments slightly.", "msg_date": "Thu, 23 Jan 2020 01:00:32 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\n> Ah, I see. I think I got that from ExplainPrintSettings. I've\r\n> corrected my usage--thanks for pointing it out. I appreciate the\r\n> effort to maintain a consistent style.\r\n\r\nThanks, I am just following the reviewing guide to be honest.\r\n\r\n> Also, reviewing my code again, I noticed that when I moved the general\r\n> worker output earlier, I missed part of the merge conflict:\r\n\r\nRight. I thought that was intentional.\r\n\r\n> which ignores the es->hide_workers flag (it did not fail the tests,\r\n> but the intent is pretty clear). 
I've corrected this in the current\r\n> patch.\r\n\r\nNoted and appreciated.\r\n\r\n> I also noticed that we can now handle worker buffer output more\r\n> consistently across TEXT and structured formats, so I made that small\r\n> change too:\r\n\r\nLooks good.\r\n\r\n> I think the \"producing plan output for a worker\" process is easier to\r\n> reason about now, and while it changes TEXT format worker output\r\n> order, the other changes in this patch are more drastic so this\r\n> probably does not matter.\r\n> \r\n> I've also addressed the other feedback above, and reworded a couple of\r\n> comments slightly.\r\n\r\nThanks for the above. Looks clean, does what it says in the tin and solves a\r\nproblem that needs solving. All relevant installcheck-world pass. As far as I \r\nam concerned, I think it is ready to be sent to a committer. Updating the status\r\naccordingly.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 23 Jan 2020 10:38:38 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Great, thank you. I noticed in the CF patch tester app, the build\nfails on Windows [1]. Investigating, I realized I had failed to fully\nstrip volatile EXPLAIN info (namely buffers) in TEXT mode due to a\nbad regexp_replace. I've fixed this in the attached patch (which I\nalso rebased against current master again).\n\n[1]: https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.76313", "msg_date": "Fri, 24 Jan 2020 09:03:22 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> Great, thank you. I noticed in the CF patch tester app, the build\n> fails on Windows [1]. 
Investigating, I realized I had failed to fully\n> strip volatile EXPLAIN info (namely buffers) in TEXT mode due to a\n> bad regexp_replace.\n\nYou haven't gone nearly far enough in that direction. The proposed\noutput still relies on the assumption that the session will always\nget as many workers as it asks for, which it will not. For previous\nbitter experience in this department, see for instance commits 4ea03f3f4\nand 13e8b2ee8.\n\nTBH I am not sure that the proposed regression tests for this change\ncan be committed at all. Which is a bit of a problem perhaps, but\nthen again we don't have terribly good coverage for the existing code\neither, for the same reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Jan 2020 12:40:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Okay. Does not getting as many workers as it asks for include\nsometimes getting zero, completely changing the actual output? If so,\nI'll submit a new version of the patch removing all tests--I was\nhoping to improve coverage, but I guess this is not the way to start.\nIf not, can we keep the json tests at least if we only consider the\nfirst worker?\n\n\n", "msg_date": "Fri, 24 Jan 2020 16:26:57 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> Okay. Does not getting as many workers as it asks for include\n> sometimes getting zero, completely changing the actual output?\n\nYup :-(. We could possibly avoid that by running the explain\ntest by itself rather than in a parallel group, but I don't\nespecially want to add yet more non-parallelizable tests.\n\nMeanwhile, I spent some time looking at the code, and wasn't very happy\nwith it. 
I'm on board with the general plan of redirecting EXPLAIN\noutput into per-worker buffers that we eventually recombine, but I think\nthat this particular coding is pretty unmaintainable/unextensible.\nIn particular, I'm really unhappy that the code is ignoring EXPLAIN's\ngrouping_state stack. That means that when it's formatting fields that\nbelong to the per-worker group, it's using the state-stack entry that\ncorresponds to the plan node's main level. This seems to accidentally\nwork, but that fact depends on a number of shaky assumptions:\n\n* Before messing with any per-worker data, we've always emitted at\nleast one field in the plan node's main level, so that the state-stack\nentry isn't at its initial state for the level.\n\n* Before actually emitting the shunted-aside data, we've always emitted\na \"Worker Number\" field in correct format within the per-worker group,\nso that the formatting state is now correct for a non-initial field.\n\n* There is no formatting difference between any non-first fields in\na level (ie the state stack entries are basically just booleans),\nso that it doesn't matter how many plan-node fields we emitted before\nstarting the per-worker data, so long as there was at least one, nor\ndoes transiently abusing the plan node level's stack entry like this\nbreak the formatting of subsequent plan-node-level fields.\n\nNow maybe we'll never implement an output format that breaks that\nlast assumption, and maybe we'll never rearrange the EXPLAIN code\nin a way that breaks either of the first two. But I don't like those\nassumptions too much. I also didn't like the code's assumption that\nall the non-text formats interpret es->indent the same.\n\nI also noted an actual bug, which is that the patch fails regression\ntesting under force_parallel_mode = regress. 
This isn't really your\nfault, because the issue is in this obscure and poorly-commented hack\nin show_sort_info:\n\n * You might think we should just skip this stanza entirely when\n * es->hide_workers is true, but then we'd get no sort-method output at\n * all. We have to make it look like worker 0's data is top-level data.\n * Currently, we only bother with that for text-format output.\n\nNonetheless, it's broken.\n\nSo I spent some time hacking on this and came up with the attached.\nIt's noticeably more verbose than your patch, but it keeps the\noutput-format-aware code at what seems to me to be a maintainable\narm's-length distance from the parallel-worker hacking. TEXT is\nstill a special case of course :-(.\n\nThis patch just covers the code, I'm not taking any position yet\nabout what to do about the tests. I did tweak the code to eliminate\nthe one formatting difference in select_parallel (ie put two spaces\nafter \"Worker N:\", which I think reads better anyhow), so it\npasses check-world as it stands.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 24 Jan 2020 21:14:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Thanks for the thorough review. I obviously missed some critical\nissues. I recognized some of the other maintainability problems, but\ndid not have a sense of how to best avoid them, being unfamiliar with\nthe code.\n\nFor what it's worth, this version makes sense to me.\n\n\n", "msg_date": "Fri, 24 Jan 2020 19:33:18 -0800", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> For what it's worth, this version makes sense to me.\n\nThanks for looking. Here's a version that deals with the JIT\ninstrumentation. 
As Andres noted far upthread, that was also\nreally bogusly done before. Not only could you get multiple \"JIT\"\nsubnodes on a Gather node, but we failed to print the info at all\nif the parallelism was expressed as Gather Merge rather than\nGather.\n\nA side effect of this change is that per-worker JIT info is now\nprinted one plan level further down: before it was printed on\nthe Gather node, but now it's attached to the Gather's child,\nbecause that's where we print other per-worker data. I don't\nsee anything particularly wrong with that, but it's another\nchange from the behavior today.\n\nIt's still really unclear to me how we could exercise any of\nthis behavior meaningfully in a regression test. I thought\nfor a little bit about using the TAP infrastructure instead\nof a traditional-style test, but it seems like that doesn't\nbuy anything except for a bias towards ignoring details instead\nof overspecifying them. Which is not much of an improvement.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 25 Jan 2020 14:23:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "I wrote:\n> It's still really unclear to me how we could exercise any of\n> this behavior meaningfully in a regression test. I thought\n> for a little bit about using the TAP infrastructure instead\n> of a traditional-style test, but it seems like that doesn't\n> buy anything except for a bias towards ignoring details instead\n> of overspecifying them. Which is not much of an improvement.\n\nAfter further thought, I decided that about the best we can do\nis suppress the \"Workers\" field in the regression test's expected\noutput. 
This still gives us code coverage of the relevant code,\nand we can check that the output is valid JSON before we strip it,\nso it's not a dead loss.\n\nI rewrote the test script a bit to add some coverage of XML and YAML\noutput formats, since we had exactly none before, and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Jan 2020 18:20:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Hi,\n\nOn 2020-01-25 14:23:50 -0500, Tom Lane wrote:\n> A side effect of this change is that per-worker JIT info is now\n> printed one plan level further down: before it was printed on\n> the Gather node, but now it's attached to the Gather's child,\n> because that's where we print other per-worker data. I don't\n> see anything particularly wrong with that, but it's another\n> change from the behavior today.\n\nYea, I don't see any need to be bothered by that.\n\n\n> It's still really unclear to me how we could exercise any of\n> this behavior meaningfully in a regression test. I thought\n> for a little bit about using the TAP infrastructure instead\n> of a traditional-style test, but it seems like that doesn't\n> buy anything except for a bias towards ignoring details instead\n> of overspecifying them. Which is not much of an improvement.\n\nHm. I'd like to avoid needing TAP for this kind of thing, it'll just\nmake it much more expensive in just about all ways.\n\nTesting JIT explain is \"easy\" enough I think, I've posted a patch in\nanother thread, which just skips over the region of the test if JIT is\nnot available. See e.g. 0008 in\nhttps://www.postgresql.org/message-id/20191029000229.fkjmuld3g7f2jq7i%40alap3.anarazel.de\n(that's a thread I'd love your input in btw)\n\n\nIt's harder for parallel query though. 
If parallel query were able to\nreuse workers, we could \"just\" check at the beginning of the test if we\nare able to get the workers we need, and then skip the rest of the tests\nif not. But as things stand that doesn't guarantee anything.\n\nI wonder if we could introduce a debug GUC that makes parallel worker\nacquisition just retry in a loop, for a time determined by the GUC. That\nobviously would be a bad idea to do in a production setup, but it could\nbe good enough for regression tests? There are some deadlock dangers,\nbut I'm not sure they really matter for the tests.\n\n\n\n> +\t/* prepare per-worker general execution details */\n> +\tif (es->workers_state && es->verbose)\n> +\t{\n> +\t\tWorkerInstrumentation *w = planstate->worker_instrument;\n> +\n> +\t\tfor (int n = 0; n < w->num_workers; n++)\n> +\t\t{\n> +\t\t\tInstrumentation *instrument = &w->instrument[n];\n> +\t\t\tdouble\t\tnloops = instrument->nloops;\n> +\t\t\tdouble\t\tstartup_ms;\n> +\t\t\tdouble\t\ttotal_ms;\n> +\t\t\tdouble\t\trows;\n> +\n> +\t\t\tif (nloops <= 0)\n> +\t\t\t\tcontinue;\n> +\t\t\tstartup_ms = 1000.0 * instrument->startup / nloops;\n> +\t\t\ttotal_ms = 1000.0 * instrument->total / nloops;\n> +\t\t\trows = instrument->ntuples / nloops;\n> +\n> +\t\t\tExplainOpenWorker(n, es);\n> +\n> +\t\t\tif (es->format == EXPLAIN_FORMAT_TEXT)\n> +\t\t\t{\n> +\t\t\t\tExplainIndentText(es);\n> +\t\t\t\tif (es->timing)\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \"actual time=%.3f..%.3f rows=%.0f loops=%.0f\\n\",\n> +\t\t\t\t\t\t\t\t\t startup_ms, total_ms, rows, nloops);\n> +\t\t\t\telse\n> +\t\t\t\t\tappendStringInfo(es->str,\n> +\t\t\t\t\t\t\t\t\t \"actual rows=%.0f loops=%.0f\\n\",\n> +\t\t\t\t\t\t\t\t\t rows, nloops);\n> +\t\t\t}\n> +\t\t\telse\n> +\t\t\t{\n> +\t\t\t\tif (es->timing)\n> +\t\t\t\t{\n> +\t\t\t\t\tExplainPropertyFloat(\"Actual Startup Time\", \"ms\",\n> +\t\t\t\t\t\t\t\t\t\t startup_ms, 3, es);\n> +\t\t\t\t\tExplainPropertyFloat(\"Actual Total Time\", 
\"ms\",\n> +\t\t\t\t\t\t\t\t\t\t total_ms, 3, es);\n> +\t\t\t\t}\n> +\t\t\t\tExplainPropertyFloat(\"Actual Rows\", NULL, rows, 0, es);\n> +\t\t\t\tExplainPropertyFloat(\"Actual Loops\", NULL, nloops, 0, es);\n> +\t\t\t}\n> +\n> +\t\t\tExplainCloseWorker(n, es);\n> +\t\t}\n> +\t}\n\nI'd personally move this into a separate function, given the patches\nmoves the code around already. ExplainNode() is already hard enough to\nnavigate...\n\nIt probably also makes sense to move everything but the nloops <= 0,\nExplainOpenWorker/ExplainCloseWorker into its own function. As far as I\ncan tell it now should be identical between the non-parallel case?\n\n\n> +/*\n> + * Begin or resume output into the set-aside group for worker N.\n> + */\n> +static void\n\nWould it make sense to make these functions non-static? It seems\nplausible that code for a custom node or such would want to add\nits own information?\n\n\n> +ExplainOpenWorker(int n, ExplainState *es)\n> +{\n> +\tExplainWorkersState *wstate = es->workers_state;\n> +\n> +\tAssert(wstate);\n> +\tAssert(n >= 0 && n < wstate->num_workers);\n> +\n> +\t/* Save prior output buffer pointer */\n> +\twstate->prev_str = es->str;\n> +\n> +\tif (!wstate->worker_inited[n])\n> +\t{\n> +\t\t/* First time through, so create the buffer for this worker */\n> +\t\tinitStringInfo(&wstate->worker_str[n]);\n> +\t\tes->str = &wstate->worker_str[n];\n> +\n> +\t\t/*\n> +\t\t * Push suitable initial formatting state for this worker's field\n> +\t\t * group. 
We allow one extra logical nesting level, since this group\n> +\t\t * will eventually be wrapped in an outer \"Workers\" group.\n> +\t\t */\n> +\t\tExplainOpenSetAsideGroup(\"Worker\", NULL, true, 2, es);\n> +\n> +\t\t/*\n> +\t\t * In non-TEXT formats we always emit a \"Worker Number\" field, even if\n> +\t\t * there's no other data for this worker.\n> +\t\t */\n> +\t\tif (es->format != EXPLAIN_FORMAT_TEXT)\n> +\t\t\tExplainPropertyInteger(\"Worker Number\", NULL, n, es);\n> +\n> +\t\twstate->worker_inited[n] = true;\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/* Resuming output for a worker we've already emitted some data for */\n> +\t\tes->str = &wstate->worker_str[n];\n> +\n> +\t\t/* Restore formatting state saved by last ExplainCloseWorker() */\n> +\t\tExplainRestoreGroup(es, 2, &wstate->worker_state_save[n]);\n> +\t}\n> +\n> +\t/*\n> +\t * In TEXT format, prefix the first output line for this worker with\n> +\t * \"Worker N:\". Then, any additional lines should be indented one more\n> +\t * stop than the \"Worker N\" line is.\n> +\t */\n> +\tif (es->format == EXPLAIN_FORMAT_TEXT)\n> +\t{\n> +\t\tif (es->str->len == 0)\n> +\t\t{\n> +\t\t\tExplainIndentText(es);\n> +\t\t\tappendStringInfo(es->str, \"Worker %d: \", n);\n> +\t\t}\n> +\n> +\t\tes->indent++;\n> +\t}\n> +}\n\nI don't quite get the Worker %d bit. 
Why are we not outputting that in\nthe !worker_inited block?\n\n\n> +/*\n> + * Print per-worker info for current node, then free the ExplainWorkersState.\n> + */\n> +static void\n> +ExplainFlushWorkersState(ExplainState *es)\n> +{\n> +\tExplainWorkersState *wstate = es->workers_state;\n> +\n> +\tExplainOpenGroup(\"Workers\", \"Workers\", false, es);\n> +\tfor (int i = 0; i < wstate->num_workers; i++)\n> +\t{\n> +\t\tif (wstate->worker_inited[i])\n> +\t\t{\n> +\t\t\t/* This must match previous ExplainOpenSetAsideGroup call */\n> +\t\t\tExplainOpenGroup(\"Worker\", NULL, true, es);\n> +\t\t\tappendStringInfoString(es->str, wstate->worker_str[i].data);\n\nProbably never matters, but given we do have the string length already,\nwe could use appendBinaryStringInfo().\n\n\n> +\t\t\tExplainCloseGroup(\"Worker\", NULL, true, es);\n\nNot related to this patch: I never got why we maintain a grouping stack\nfor some things, but don't do it for the group properties\nthemselves.\n\n\n> /*\n> + * Open a group of related objects, without emitting actual data.\n> + *\n> + * Prepare the formatting state as though we were beginning a group with\n> + * the identified properties, but don't actually emit anything. Output\n> + * subsequent to this call can be redirected into a separate output buffer,\n> + * and then eventually appended to the main output buffer after doing a\n> + * regular ExplainOpenGroup call (with the same parameters).\n> + *\n> + * The extra \"depth\" parameter is the new group's depth compared to current.\n> + * It could be more than one, in case the eventual output will be enclosed\n> + * in additional nesting group levels. 
We assume we don't need to track\n> + * formatting state for those levels while preparing this group's output.\n> + *\n> + * There is no ExplainCloseSetAsideGroup --- in current usage, we always\n> + * pop this state with ExplainSaveGroup.\n> + */\n> +static void\n> +ExplainOpenSetAsideGroup(const char *objtype, const char *labelname,\n> +\t\t\t\t\t\t bool labeled, int depth, ExplainState *es)\n> +{\n> +\tswitch (es->format)\n> +\t{\n> +\t\tcase EXPLAIN_FORMAT_TEXT:\n> +\t\t\t/* nothing to do */\n> +\t\t\tbreak;\n> +\n> +\t\tcase EXPLAIN_FORMAT_XML:\n> +\t\t\tes->indent += depth;\n> +\t\t\tbreak;\n> +\n> +\t\tcase EXPLAIN_FORMAT_JSON:\n> +\t\t\tes->grouping_stack = lcons_int(0, es->grouping_stack);\n> +\t\t\tes->indent += depth;\n> +\t\t\tbreak;\n> +\n> +\t\tcase EXPLAIN_FORMAT_YAML:\n> +\t\t\tif (labelname)\n> +\t\t\t\tes->grouping_stack = lcons_int(1, es->grouping_stack);\n> +\t\t\telse\n> +\t\t\t\tes->grouping_stack = lcons_int(0, es->grouping_stack);\n> +\t\t\tes->indent += depth;\n> +\t\t\tbreak;\n> +\t}\n> +}\n\nHm. It might be worthwhile to rename ExplainOpenSetAsideGroup and use it\nfrom ExplainOpenGroup()? Seems we could just call it after\nExplainOpenGroup()'s switch, and remove all the indent/grouping_stack\nrelated code from ExplainOpenGroup().\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 25 Jan 2020 16:02:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if we could introduce a debug GUC that makes parallel worker\n> acquisition just retry in a loop, for a time determined by the GUC. That\n> obviously would be a bad idea to do in a production setup, but it could\n> be good enough for regression tests? There are some deadlock dangers,\n> but I'm not sure they really matter for the tests.\n\nHmmm .... might work. 
Seems like a better idea than \"run it by itself\"\nas we have to do now.\n\n> I'd personally move this into a separate function, given the patches\n> moves the code around already. ExplainNode() is already hard enough to\n> navigate...\n\nWell, it was already inline in ExplainNode, so this just moved the\ncode a few lines. I'm not that excited about moving little bits of\nthat function out-of-line.\n\n>> +/*\n>> + * Begin or resume output into the set-aside group for worker N.\n>> + */\n>> +static void\n\n> Would it make sense to make these functions non-static? It seems\n> plausible that code for a custom node or such would want to add\n> its own information?\n\nMaybe, but seems to me that there'd be a whole lot of other infrastructure\nneeded to track additional per-worker state. I'd just as soon not\nexpose this stuff until (a) there's a concrete not hypothetical use case\nand (b) it's been around long enough to feel comfortable that there's\nnothing wrong with the design.\n\n>> +\t/*\n>> +\t * In TEXT format, prefix the first output line for this worker with\n>> +\t * \"Worker N:\". Then, any additional lines should be indented one more\n>> +\t * stop than the \"Worker N\" line is.\n>> +\t */\n\n> I don't quite get the Worker %d bit. Why are we not outputting that in\n> the !worker_inited block?\n\nWe might strip it off again in ExplainCloseWorker, and then potentially\nadd it back again in a later ExplainOpenWorker. That could only happen\nif an earlier ExplainOpen/CloseWorker fragment decides not to emit any\ntext and then a later one wants to do so. 
Which I'm pretty sure is\nunreachable right now, but I don't think this code should assume that.\n\n>> +\t\t\tappendStringInfoString(es->str, wstate->worker_str[i].data);\n\n> Probably never matters, but given we do have the string length already,\n> we could use appendBinaryStringInfo().\n\nAh, I was thinking about making that change but then forgot.\n\n>> +\t\t\tExplainCloseGroup(\"Worker\", NULL, true, es);\n\n> Not related to this patch: I never got why we maintain a grouping stack\n> for some things, but don't do it for the group properties\n> themselves.\n\nRight now it'd just be extra overhead. If we ever have a case where it's\nnot convenient for an ExplainCloseGroup caller to provide the same data\nas for ExplainOpenGroup, then I'd be on board with storing that info.\n\n> Hm. It might be worthwhile to rename ExplainOpenSetAsideGroup and use it\n> from ExplainOpenGroup()? Seems we could just call it after\n> ExplainOpenGroup()'s switch, and remove all the indent/grouping_stack\n> related code from ExplainOpenGroup().\n\nHmm. It seemed easier to me to keep them separate, but ...\n\nI did consider a design in which, instead of ExplainOpenSetAsideGroup,\nthere was some function that would initialize the \"state_save\" area and\nthen you'd call the \"restore\" function to make that state active. It\nseemed like that would be too dissimilar from ExplainOpenGroup --- but\nconceivably, we could reimplement ExplainOpenGroup as calling the\ninitialize function and then the restore function (along with doing some\noutput). 
Not really sure that'd be an improvement though: it'd involve\nless duplicate code, but the functions would individually be harder to\nwrap your brain around.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Jan 2020 19:30:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I wonder if we could introduce a debug GUC that makes parallel worker\n>> acquisition just retry in a loop, for a time determined by the GUC. That\n>> obviously would be a bad idea to do in a production setup, but it could\n>> be good enough for regression tests? There are some deadlock dangers,\n>> but I'm not sure they really matter for the tests.\n\n> Hmmm .... might work. Seems like a better idea than \"run it by itself\"\n> as we have to do now.\n\nThe more I think about this, the more it seems like a good idea, and\nnot only for regression test purposes. If you're about to launch a\nquery that will run for hours even with the max number of workers,\nyou don't want it to launch with less than that number just because\nsomebody else was eating a worker slot for a few milliseconds.\n\nSo I'm imagining a somewhat general-purpose GUC defined like\n\"max_delay_to_acquire_parallel_worker\", measured say in milliseconds.\nThe default would be zero (current behavior: try once and give up),\nbut you could set it to small positive values if you have that kind\nof production concern, while the regression tests could set it to big\npositive values. This would alleviate all sorts of problems we have\nwith not being able to assume stable results from parallel worker\nacquisition in the tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Jan 2020 18:00:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate Workers entries in some EXPLAIN plans" } ]
[ { "msg_contents": "Hello\n\nTo benchmark with tpcb model, I tried to create a foreign key in the partitioned history table, but backend process killed by OOM.\nthe number of partitions is 8192. I tried in master(commit: ad4b7aeb84).\n\nI did the same thing in another server which has 200GB memory, but creating foreign key did not end in 24 hours.\n\nIs the community aware of this? is anyone working on this?\nIf you are discussing, please let me know the thread.\n\nTable definition and pstack are as follows.\n\n* table definition *\n\nCREATE TABLE accounts (aid INTEGER, bid INTEGER, abalance INTEGER, filler CHAR(84)) PARTITION BY HASH(aid);\nCREATE TABLE history (tid INTEGER, bid INTEGER, aid INTEGER, delta INTEGER, mtime TIMESTAMP, filler CHAR(22)) PARTITION BY HASH(aid);\n\\o /dev/null\nSELECT 'CREATE TABLE accounts_' || p || ' PARTITION OF accounts FOR VALUES WITH (modulus 8192, remainder ' || p || ');' FROM generate_series(0, 8191) p;\n\\gexec\nSELECT 'CREATE TABLE history_' || p || ' PARTITION OF history FOR VALUES WITH (modulus 8192, remainder ' || p || ');' FROM generate_series(0, 8191) p;\n\\gexec\n\\o\nALTER TABLE accounts ADD CONSTRAINT accounts_pk PRIMARY KEY (aid);\nALTER TABLE history ADD CONSTRAINT history_fk3 FOREIGN KEY (aid) REFERENCES accounts (aid);\n\n* pstack before killed by OOM *\n\n#0 0x0000000000a84aec in ReleaseSysCache (tuple=0x7fbb0a15dc28) at syscache.c:1175\n#1 0x0000000000a7135d in get_rel_relkind (relid=164628) at lsyscache.c:1816\n#2 0x0000000000845f0a in RelationBuildPartitionDesc (rel=0x7fbadb9bfb10) at partdesc.c:230\n#3 0x0000000000a78b9a in RelationBuildDesc (targetRelId=139268, insertIt=false) at relcache.c:1173\n#4 0x0000000000a7b52e in RelationClearRelation (relation=0x7fbb0a1393e8, rebuild=true) at relcache.c:2534\n#5 0x0000000000a7bacf in RelationFlushRelation (relation=0x7fbb0a1393e8) at relcache.c:2692\n#6 0x0000000000a7bbe1 in RelationCacheInvalidateEntry (relationId=139268) at relcache.c:2744\n#7 0x0000000000a6e11d in 
LocalExecuteInvalidationMessage (msg=0x7fbadb62e480) at inval.c:589\n#8 0x0000000000a6de7d in ProcessInvalidationMessages (hdr=0x1d36d48, func=0xa6e01a <LocalExecuteInvalidationMessage>) at inval.c:460\n#9 0x0000000000a6e94e in CommandEndInvalidationMessages () at inval.c:1095\n#10 0x0000000000559c93 in AtCCI_LocalCache () at xact.c:1458\n#11 0x00000000005596ac in CommandCounterIncrement () at xact.c:1040\n#12 0x00000000006b1811 in addFkRecurseReferenced (wqueue=0x7fffcb0a0588, fkconstraint=0x20cf6a0, rel=0x7fbb0a1393e8, pkrel=0x7fbadb9bbe90, indexOid=189582, parentConstr=204810, numfks=1, pkattnum=0x7fffcb0a0190, fkattnum=0x7fffcb0a0150, pfeqoperators=0x7fffcb09ff50, ppeqoperators=0x7fffcb09fed0, ffeqoperators=0x7fffcb09fe50, old_check_ok=false) at tablecmds.c:8168\n#13 0x00000000006b1a0b in addFkRecurseReferenced (wqueue=0x7fffcb0a0588, fkconstraint=0x20cf6a0, rel=0x7fbb0a1393e8, pkrel=0x7fbadc188840, indexOid=188424, parentConstr=0, numfks=1, pkattnum=0x7fffcb0a0190, fkattnum=0x7fffcb0a0150, pfeqoperators=0x7fffcb09ff50, ppeqoperators=0x7fffcb09fed0, ffeqoperators=0x7fffcb09fe50, old_check_ok=false) at tablecmds.c:8219\n#14 0x00000000006b13e0 in ATAddForeignKeyConstraint (wqueue=0x7fffcb0a0588, tab=0x20cf4d8, rel=0x7fbb0a1393e8, fkconstraint=0x20cf6a0, parentConstr=0, recurse=true, recursing=false, lockmode=6) at tablecmds.c:8005\n#15 0x00000000006afa0c in ATExecAddConstraint (wqueue=0x7fffcb0a0588, tab=0x20cf4d8, rel=0x7fbb0a1393e8, newConstraint=0x20cf6a0, recurse=true, is_readd=false, lockmode=6) at tablecmds.c:7419\n#16 0x00000000006a8a7a in ATExecCmd (wqueue=0x7fffcb0a0588, tab=0x20cf4d8, rel=0x7fbb0a1393e8, cmd=0x20cf648, lockmode=6) at tablecmds.c:4300\n#17 0x00000000006a8448 in ATRewriteCatalogs (wqueue=0x7fffcb0a0588, lockmode=6) at tablecmds.c:4185\n#18 0x00000000006a7bf9 in ATController (parsetree=0x1cb4350, rel=0x7fbb0a1393e8, cmds=0x20cf428, recurse=true, lockmode=6) at tablecmds.c:3843\n#19 0x00000000006a78a4 in AlterTable (relid=139268, 
lockmode=6, stmt=0x1cb4350) at tablecmds.c:3504\n#20 0x0000000000914999 in ProcessUtilitySlow (pstate=0x1cb3a10, pstmt=0x1c91380, queryString=0x1c90170 \"ALTER TABLE history ADD CONSTRAINT history_fk3 FOREIGN KEY (aid) REFERENCES accounts (aid);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1c91470, completionTag=0x7fffcb0a0d20 \"\") at utility.c:1131\n#21 0x0000000000914490 in standard_ProcessUtility (pstmt=0x1c91380, queryString=0x1c90170 \"ALTER TABLE history ADD CONSTRAINT history_fk3 FOREIGN KEY (aid) REFERENCES accounts (aid);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1c91470, completionTag=0x7fffcb0a0d20 \"\") at utility.c:927\n#22 0x0000000000913534 in ProcessUtility (pstmt=0x1c91380, queryString=0x1c90170 \"ALTER TABLE history ADD CONSTRAINT history_fk3 FOREIGN KEY (aid) REFERENCES accounts (aid);\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x1c91470, completionTag=0x7fffcb0a0d20 \"\") at utility.c:360\n#23 0x000000000091245a in PortalRunUtility (portal=0x1cf5ee0, pstmt=0x1c91380, isTopLevel=true, setHoldSnapshot=false, dest=0x1c91470, completionTag=0x7fffcb0a0d20 \"\") at pquery.c:1175\n#24 0x0000000000912671 in PortalRunMulti (portal=0x1cf5ee0, isTopLevel=true, setHoldSnapshot=false, dest=0x1c91470, altdest=0x1c91470, completionTag=0x7fffcb0a0d20 \"\") at pquery.c:1321\n#25 0x0000000000911ba6 in PortalRun (portal=0x1cf5ee0, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1c91470, altdest=0x1c91470, completionTag=0x7fffcb0a0d20 \"\") at pquery.c:796\n#26 0x000000000090b9ad in exec_simple_query (query_string=0x1c90170 \"ALTER TABLE history ADD CONSTRAINT history_fk3 FOREIGN KEY (aid) REFERENCES accounts (aid);\") at postgres.c:1231\n#27 0x000000000090fd13 in PostgresMain (argc=1, argv=0x1cb9fb8, dbname=0x1cb9ed0 \"postgres\", username=0x1cb9eb0 \"k5user\") at postgres.c:4256\n#28 0x0000000000864cbd in BackendRun (port=0x1cb1e90) at postmaster.c:4498\n#29 
0x000000000086449b in BackendStartup (port=0x1cb1e90) at postmaster.c:4189\n#30 0x00000000008608d7 in ServerLoop () at postmaster.c:1727\n#31 0x000000000086018d in PostmasterMain (argc=1, argv=0x1c8aa40) at postmaster.c:1400\n#32 0x0000000000770835 in main (argc=1, argv=0x1c8aa40) at main.c:210\n\nregards,\n\nKato Sho\n\n\n", "msg_date": "Wed, 23 Oct 2019 05:59:01 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On 2019-Oct-23, kato-sho@fujitsu.com wrote:\n\n> Hello\n> \n> To benchmark with tpcb model, I tried to create a foreign key in the partitioned history table, but backend process killed by OOM.\n> the number of partitions is 8192. I tried in master(commit: ad4b7aeb84).\n> \n> I did the same thing in another server which has 200GB memory, but creating foreign key did not end in 24 hours.\n\nThanks for reporting. It sounds like there must be a memory leak here.\nI am fairly pressed for time at present so I won't be able to\ninvestigate this until, at least, mid November.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 24 Oct 2019 15:48:57 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Thu, Oct 24, 2019 at 03:48:57PM -0300, Alvaro Herrera wrote:\n>On 2019-Oct-23, kato-sho@fujitsu.com wrote:\n>\n>> Hello\n>>\n>> To benchmark with tpcb model, I tried to create a foreign key in the partitioned history table, but backend process killed by OOM.\n>> the number of partitions is 8192. I tried in master(commit: ad4b7aeb84).\n>>\n>> I did the same thing in another server which has 200GB memory, but creating foreign key did not end in 24 hours.\n>\n>Thanks for reporting. 
It sounds like there must be a memory leak here.\n>I am fairly pressed for time at present so I won't be able to\n>investigate this until, at least, mid November.\n>\n\nI've briefly looked into this, and I think the main memory leak is in\nRelationBuildPartitionDesc. It gets called with PortalContext, it\nallocates a lot of memory building the descriptor, copies it into\nCacheContext but does not even try to free anything. So we end up with\nsomething like this:\n\nTopMemoryContext: 215344 total in 11 blocks; 47720 free (12 chunks); 167624 used\n pgstat TabStatusArray lookup hash table: 32768 total in 3 blocks; 9160 free (4 chunks); 23608 used\n TopTransactionContext: 4194304 total in 10 blocks; 1992968 free (18 chunks); 2201336 used\n RowDescriptionContext: 8192 total in 1 blocks; 6880 free (0 chunks); 1312 used\n MessageContext: 8192 total in 1 blocks; 3256 free (1 chunks); 4936 used\n Operator class cache: 8192 total in 1 blocks; 512 free (0 chunks); 7680 used\n smgr relation table: 32768 total in 3 blocks; 16768 free (8 chunks); 16000 used\n TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0 chunks); 264 used\n Portal hash: 8192 total in 1 blocks; 512 free (0 chunks); 7680 used\n TopPortalContext: 8192 total in 1 blocks; 7648 free (0 chunks); 544 used\n PortalContext: 1557985728 total in 177490 blocks; 9038656 free (167645 chunks); 1548947072 used: \n Relcache by OID: 16384 total in 2 blocks; 3424 free (3 chunks); 12960 used\n CacheMemoryContext: 17039424 total in 13 blocks; 7181480 free (9 chunks); 9857944 used\n partition key: 1024 total in 1 blocks; 168 free (0 chunks); 856 used: history\n index info: 2048 total in 2 blocks; 568 free (1 chunks); 1480 used: pg_class_tblspc_relfilenode_index\n ...\n index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: pg_class_oid_index\n WAL record construction: 49776 total in 2 blocks; 6344 free (0 chunks); 43432 used\n PrivateRefCount: 8192 total in 1 blocks; 2584 free (0 chunks); 5608 used\n 
MdSmgr: 8192 total in 1 blocks; 5976 free (0 chunks); 2216 used\n LOCALLOCK hash: 65536 total in 4 blocks; 18584 free (12 chunks); 46952 used\n Timezones: 104128 total in 2 blocks; 2584 free (0 chunks); 101544 used\n ErrorContext: 8192 total in 1 blocks; 6840 free (4 chunks); 1352 used\nGrand total: 1580997216 bytes in 177834 blocks; 18482808 free (167857 chunks); 1562514408 used\n\n(At which point I simply interrupted the query, it'd allocate more and\nmore memory until an OOM).\n\nThe attached patch trivially fixes that by adding a memory context\ntracking all the temporary data, and then just deletes it as a whole at\nthe end of the function. This significantly reduces the memory usage for\nme, not sure it's 100% correct.\n\nFWIW, even with this fix it still takes an awful lot to create the\nforeign key, because the CPU is stuck doing this\n\n 60.78% 60.78% postgres postgres [.] bms_equal\n 32.58% 32.58% postgres postgres [.] get_eclass_for_sort_expr\n 3.83% 3.83% postgres postgres [.] add_child_rel_equivalences\n 0.23% 0.00% postgres [unknown] [.] 0x0000000000000005\n 0.22% 0.00% postgres [unknown] [.] 0000000000000000\n 0.18% 0.18% postgres postgres [.] AllocSetCheck\n ...\n\nHaven't looked into the details yet.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 25 Oct 2019 00:17:58 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Fri, Oct 25, 2019 at 12:17:58AM +0200, Tomas Vondra wrote:\n>\n> ...\n>\n>FWIW, even with this fix it still takes an awful lot to create the\n>foreign key, because the CPU is stuck doing this\n>\n> 60.78% 60.78% postgres postgres [.] bms_equal\n> 32.58% 32.58% postgres postgres [.] get_eclass_for_sort_expr\n> 3.83% 3.83% postgres postgres [.] 
add_child_rel_equivalences\n> 0.23% 0.00% postgres [unknown] [.] 0x0000000000000005\n> 0.22% 0.00% postgres [unknown] [.] 0000000000000000\n> 0.18% 0.18% postgres postgres [.] AllocSetCheck\n> ...\n>\n>Haven't looked into the details yet.\n>\n\nOK, a bit more info. A better perf report (with framepointers etc) looks\nlike this:\n\n + 99.98% 0.00% postgres [unknown] [.] 0x495641002c4d133d\n + 99.98% 0.00% postgres libc-2.29.so [.] __libc_start_main\n + 99.98% 0.00% postgres postgres [.] startup_hacks\n + 99.98% 0.00% postgres postgres [.] PostmasterMain\n + 99.98% 0.00% postgres postgres [.] ServerLoop\n + 99.98% 0.00% postgres postgres [.] BackendStartup\n + 99.98% 0.00% postgres postgres [.] ExitPostmaster\n + 99.98% 0.00% postgres postgres [.] PostgresMain\n + 99.98% 0.00% postgres postgres [.] exec_simple_query\n + 99.98% 0.00% postgres postgres [.] PortalRun\n + 99.98% 0.00% postgres postgres [.] PortalRunMulti\n + 99.98% 0.00% postgres postgres [.] PortalRunUtility\n + 99.98% 0.00% postgres postgres [.] ProcessUtility\n + 99.98% 0.00% postgres postgres [.] standard_ProcessUtility\n + 99.98% 0.00% postgres postgres [.] ProcessUtilitySlow\n + 99.98% 0.00% postgres postgres [.] AlterTable\n + 99.98% 0.00% postgres postgres [.] ATController\n + 99.98% 0.00% postgres postgres [.] ATRewriteTables\n + 99.98% 0.00% postgres postgres [.] validateForeignKeyConstraint\n + 99.98% 0.00% postgres postgres [.] RI_Initial_Check\n + 99.96% 0.00% postgres postgres [.] SPI_execute_snapshot\n + 99.86% 0.00% postgres postgres [.] _SPI_execute_plan\n + 99.70% 0.00% postgres postgres [.] GetCachedPlan\n + 99.70% 0.00% postgres postgres [.] BuildCachedPlan\n + 99.66% 0.00% postgres postgres [.] pg_plan_queries\n + 99.66% 0.00% postgres postgres [.] pg_plan_query\n + 99.66% 0.00% postgres postgres [.] planner\n + 99.66% 0.00% postgres postgres [.] standard_planner\n + 99.62% 0.00% postgres postgres [.] subquery_planner\n + 99.62% 0.00% postgres postgres [.] 
grouping_planner\n + 99.62% 0.00% postgres postgres [.] query_planner\n + 99.31% 0.00% postgres postgres [.] make_one_rel\n + 97.53% 0.00% postgres postgres [.] set_base_rel_pathlists\n + 97.53% 0.00% postgres postgres [.] set_rel_pathlist\n + 97.53% 0.01% postgres postgres [.] set_append_rel_pathlist\n + 97.42% 0.00% postgres postgres [.] set_plain_rel_pathlist\n + 97.40% 0.02% postgres postgres [.] create_index_paths\n + 97.16% 0.01% postgres postgres [.] get_index_paths\n + 97.12% 0.02% postgres postgres [.] build_index_paths\n + 96.67% 0.01% postgres postgres [.] build_index_pathkeys\n + 96.61% 0.01% postgres postgres [.] make_pathkey_from_sortinfo\n + 95.70% 21.27% postgres postgres [.] get_eclass_for_sort_expr\n + 75.21% 75.21% postgres postgres [.] bms_equal\n + 48.72% 0.00% postgres postgres [.] consider_index_join_clauses\n + 48.72% 0.00% postgres postgres [.] consider_index_join_outer_rels\n + 48.72% 0.02% postgres postgres [.] get_join_index_paths\n + 1.78% 0.00% postgres postgres [.] set_base_rel_sizes\n + 1.78% 0.00% postgres postgres [.] set_rel_size\n + 1.78% 0.01% postgres postgres [.] set_append_rel_size\n + 1.66% 1.34% postgres postgres [.] add_child_rel_equivalences\n\nIt is (pretty much) a single callstack, i.e. each function is simply\ncalling the one below it (with some minor exceptions at the end, but\nthat's pretty negligible here).\n\nThis essentially says that planning queries executed by RI_Initial_Check\nwith many partitions is damn expensive. An example query is this one:\n\ntest=# \\timing\n\ntest=# SELECT fk.\"aid\" FROM ONLY \"public\".\"history_60\" fk LEFT OUTER\nJOIN \"public\".\"accounts\" pk ON ( pk.\"aid\" OPERATOR(pg_catalog.=)\nfk.\"aid\") WHERE pk.\"aid\" IS NULL AND (fk.\"aid\" IS NOT NULL);\n\n aid \n-----\n(0 rows)\n\nTime: 28791.492 ms (00:28.791)\n\nBear in mind those are *empty* tables, so the execution is pretty cheap\n(explain analyze says the execution takes ~65ms, but the planning itself\ntakes ~28 seconds). 
And we have 8192 such partitions, which means we'd\nspend ~230k seconds just planning the RI queries. That's 64 hours.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 25 Oct 2019 01:07:56 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "Hi,\n\nOn 2019-10-23 05:59:01 +0000, kato-sho@fujitsu.com wrote:\n> To benchmark with tpcb model, I tried to create a foreign key in the partitioned history table, but backend process killed by OOM.\n> the number of partitions is 8192. I tried in master(commit: ad4b7aeb84).\n\nObviously this should be improved. But I think it's also worthwhile to\nnote that using 8k partitions is very unlikely to be a good choice for\nanything. The metadata, partition pruning, etc overhead is just going to\nbe very substantial.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 16:28:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Thu, Oct 24, 2019 at 04:28:38PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-10-23 05:59:01 +0000, kato-sho@fujitsu.com wrote:\n>> To benchmark with tpcb model, I tried to create a foreign key in the partitioned history table, but backend process killed by OOM.\n>> the number of partitions is 8192. I tried in master(commit: ad4b7aeb84).\n>\n>Obviously this should be improved. But I think it's also worthwhile to\n>note that using 8k partitions is very unlikely to be a good choice for\n>anything. The metadata, partition pruning, etc overhead is just going to\n>be very substantial.\n>\n\nTrue. 
Especially with two partitioned tables, each with 8k partitions.\n\nI do think it makes sense to reduce the memory usage, because just\neating all available memory (in the extreme case) is not very nice. I've\nadded that patch to the CF, although the patch I shared is very crude\nand I'm by no means suggesting it's how it should be done ultimately.\n\nThe other bit (speed of planning with 8k partitions) is probably a more\ngeneral issue, and I suppose we'll improve that over time. I don't think\nthere's a simple change magically improving that.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 30 Oct 2019 19:29:52 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Thu, 31 Oct 2019 at 07:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Oct 24, 2019 at 04:28:38PM -0700, Andres Freund wrote:\n> >Hi,\n> >\n> >On 2019-10-23 05:59:01 +0000, kato-sho@fujitsu.com wrote:\n> >> To benchmark with tpcb model, I tried to create a foreign key in the partitioned history table, but backend process killed by OOM.\n> >> the number of partitions is 8192. I tried in master(commit: ad4b7aeb84).\n> >\n> >Obviously this should be improved. But I think it's also worthwhile to\n> >note that using 8k partitions is very unlikely to be a good choice for\n> >anything. The metadata, partition pruning, etc overhead is just going to\n> >be very substantial.\n> >\n>\n> True. Especially with two partitioned tables, each with 8k partitions.\n\nIn Ottawa this year, Andres and I briefly talked about the possibility\nof making a series of changes to how equalfuncs.c works. 
The idea was\nto make it easy by using some pre-processor magic to allow us to\ncreate another version of equalfuncs which would let us have an equal\ncomparison function that returns -1 / 0 / +1 rather than just true or\nfalse. This would allow us to build Binary Search Trees of objects. I\nidentified that EquivalenceClass.ec_members would be better written as\na BST to allow much faster lookups in get_eclass_for_sort_expr().\n\nThe implementation I had in mind for the BST was a compact tree that,\ninstead of using pointers for the left and right children, just\nuses an integer to reference the array element number. This would\nallow us to maintain very fast insert-order traversals. Deletes would\nneed to decrement all child references greater than the deleted index.\nThis is sort of on par with how the new List implementation works in\nmaster, i.e. deletes take additional effort, but inserts are fast if there's\nenough space in the array for a new element, traversals are\ncache-friendly, etc. I think trees might be better than hash tables\nfor this as a hash function needs to hash all fields, whereas a\ncomparison function can stop when it finds the first non-match.\n\nThis may also be able to help simplify the code in setrefs.c to get\nrid of the complex code around indexed tlists. 
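For illustration, a minimal standalone sketch of that compact tree (hypothetical names and plain malloc, not actual PostgreSQL code; rebalancing and deletion are omitted):

```c
#include <assert.h>
#include <stdlib.h>

/* Each node references its children by array index rather than pointer;
 * -1 means "no child".  Scanning nodes[0..count-1] visits entries in
 * insertion order, which is what makes traversals cache-friendly. */
typedef struct BstNode { int key; int left; int right; } BstNode;
typedef struct Bst { BstNode *nodes; int count; int capacity; int root; } Bst;

/* Three-way comparator: -1 / 0 / +1, as the proposed comparefuncs would return. */
static int bst_cmp(int a, int b) { return (a > b) - (a < b); }

static void bst_init(Bst *t, int capacity)
{
    t->nodes = malloc(sizeof(BstNode) * capacity);
    t->count = 0;
    t->capacity = capacity;
    t->root = -1;
}

/* Insert key, or return the index of the existing node if already present. */
static int bst_insert(Bst *t, int key)
{
    int *slot = &t->root;

    while (*slot != -1)
    {
        int c = bst_cmp(key, t->nodes[*slot].key);

        if (c == 0)
            return *slot;
        slot = (c < 0) ? &t->nodes[*slot].left : &t->nodes[*slot].right;
    }
    assert(t->count < t->capacity);
    t->nodes[t->count] = (BstNode){ key, -1, -1 };
    *slot = t->count;
    return t->count++;
}

/* O(log n) lookup on average, instead of a linear list scan. */
static int bst_find(const Bst *t, int key)
{
    int idx = t->root;

    while (idx != -1)
    {
        int c = bst_cmp(key, t->nodes[idx].key);

        if (c == 0)
            return idx;
        idx = (c < 0) ? t->nodes[idx].left : t->nodes[idx].right;
    }
    return -1;
}
```

Because new nodes always go into the next free array slot, a lookup structure and an insert-order list coexist in the same allocation.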
tlist_member() would\nbecome O(log n) instead of O(n), so perhaps there'd be not much point\nin having both search_indexed_tlist_for_var() and\nsearch_indexed_tlist_for_non_var().\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Thu, 31 Oct 2019 11:19:05 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "Hi,\n\nOn 2019-10-31 11:19:05 +1300, David Rowley wrote:\n> In Ottawa this year, Andres and I briefly talked about the possibility\n> of making a series of changes to how equalfuncs.c works. The idea was\n> to make it easy by using some pre-processor magic to allow us to\n> create another version of equalfuncs which would let us have an equal\n> comparison function that returns -1 / 0 / +1 rather than just true or\n> false.\n\nSee also the thread at\nhttps://www.postgresql.org/message-id/20190920051857.2fhnvhvx4qdddviz%40alap3.anarazel.de\nwhich would make this fairly easy, without having to compile equalfuncs\ntwice or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Oct 2019 15:29:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> In Ottawa this year, Andres and I briefly talked about the possibility\n> of making a series of changes to how equalfuncs.c works. The idea was\n> to make it easy by using some pre-processor magic to allow us to\n> create another version of equalfuncs which would let us have an equal\n> comparison function that returns -1 / 0 / +1 rather than just true or\n> false. This would allow us to Binary Search Trees of objects. 
I\n> identified that EquivalenceClass.ec_members would be better written as\n> a BST to allow much faster lookups in get_eclass_for_sort_expr().\n\nThis seems like a good idea, but why would we want to maintain two\nversions? Just change equalfuncs.c into comparefuncs.c, full stop.\nequal() would be a trivial wrapper for (compare_objects(a,b) == 0).\n\nAndres' ideas about autogenerating all that boilerplate aren't\nbad, but that's no justification for carrying two full sets of\nper-node logic when one set would do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 00:56:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Thu, 31 Oct 2019 at 17:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > In Ottawa this year, Andres and I briefly talked about the possibility\n> > of making a series of changes to how equalfuncs.c works. The idea was\n> > to make it easy by using some pre-processor magic to allow us to\n> > create another version of equalfuncs which would let us have an equal\n> > comparison function that returns -1 / 0 / +1 rather than just true or\n> > false. This would allow us to Binary Search Trees of objects. I\n> > identified that EquivalenceClass.ec_members would be better written as\n> > a BST to allow much faster lookups in get_eclass_for_sort_expr().\n>\n> This seems like a good idea, but why would we want to maintain two\n> versions? Just change equalfuncs.c into comparefuncs.c, full stop.\n> equal() would be a trivial wrapper for (compare_objects(a,b) == 0).\n\nIf we can do that without slowing down the comparison, then sure.\nChecking which node sorts earlier is a bit more expensive than just\nchecking for equality. 
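A minimal sketch of the wrapper idea and of the early-exit property being weighed here (the struct and its fields are invented for illustration; real node comparisons would be generated per node type):

```c
#include <assert.h>
#include <stdbool.h>

/* A made-up three-field "node".  A three-way comparison checks fields in a
 * fixed order and stops at the first mismatch, so on unequal inputs it costs
 * no more than an equality check; a hash function, by contrast, must always
 * consume every field. */
typedef struct FakeNode { int opno; int typeId; int collation; } FakeNode;

static int fake_node_compare(const FakeNode *a, const FakeNode *b)
{
    if (a->opno != b->opno)
        return (a->opno > b->opno) ? 1 : -1;    /* stop at first mismatch */
    if (a->typeId != b->typeId)
        return (a->typeId > b->typeId) ? 1 : -1;
    if (a->collation != b->collation)
        return (a->collation > b->collation) ? 1 : -1;
    return 0;
}

/* equal() becomes a trivial wrapper over the comparison, as suggested upthread. */
static bool fake_node_equal(const FakeNode *a, const FakeNode *b)
{
    return fake_node_compare(a, b) == 0;
}
```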
But if that's not going to be noticeable in\nreal-world test cases, then I agree.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Fri, 1 Nov 2019 09:17:25 +1300", "msg_from": "David Rowley <david.rowley@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "Hello,\n\nOn Fri, Oct 25, 2019 at 7:18 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Thu, Oct 24, 2019 at 03:48:57PM -0300, Alvaro Herrera wrote:\n> >On 2019-Oct-23, kato-sho@fujitsu.com wrote:\n> >\n> >> Hello\n> >>\n> >> To benchmark with tpcb model, I tried to create a foreign key in the partitioned history table, but backend process killed by OOM.\n> >> the number of partitions is 8192. I tried in master(commit: ad4b7aeb84).\n> >>\n> >> I did the same thing in another server which has 200GB memory, but creating foreign key did not end in 24 hours.\n> >\n> >Thanks for reporting.\n\nThank you Kato-san.\n\n> It sounds like there must be a memory leak here.\n> >I am fairly pressed for time at present so I won't be able to\n> >investigate this until, at least, mid November.\n>\n> I've briefly looked into this, and I think the main memory leak is in\n> RelationBuildPartitionDesc. It gets called with PortalContext, it\n> allocates a lot of memory building the descriptor, copies it into\n> CacheContext but does not even try to free anything. So we end up with\n> something like this:\n...\n> The attached patch trivially fixes that by adding a memory context\n> tracking all the temporary data, and then just deletes it as a whole at\n> the end of the function. This significantly reduces the memory usage for\n> me, not sure it's 100% correct.\n\nThank you Tomas. 
I think we have considered this temporary context\nfix a number of times before, but it got stalled for one reason or\nanother ([1] comes to mind as the last thread where this came up).\n\nAnother angle to look at this is that our design where PartitionDesc\nis rebuilt on relcache reload of the parent relation is not a great\none after all. It seems that we're rightly (?) invalidating the\nparent's relcache 8192 times in this case, because its cacheable\nforeign key descriptor changes on processing each partition, but\nPartitionDesc itself doesn't change. Having to pointlessly rebuild it\n8192 times seems really wasteful.\n\nI recall a discussion where it was proposed to build PartitionDesc\nonly when needed, as opposed to on every relcache reload of the parent\nrelation. The attached PoC-at-best patch that does that seems to go\nthrough without OOM and passes make check-world. I think this should\nhave a very minor impact on select queries.\n\nBut...\n\n> FWIW, even with this fix it still takes an awful lot to create the\n> foreign key, because the CPU is stuck doing this\n>\n> 60.78% 60.78% postgres postgres [.] bms_equal\n> 32.58% 32.58% postgres postgres [.] get_eclass_for_sort_expr\n> 3.83% 3.83% postgres postgres [.] add_child_rel_equivalences\n> 0.23% 0.00% postgres [unknown] [.] 0x0000000000000005\n> 0.22% 0.00% postgres [unknown] [.] 0000000000000000\n> 0.18% 0.18% postgres postgres [.] AllocSetCheck\n\n...we have many problems to solve here. 
:-(\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw%40mail.gmail.com", "msg_date": "Fri, 1 Nov 2019 17:37:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On 2019-Oct-25, Tomas Vondra wrote:\n\n> The attached patch trivially fixes that by adding a memory context\n> tracking all the temporary data, and then just deletes it as a whole at\n> the end of the function. This significantly reduces the memory usage for\n> me, not sure it's 100% correct.\n\nFWIW we already had this code (added by commit 2455ab48844c), but it was\nremoved by commit d3f48dfae42f. I think we should put it back. (I\nthink it may be useful to use a static MemoryContext that we can just\nreset each time, instead of creating and deleting each time, to save on\nmemcxt churn. That'd make the function non-reentrant, but I don't see\nthat we'd make the catalogs partitioned any time soon. This may be\npremature optimization though -- not really wedded to it.)\n\nWith Amit's patch to make RelationBuildPartitionDesc called lazily, the\ntime to plan the RI_InitialCheck query (using Kato Sho's test case) goes\nfrom 30 seconds to 14 seconds on my laptop. 
Obviously there's more that\nwe'd need to fix to make the scenario fully supported, but it seems a\ndecent step forward.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 13 Nov 2019 15:50:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Oct-25, Tomas Vondra wrote:\n>> The attached patch trivially fixes that by adding a memory context\n>> tracking all the temporary data, and then just deletes it as a whole at\n>> the end of the function. This significantly reduces the memory usage for\n>> me, not sure it's 100% correct.\n\n> FWIW we already had this code (added by commit 2455ab48844c), but it was\n> removed by commit d3f48dfae42f. I think we should put it back.\n\nI disagree. The point of d3f48dfae42f is that the management of that\nleakage is now being done at the caller level, and I'm quite firmly\nagainst having RelationBuildPartitionDesc duplicate that. If we\ndon't like the amount of space RelationBuildPartitionDesc is leaking,\nwe aren't going to like the amount of space that sibling routines\nsuch as RelationBuildTriggers leak, either.\n\nWhat we ought to be thinking about instead is adjusting the\nRECOVER_RELATION_BUILD_MEMORY heuristic in relcache.c. 
I am not\nsure what it ought to look like, but I doubt that \"do it all the\ntime\" has suddenly become the right answer, when it wasn't the\nright answer for 20-something years.\n\nIt's conceivable that \"do it if CCA is on, or if the current\ntable is a partition child table\" is a reasonable approach.\nBut I'm not sure whether we can know the relation relkind\nearly enough for that :-(\n\n(BTW, a different question one could ask is exactly why\nRelationBuildPartitionDesc is so profligate of leaked memory.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Nov 2019 14:31:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On 2019-Nov-13, Tom Lane wrote:\n\n> (BTW, a different question one could ask is exactly why\n> RelationBuildPartitionDesc is so profligate of leaked memory.)\n\nThe original partitioning code (f0e44751d717) decided that it didn't\nwant to bother with adding a \"free\" routine for PartitionBoundInfo\nstructs, maybe because it had too many pointers, so there's no way for\nRelationBuildPartitionDesc to free everything it allocates anyway. 
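For illustration, the temp-context pattern under discussion (allocate freely while building, copy the final result out, then drop everything at once) can be sketched standalone, with plain malloc standing in for PostgreSQL's MemoryContext machinery:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A toy "memory context": every allocation is recorded so that the whole
 * context can be dropped in one call, the way deleting a temporary context
 * releases everything that was allocated in it. */
typedef struct Scratch { void **chunks; int nchunks; int cap; } Scratch;

static void scratch_init(Scratch *s)
{
    s->cap = 8;
    s->nchunks = 0;
    s->chunks = malloc(sizeof(void *) * s->cap);
}

static void *scratch_alloc(Scratch *s, size_t size)
{
    if (s->nchunks == s->cap)
    {
        s->cap *= 2;
        s->chunks = realloc(s->chunks, sizeof(void *) * s->cap);
    }
    return s->chunks[s->nchunks++] = malloc(size);
}

/* Release every allocation at once; no per-object free bookkeeping. */
static void scratch_delete(Scratch *s)
{
    for (int i = 0; i < s->nchunks; i++)
        free(s->chunks[i]);
    free(s->chunks);
    s->nchunks = 0;
}

/* Build a result using arbitrary transient allocations, copy only the
 * final answer into caller-owned memory, then drop the scratch context. */
static int *build_result(int n)
{
    Scratch scratch;
    int *tmp, *result;

    scratch_init(&scratch);
    tmp = scratch_alloc(&scratch, sizeof(int) * n); /* transient work space */
    for (int i = 0; i < n; i++)
        tmp[i] = i * i;

    result = malloc(sizeof(int) * n);               /* long-lived copy */
    memcpy(result, tmp, sizeof(int) * n);

    scratch_delete(&scratch);                       /* all transients gone */
    return result;
}
```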
We\ncould add a couple of pfrees and list_frees here and there, but for the\nmain thing being leaked we'd need to improve that API.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 Nov 2019 17:00:11 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On 2019-Nov-13, Alvaro Herrera wrote:\n\n> On 2019-Nov-13, Tom Lane wrote:\n> \n> > (BTW, a different question one could ask is exactly why\n> > RelationBuildPartitionDesc is so profligate of leaked memory.)\n> \n> The original partitioning code (f0e44751d717) decided that it didn't\n> want to bother with adding a \"free\" routine for PartitionBoundInfo\n> structs, maybe because it had too many pointers, so there's no way for\n> RelationBuildPartitionDesc to free everything it allocates anyway. We\n> could add a couple of pfrees and list_frees here and there, but for the\n> main thing being leaked we'd need to improve that API.\n\nAh, we also leak an array of PartitionBoundSpec, which is a Node. Do we\nhave any way to free those? I don't think we do.\n\nIn short, it looks to me as if this function was explicitly designed\nwith the idea that it'd be called in a temp mem context.\n\nI looked at d3f48dfae42f again per your earlier suggestion. Doing that\nmemory context dance for partitioned relations does seem to fix the\nproblem too; we just need to move the context creation to just after\nScanPgRelation, at which point we have the relkind. (Note: I think the\nproblematic case is the partitioned table, not the partitions\nthemselves. At least, with the attached patch the problem goes away. 
I\nguess it would be sensible to research whether we need to do this for\nrelispartition=true as well, but I haven't done that.)\n\nThere is indeed some leakage for relations that have triggers too (or\nrules), but in order for those to become significant you would have to\nhave thousands of triggers or rules ... and in reasonable designs, you\njust don't because it doesn't make sense. But it is not totally\nunreasonable to have lots of partitions, and as we improve the system,\nmore and more people will want to.\n\n\nAside: while messing with this I noticed how significant pg_strtok\nis as a resource hog when building partition descs (from the\nstringToNode that's applied to each partition's partbound.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 13 Nov 2019 18:45:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Wed, Nov 13, 2019 at 4:46 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> But it is not totally\n> unreasonable to have lots of partitions, and as we improve the system,\n> more and more people will want to.\n\nYep.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 Nov 2019 08:30:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On 11/13/19 4:45 PM, Alvaro Herrera wrote:\n >\n> But it is not totally\n> unreasonable to have lots of partitions, and as we improve the system,\n> more and more people will want to.\n\n+1\n\nThis patch still applies but there seems to be some disagreement on how \nto proceed.\n\nAny thoughts?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 24 Mar 2020 
10:39:09 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On 2020-Mar-24, David Steele wrote:\n\n> This patch still applies but there seems to be some disagreement on\n> how to proceed.\n\nActually, I don't think there's any disagreement regarding the patch I\nlast posted. (There was disagreement on the previous patches, which\nwere very different). Tom suggested to look at the heuristics used for\nRECOVER_RELATION_BUILD_MEMORY, and the patch does exactly that. It\nwould be great if Kato Sho can try the original test case with my latest\npatch (the one in https://postgr.es/m/20191113214544.GA16060@alvherre.pgsql )\nand let us know if it improves things.\n\nThe patch as posted generates these warnings in my current GCC that it\ndidn't when I checked last, but they're harmless -- if/when I push,\nit'll be without the parens.\n\n/pgsql/source/master/src/backend/utils/cache/relcache.c:1064:21: warning: equality comparison with extraneous parentheses [-Wparentheses-equality]\n if ((relp->relkind == RELKIND_PARTITIONED_TABLE)\n ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master/src/backend/utils/cache/relcache.c:1064:21: note: remove extraneous parentheses around the comparison to silence this warning\n if ((relp->relkind == RELKIND_PARTITIONED_TABLE)\n ~ ^ ~\n/pgsql/source/master/src/backend/utils/cache/relcache.c:1064:21: note: use '=' to turn this equality comparison into an assignment\n if ((relp->relkind == RELKIND_PARTITIONED_TABLE)\n ^~\n =\n/pgsql/source/master/src/backend/utils/cache/relcache.c:1242:33: warning: equality comparison with extraneous parentheses [-Wparentheses-equality]\n if ((relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master/src/backend/utils/cache/relcache.c:1242:33: note: remove extraneous parentheses around the comparison 
to silence this warning\n if ((relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n ~ ^ ~\n/pgsql/source/master/src/backend/utils/cache/relcache.c:1242:33: note: use '=' to turn this equality comparison into an assignment\n if ((relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n ^~\n =\n2 warnings generated.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 24 Mar 2020 12:26:23 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "> On 24 Mar 2020, at 16:26, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> It\n> would be great if Kato Sho can try the original test case with my latest\n> patch (the one in https://postgr.es/m/20191113214544.GA16060@alvherre.pgsql )\n> and let us know if it improves things.\n\nHi!,\n\nAre you able to test Alvaro's latest patch to see if that solves the originally\nreported problem, so that we can reach closure on this item during the\ncommitfest?\n\ncheers ./daniel\n\n", "msg_date": "Tue, 14 Jul 2020 16:28:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Tuesday, July 14, 2020 11:29 PM, Daniel Gustafsson wrote:\n>Are you able to test Alvaro's latest patch to see if that solves the originally reported problem, so that we can reach >closure on this item during the commitfest?\n\nSorry for the late reply. I missed this mail.\nAnd, thanks for writing patches. 
I start test now.\nI'll report the result before the end of August .\n\nRegards,\nSho kato\n\n\n", "msg_date": "Wed, 5 Aug 2020 00:43:14 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Wednesday, August 5, 2020 9:43 AM I wrote:\n> I'll report the result before the end of August .\n\nI test v2-0001-build-partdesc-memcxt.patch at 9a9db08ae4 and it is ok.\n\nFirstly, I execute ALTER TABLE ADD CONSTRAINT FOREIGN KEY on the table which has 8k tables.\nThis query execution completes in about 22 hours without OOM.\n\nSecondary, I confirm the reduction of memory context usage.\nRunning with 8k partitions takes too long, I confirm with 1k partitions.\nI use gdb and call MemoryContextStats(TopMemoryContext) at addFkRecurseReferencing().\n\nCacheMemoryContext size becomes small, so I think it is working as expected.\nThe Results are as follows.\n\n- before applying patch\n\nTopMemoryContext: 418896 total in 18 blocks; 91488 free (13 chunks); 327408 used\n pgstat TabStatusArray lookup hash table: 65536 total in 4 blocks; 16808 free (7 chunks); 48728 used\n TopTransactionContext: 4194304 total in 10 blocks; 1045728 free (18 chunks); 3148576 used\n TableSpace cache: 8192 total in 1 blocks; 2048 free (0 chunks); 6144 used\n Type information cache: 24624 total in 2 blocks; 2584 free (0 chunks); 22040 used\n Operator lookup cache: 24576 total in 2 blocks; 10712 free (4 chunks); 13864 used\n RowDescriptionContext: 8192 total in 1 blocks; 6880 free (0 chunks); 1312 used\n MessageContext: 8192 total in 1 blocks; 3064 free (0 chunks); 5128 used\n Operator class cache: 8192 total in 1 blocks; 512 free (0 chunks); 7680 used\n smgr relation table: 32768 total in 3 blocks; 16768 free (8 chunks); 16000 used\n TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0 chunks); 264 used\n Portal hash: 8192 total in 1 blocks; 512 free (0 chunks); 
7680 used\n TopPortalContext: 8192 total in 1 blocks; 7648 free (0 chunks); 544 used\n PortalContext: 9621216 total in 1179 blocks; 13496 free (13 chunks); 9607720 used:\n Relcache by OID: 16384 total in 2 blocks; 3424 free (3 chunks); 12960 used\n CacheMemoryContext: 4243584 total in 12 blocks; 1349808 free (12 chunks); 2893776 used\n index info: 2048 total in 2 blocks; 736 free (0 chunks); 1312 used: pg_trigger_tgconstraint_index\n index info: 2048 total in 2 blocks; 736 free (0 chunks); 1312 used: pg_trigger_oid_index\n index info: 2048 total in 2 blocks; 352 free (1 chunks); 1696 used: pg_inherits_relid_seqno_index\n partition descriptor: 65344 total in 12 blocks; 7336 free (4 chunks); 58008 used: accounts\n index info: 2048 total in 2 blocks; 736 free (0 chunks); 1312 used: pg_inherits_parent_index\n partition key: 1024 total in 1 blocks; 160 free (0 chunks); 864 used: accounts\n ...\n index info: 2048 total in 2 blocks; 736 free (2 chunks); 1312 used: pg_database_oid_index\n index info: 2048 total in 2 blocks; 736 free (2 chunks); 1312 used: pg_authid_rolname_index\n WAL record construction: 49776 total in 2 blocks; 6344 free (0 chunks); 43432 used\n PrivateRefCount: 8192 total in 1 blocks; 2584 free (0 chunks); 5608 used\n MdSmgr: 8192 total in 1 blocks; 5528 free (0 chunks); 2664 used\n LOCALLOCK hash: 131072 total in 5 blocks; 26376 free (15 chunks); 104696 used\n Timezones: 104128 total in 2 blocks; 2584 free (0 chunks); 101544 used\n ErrorContext: 8192 total in 1 blocks; 7928 free (3 chunks); 264 used\nGrand total: 19322960 bytes in 1452 blocks; 2743560 free (186 chunks); 16579400 used\n\n- after applying patch\n\nTopMemoryContext: 418896 total in 18 blocks; 91488 free (13 chunks); 327408 used\n pgstat TabStatusArray lookup hash table: 65536 total in 4 blocks; 16808 free (7 chunks); 48728 used\n TopTransactionContext: 4194304 total in 10 blocks; 1045728 free (18 chunks); 3148576 used\n RowDescriptionContext: 8192 total in 1 blocks; 6880 free (0 chunks); 
1312 used\n MessageContext: 8192 total in 1 blocks; 3064 free (0 chunks); 5128 used\n Operator class cache: 8192 total in 1 blocks; 512 free (0 chunks); 7680 used\n smgr relation table: 32768 total in 3 blocks; 16768 free (8 chunks); 16000 used\n TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0 chunks); 264 used\n Portal hash: 8192 total in 1 blocks; 512 free (0 chunks); 7680 used\n TopPortalContext: 8192 total in 1 blocks; 7648 free (0 chunks); 544 used\n PortalContext: 9621216 total in 1179 blocks; 13496 free (13 chunks); 9607720 used:\n Relcache by OID: 16384 total in 2 blocks; 3424 free (3 chunks); 12960 used\n CacheMemoryContext: 2113600 total in 10 blocks; 556240 free (10 chunks); 1557360 used\n index info: 2048 total in 2 blocks; 736 free (0 chunks); 1312 used: pg_trigger_tgconstraint_index\n index info: 2048 total in 2 blocks; 736 free (0 chunks); 1312 used: pg_trigger_oid_index\n index info: 2048 total in 2 blocks; 352 free (1 chunks); 1696 used: pg_inherits_relid_seqno_index\n partition descriptor: 65344 total in 12 blocks; 7336 free (4 chunks); 58008 used: accounts\n index info: 2048 total in 2 blocks; 736 free (0 chunks); 1312 used: pg_inherits_parent_index\n partition key: 1024 total in 1 blocks; 160 free (0 chunks); 864 used: accounts\n ...\n index info: 2048 total in 2 blocks; 736 free (2 chunks); 1312 used: pg_database_oid_index\n index info: 2048 total in 2 blocks; 736 free (2 chunks); 1312 used: pg_authid_rolname_index\n WAL record construction: 49776 total in 2 blocks; 6344 free (0 chunks); 43432 used\n PrivateRefCount: 8192 total in 1 blocks; 2584 free (0 chunks); 5608 used\n MdSmgr: 8192 total in 1 blocks; 6360 free (0 chunks); 1832 used\n LOCALLOCK hash: 131072 total in 5 blocks; 26376 free (15 chunks); 104696 used\n Timezones: 104128 total in 2 blocks; 2584 free (0 chunks); 101544 used\n ErrorContext: 8192 total in 1 blocks; 7928 free (3 chunks); 264 used\nGrand total: 17131488 bytes in 1441 blocks; 1936008 free (234 chunks); 
15195480 used\n\nFinally, I ran make check and all tests passed.\nSo, I'll change this patch's status to ready for committer.\n\nRegards,\nSho Kato\n\n\n", "msg_date": "Thu, 6 Aug 2020 07:25:00 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "Hi Alvaro,\n\nOn Thu, Aug 6, 2020 at 4:25 PM kato-sho@fujitsu.com\n<kato-sho@fujitsu.com> wrote:\n> On Wednesday, August 5, 2020 9:43 AM I wrote:\n> > I'll report the result before the end of August.\n>\n> I test v2-0001-build-partdesc-memcxt.patch at 9a9db08ae4 and it is ok.\n\nIs this patch meant for HEAD or back-patching? 
I ask because v13 got this:\n\ncommit 5b9312378e2f8fb35ef4584aea351c3319a10422\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Dec 25 14:43:13 2019 -0500\n\n Load relcache entries' partitioning data on-demand, not immediately.\n\nwhich prevents a partitioned table's PartitionDesc from being rebuilt\nrepeatedly as would happen before this commit in Kato-san's case,\nbecause it moves RelationBuildPartitionDesc out of the relcache flush\ncode path.\n\nSo, the OOM situation that Kato-san originally reported should not occur\nas of v13.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Aug 2020 14:02:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On 2020-Aug-19, Amit Langote wrote:\n\nHello\n\n> On Thu, Aug 6, 2020 at 4:25 PM kato-sho@fujitsu.com\n> <kato-sho@fujitsu.com> wrote:\n> > On Wednesday, August 5, 2020 9:43 AM I wrote:\n> > > I'll report the result before the end of August.\n> >\n> > I test v2-0001-build-partdesc-memcxt.patch at 9a9db08ae4 and it is ok.\n> \n> Is this patch meant for HEAD or back-patching? 
I ask because v13 got this:\n> \n> commit 5b9312378e2f8fb35ef4584aea351c3319a10422\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Wed Dec 25 14:43:13 2019 -0500\n> \n> Load relcache entries' partitioning data on-demand, not immediately.\n> \n> which prevents a partitioned table's PartitionDesc from being rebuilt\n> repeatedly as would happen before this commit in Kato-san's case,\n> because it moves RelationBuildPartitionDesc out of the relcache flush\n> code path.\n\nHmm, so this is a problem only in v11 and v12? It seems that putting\nthe patch in master *only* is pointless. OTOH v11 had other severe\nperformance drawbacks with lots of partitions, so it might not be needed\nthere.\n\nI admit I'm hesitant to carry code in only one or two stable branches\nthat exists nowhere else. But maybe the problem is serious enough in\nthose branches (that will still live for quite a few years) that we\nshould get it there.\n\nOTOH it could be argued that the coding in master is not all that great\nanyway (given the willingness for memory to be leaked) that it should\napply to all three branches.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Aug 2020 14:06:12 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 20, 2020 at 3:06 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Aug-19, Amit Langote wrote:\n> > On Thu, Aug 6, 2020 at 4:25 PM kato-sho@fujitsu.com\n> > <kato-sho@fujitsu.com> wrote:\n> > > On Wednesday, August 5, 2020 9:43 AM I wrote:\n> > > > I'll report the result before the end of August.\n> > >\n> > > I test v2-0001-build-partdesc-memcxt.patch at 9a9db08ae4 and it is ok.\n> >\n> > Is this patch meant for HEAD or back-patching? 
(You may have noticed that\nthe leak that occurs when rebuilding referencing table's PartitionDesc\naccumulates while addFkRecurseReferenced is looping on referenced\ntable's partitions.)\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Aug 2020 10:50:29 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Thu, Aug 20, 2020 at 10:50 AM Amit Langote <amitlangote09@gmail.com> wrote:\n On Thu, Aug 20, 2020 at 3:06 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> > On 2020-Aug-19, Amit Langote wrote:\n> > > On Thu, Aug 6, 2020 at 4:25 PM kato-sho@fujitsu.com\n> > > <kato-sho@fujitsu.com> wrote:\n> > > > On Wednesday, August 5, 2020 9:43 AM I wrote:\n> > > > > I'll report the result before the end of August .\n> > > >\n> > > > I test v2-0001-build-partdesc-memcxt.patch at 9a9db08ae4 and it is ok.\n>\n> Fwiw, I am fine with applying the memory-leak fix in all branches down\n> to v12 if we are satisfied with the implementation.\n\nI have revised the above patch slightly to introduce a variable for\nthe condition whether to use a temporary memory context.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 21 Aug 2020 11:20:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n>> Fwiw, I am fine with applying the memory-leak fix in all branches down\n>> to v12 if we are satisfied with the implementation.\n\n> I have revised the above patch slightly to introduce a variable for\n> the condition whether to use a temporary memory context.\n\nThis CF entry has been marked \"ready for committer\", which I find\ninappropriate since there doesn't seem to be any consensus about\nwhether we need 
it.\n\nI tried running the original test case under HEAD. I do not see\nany visible memory leak, which I think indicates that 5b9312378 or\nsome other fix has taken care of the leak since the original report.\nHowever, after waiting awhile and noting that the ADD FOREIGN KEY\nwasn't finishing, I poked into its progress with a debugger and\nobserved that each iteration of RI_Initial_Check() was taking about\n15 seconds. I presume we have to do that for each partition,\nwhich leads to the estimate that it'll take 34 hours to finish this\n... and that's with no data in the partitions, god help me if\nthere'd been a lot.\n\nSome quick \"perf\" work says that most of the time seems to be\ngetting spent in the planner, in get_eclass_for_sort_expr().\nSo this is likely just a variant of performance issues we've\nseen before. (I do wonder why we're not able to prune the\njoin to just the matching PK partition, though.)\n\nAnyway, the long and the short of it is that this test case is far\nlarger than anything anyone could practically use in HEAD, let alone\nin released branches. I can't get excited about back-patching a fix\nto a memory leak if that's just going to allow people to hit other\nperformance-killing issues.\n\nIn short, I don't see a reason why we need this patch in any branch,\nso I recommend rejecting it. 
If we did think we need a leak fix in\nthe back branches, back-porting 5b9312378 would likely be a saner\nway to proceed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Sep 2020 14:35:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "On Fri, Sep 4, 2020 at 12:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> >> Fwiw, I am fine with applying the memory-leak fix in all branches down\n> >> to v12 if we are satisfied with the implementation.\n>\n> > I have revised the above patch slightly to introduce a variable for\n> > the condition whether to use a temporary memory context.\n>\n> This CF entry has been marked \"ready for committer\", which I find\n> inappropriate since there doesn't seem to be any consensus about\n> whether we need it.\n>\n> I tried running the original test case under HEAD. I do not see\n> any visible memory leak, which I think indicates that 5b9312378 or\n> some other fix has taken care of the leak since the original report.\n> However, after waiting awhile and noting that the ADD FOREIGN KEY\n> wasn't finishing, I poked into its progress with a debugger and\n> observed that each iteration of RI_Initial_Check() was taking about\n> 15 seconds. I presume we have to do that for each partition,\n> which leads to the estimate that it'll take 34 hours to finish this\n> ... and that's with no data in the partitions, god help me if\n> there'd been a lot.\n>\n> Some quick \"perf\" work says that most of the time seems to be\n> getting spent in the planner, in get_eclass_for_sort_expr().\n> So this is likely just a variant of performance issues we've\n> seen before. 
(I do wonder why we're not able to prune the\n> join to just the matching PK partition, though.)\n>\n\nConsider this example\npostgres=# create table t1 (a int, b int, CHECK (a between 100 and 150));\nCREATE TABLE\npostgres=# create table part(a int, b int) partition by range(a);\nCREATE TABLE\npostgres=# create table part_p1 partition of part for values from (0) to (50);\nCREATE TABLE\npostgres=# create table part_p2 partition of part for values from (50) to (100);\nCREATE TABLE\npostgres=# create table part_p3 partition of part for values from\n(100) to (150);\nCREATE TABLE\npostgres=# create table part_p4 partition of part for values from\n(150) to (200);\nCREATE TABLE\npostgres=# explain (costs off) select * from t1 r1, part r2 where r1.a = r2.a;\n QUERY PLAN\n--------------------------------------\n Hash Join\n Hash Cond: (r2.a = r1.a)\n -> Append\n -> Seq Scan on part_p1 r2_1\n -> Seq Scan on part_p2 r2_2\n -> Seq Scan on part_p3 r2_3\n -> Seq Scan on part_p4 r2_4\n -> Hash\n -> Seq Scan on t1 r1\n(9 rows)\n\nGiven that t1.a can not have any value less than 100 and greater than\n150, any row in t1 won't have its joining partner in part_p1 and\npart_p2. So those two partitions can be pruned. But I think we don't\nconsider the constraints on table when joining two tables to render a\njoin empty or even prune partitions. That would be a good optimization\nwhich will improve this case as well.\n\nBut further to that, I think when we add constraint on the partition\ntable which translates to constraints on individual partitions, we\nshould check the entire partitioned relation rather than individual\npartitions. If we do that, we won't need to plan query for every\npartition. If the foreign key happens to be partition key e.g in star\nschema, this will use partitionwise join to further improve query\nperformance. 
Somewhere in future, we will be able to repartition the\nforeign key table by foreign key and perform partitionwise join.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 4 Sep 2020 17:04:29 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" }, { "msg_contents": "I ran into a situation that echoed this original one by Kato in the\nstart of this thread:\nhttps://www.postgresql.org/message-id/OSAPR01MB374809E8DE169C8BF2B82CBD9F6B0%40OSAPR01MB3748.jpnprd01.prod.outlook.com\n\nMore below.\n\nTom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:\n>\n> I tried running the original test case under HEAD. I do not see\n> any visible memory leak, which I think indicates that 5b9312378 or\n> some other fix has taken care of the leak since the original report.\n> However, after waiting awhile and noting that the ADD FOREIGN KEY\n> wasn't finishing, I poked into its progress with a debugger and\n> observed that each iteration of RI_Initial_Check() was taking about\n> 15 seconds. I presume we have to do that for each partition,\n> which leads to the estimate that it'll take 34 hours to finish this\n> ... and that's with no data in the partitions, god help me if\n> there'd been a lot.\n>\n> Some quick \"perf\" work says that most of the time seems to be\n> getting spent in the planner, in get_eclass_for_sort_expr().\n> So this is likely just a variant of performance issues we've\n> seen before. (I do wonder why we're not able to prune the\n> join to just the matching PK partition, though.)\n\nI found that the original example from Kato finishes in a little over\na minute now to create the FK constraint in Postgres 14-16.\n\nHowever, in my case, I'm using composite partitions and that is taking\n60x as long for an equivalent number of partitions. 
I must emphasize\nthis is with ZERO rows of data.\n\nI'm using 1200 partitions in my example to finish a bit faster and\nbecause that's my actual use case and less extreme than 8,000\npartitions.\n\nIf I reach my 1,200 with 80 top-level and 15 leaf-level partitions in\na composite hierarchy (80x15 = 1,200 still) the speed is very slow.\nI'm using composite partitions primarily because in my real code I\nneed LIST partitions which don't support multiple keys so I worked\naround that using composite partitions with one LIST key in each\nlevel. Doing some other workaround like a concatenated single key was\nmessy for my use case.\n\nI modified Kato's test case to repeat the issue I'm having and saw\nsome very odd query plan behavior which is likely part of the issue.\n\nThe flat version with 1,200 partitions at a single level and no\ncomposite partitions finishes in a little over a second while the 80 x\n15 version with composite partitions takes over a minute (60x longer).\nIn my actual database with many such partitions with FK, the time\ncompounds and FK creation takes >30 minutes per FK leading to hours\njust making FKs.\n\n== COMPOSITE PARTITION INTERNAL SELECT PLAN ==\n\nIf I cancel the composite FK creation, I see where it stopped and that\ngives a clue about the difference in speed. 
For the composite, it's\nthis statement with a plan linked on dalibo showing a massive amount\nof sequential scans and Postgres making some assumptions about 1 row\nexisting.\n\nERROR: canceling statement due to user request\nCONTEXT: SQL statement SELECT fk.\"aid\" FROM ONLY\n\"public\".\"xhistory_25_12\" fk LEFT OUTER JOIN \"public\".\"xaccounts\" pk\nON ( pk.\"aid\" OPERATOR(pg_catalog.=) fk.\"aid\") WHERE pk.\"aid\" IS NULL\nAND (fk.\"aid\" IS NOT NULL)\nSQL state: 57014\n\nPLAN DETAILS: https://explain.dalibo.com/plan/fad72gdacb6727b4#plan\n\n== FLAT PARTITION INTERNAL SELECT PLAN ==\n\nThis gives a direct result node and has no complexity at all\n\nSELECT fk.\"aid\" FROM ONLY \"public\".\"history_23\" fk LEFT OUTER JOIN\n\"public\".\"accounts\" pk ON ( pk.\"aid\" OPERATOR(pg_catalog.=) fk.\"aid\")\nWHERE pk.\"aid\" IS NULL AND (fk.\"aid\" IS NOT NULL)\n\nPLAN DETAILS: https://explain.dalibo.com/plan/a83dae9b9569ebcd\n\nTest cases to repeat easily below:\n\n== FLAT PARTITION FAST FK DDL ==\n\nCREATE DATABASE fastflatfk\n\nCREATE TABLE accounts (aid INTEGER, bid INTEGER, abalance INTEGER,\nfiller CHAR(84)) PARTITION BY HASH(aid);\nCREATE TABLE history (tid INTEGER, bid INTEGER, aid INTEGER, delta\nINTEGER, mtime TIMESTAMP, filler CHAR(22)) PARTITION BY HASH(aid);\n\nDO $$\nDECLARE\n p INTEGER;\nBEGIN\n FOR p IN 0..1023 LOOP\n EXECUTE 'CREATE TABLE accounts_' || p || ' PARTITION OF accounts\nFOR VALUES WITH (modulus 1024, remainder ' || p || ') PARTITION BY\nHASH(aid);';\n EXECUTE 'CREATE TABLE history_' || p || ' PARTITION OF history FOR\nVALUES WITH (modulus 1024, remainder ' || p || ') PARTITION BY\nHASH(aid);';\n END LOOP;\nEND $$;\n\nALTER TABLE accounts ADD CONSTRAINT accounts_pk PRIMARY KEY (aid);\n\n-- Query returned successfully in 1 secs 547 msec.\nALTER TABLE history ADD CONSTRAINT history_fk FOREIGN KEY (aid)\nREFERENCES accounts (aid) ON DELETE CASCADE;\n\n--run to drop FK before you recreate it\n--ALTER TABLE history DROP CONSTRAINT 
history_fk\n\n== COMPOSITE PARTITION SLOW FK DDL ==\n\nNow the composite partition version with 80 x 15 partitions which\nfinishes in a bit over a minute (60x the time)\n\nCREATE DATABASE slowcompfk\n\n-- Create the parent tables for xaccounts and xhistory\nCREATE TABLE xaccounts (aid INTEGER, bid INTEGER, abalance INTEGER,\nfiller CHAR(84)) PARTITION BY HASH(aid);\nCREATE TABLE xhistory (tid INTEGER, bid INTEGER, aid INTEGER, delta\nINTEGER, mtime TIMESTAMP, filler CHAR(22)) PARTITION BY HASH(aid);\n\n-- Generate SQL for creating 80 partitions for xaccounts\nDO $$\nDECLARE\n p INTEGER;\nBEGIN\n FOR p IN 0..79 LOOP\n EXECUTE 'CREATE TABLE xaccounts_' || p || ' PARTITION OF xaccounts\nFOR VALUES WITH (modulus 80, remainder ' || p || ') PARTITION BY\nHASH(aid);';\n END LOOP;\nEND $$;\n\n-- Generate SQL for creating 15 sub-partitions within each partition\nfor xaccounts\nDO $$\nDECLARE\n main_partition INTEGER;\n sub_partition INTEGER;\nBEGIN\n FOR main_partition IN 0..79 LOOP\n FOR sub_partition IN 0..14 LOOP\n EXECUTE 'CREATE TABLE xaccounts_' || main_partition || '_' ||\nsub_partition || ' PARTITION OF xaccounts_' || main_partition || ' FOR\nVALUES WITH (modulus 15, remainder ' || sub_partition || ');';\n END LOOP;\n END LOOP;\nEND $$;\n\n-- Generate SQL for creating 80 partitions for xhistory\nDO $$\nDECLARE\n p INTEGER;\nBEGIN\n FOR p IN 0..79 LOOP\n EXECUTE 'CREATE TABLE xhistory_' || p || ' PARTITION OF xhistory\nFOR VALUES WITH (modulus 80, remainder ' || p || ') PARTITION BY\nHASH(aid);';\n END LOOP;\nEND $$;\n\n-- Generate SQL for creating 15 sub-partitions within each partition\nfor xhistory\nDO $$\nDECLARE\n main_partition INTEGER;\n sub_partition INTEGER;\nBEGIN\n FOR main_partition IN 0..79 LOOP\n FOR sub_partition IN 0..14 LOOP\n EXECUTE 'CREATE TABLE xhistory_' || main_partition || '_' ||\nsub_partition || ' PARTITION OF xhistory_' || main_partition || ' FOR\nVALUES WITH (modulus 15, remainder ' || sub_partition || ');';\n END LOOP;\n END LOOP;\nEND 
$$;\n\nALTER TABLE xaccounts ADD CONSTRAINT xaccounts_pk PRIMARY KEY (aid);\n\n-- Query returned successfully in 1 min 18 secs.\nALTER TABLE xhistory ADD CONSTRAINT xhistory_fk FOREIGN KEY (aid)\nREFERENCES xaccounts (aid) ON DELETE CASCADE;\n\n--run to drop FK before you recreate it\n--ALTER TABLE xhistory DROP CONSTRAINT xhistory_fk\n\n\n", "msg_date": "Mon, 23 Oct 2023 16:39:59 -0500", "msg_from": "Alec Lazarescu <alecl@alecl.com>", "msg_from_op": false, "msg_subject": "Re: Creating foreign key on partitioned table is too slow" } ]
[ { "msg_contents": "Hi,\n\nWhile working on some slides explaining EXPLAIN, I couldn't resist the\nurge to add the missing $SUBJECT. The attached 0001 patch gives the\nfollowing:\n\nGather ... time=0.146..33.077 rows=1 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=4425\n -> Parallel Seq Scan on public.t ... time=19.421..30.092 rows=0 loops=3)\n Filter: (t.i = 42)\n Rows Removed by Filter: 333333\n Leader: actual time=0.013..32.025 rows=1 loops=1 <--- NEW\n Buffers: shared hit=1546 <--- NEW\n Worker 0: actual time=29.069..29.069 rows=0 loops=1\n Buffers: shared hit=1126\n Worker 1: actual time=29.181..29.181 rows=0 loops=1\n Buffers: shared hit=1753\n\nWithout that, you have to deduce what work was done in the leader, but\nI'd rather just show it.\n\nThe 0002 patch adjusts Sort for consistency with that scheme, so you get:\n\nSort ... time=84.303..122.238 rows=333333 loops=3)\n Output: t1.i\n Sort Key: t1.i\n Leader: Sort Method: external merge Disk: 5864kB <--- DIFFERENT\n Worker 0: Sort Method: external merge Disk: 3376kB\n Worker 1: Sort Method: external merge Disk: 4504kB\n Leader: actual time=119.624..165.949 rows=426914 loops=1\n Worker 0: actual time=61.239..90.984 rows=245612 loops=1\n Worker 1: actual time=72.046..109.782 rows=327474 loops=1\n\nWithout the \"Leader\" label, it's not really clear to the uninitiated\nwhether you're looking at combined, average or single process numbers.\n\nOf course there are some more things that could be reported in a\nsimilar way eventually, such as filter counters and hash join details.\n\nFor the XML/JSON/YAML formats, I decided to use a <Worker> element\nwith <Worker-Number>-1</Worker-Number> to indicate the leader.\nPerhaps there should be a <Leader> element instead?\n\nThoughts?", "msg_date": "Wed, 23 Oct 2019 20:29:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Wed, Oct 
23, 2019 at 12:30 AM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n>\n> While working on some slides explaining EXPLAIN, I couldn't resist the\n> urge to add the missing $SUBJECT. The attached 0001 patch gives the\n> following:\n>\n> Gather ... time=0.146..33.077 rows=1 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=4425\n> -> Parallel Seq Scan on public.t ... time=19.421..30.092 rows=0 loops=3)\n> Filter: (t.i = 42)\n> Rows Removed by Filter: 333333\n> Leader: actual time=0.013..32.025 rows=1 loops=1 <--- NEW\n> Buffers: shared hit=1546 <--- NEW\n> Worker 0: actual time=29.069..29.069 rows=0 loops=1\n> Buffers: shared hit=1126\n> Worker 1: actual time=29.181..29.181 rows=0 loops=1\n> Buffers: shared hit=1753\n>\n> Without that, you have to deduce what work was done in the leader, but\n> I'd rather just show it.\n>\n> The 0002 patch adjusts Sort for consistency with that scheme, so you get:\n>\n> Sort ... time=84.303..122.238 rows=333333 loops=3)\n> Output: t1.i\n> Sort Key: t1.i\n> Leader: Sort Method: external merge Disk: 5864kB <--- DIFFERENT\n> Worker 0: Sort Method: external merge Disk: 3376kB\n> Worker 1: Sort Method: external merge Disk: 4504kB\n> Leader: actual time=119.624..165.949 rows=426914 loops=1\n> Worker 0: actual time=61.239..90.984 rows=245612 loops=1\n> Worker 1: actual time=72.046..109.782 rows=327474 loops=1\n>\n> Without the \"Leader\" label, it's not really clear to the uninitiated\n> whether you're looking at combined, average or single process numbers.\n>\n\nCool! I dig it.\nChecked out the patches a bit and noticed that the tuplesort\ninstrumentation uses spaceUsed and I saw this comment in\ntuplesort_get_stats()\n\n/*\n* Note: it might seem we should provide both memory and disk usage for a\n* disk-based sort. 
However, the current code doesn't track memory space\n* accurately once we have begun to return tuples to the caller (since we\n* don't account for pfree's the caller is expected to do), so we cannot\n* rely on availMem in a disk sort. This does not seem worth the overhead\n* to fix. Is it worth creating an API for the memory context code to\n* tell us how much is actually used in sortcontext?\n*/\n\nmight it be worth trying out the memory accounting API\n5dd7fc1519461548eebf26c33eac6878ea3e8788 here?\n\n\n>\n>\n> Of course there are some more things that could be reported in a\n> similar way eventually, such as filter counters and hash join details.\n>\n>\nThis made me think about other explain wishlist items.\nFor parallel hashjoin, I would find it useful to know which batches\neach worker participated in (maybe just probing to start with, but\nloading would be great too).\n\nI'm not sure anyone else (especially users) would care about this,\nthough.\n\n-- \nMelanie Plageman\n", "msg_date": "Wed, 30 Oct 2019 09:24:32 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Wed, Oct 30, 2019 at 9:24 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Checked out the patches a bit and noticed that the tuplesort\n> instrumentation uses spaceUsed and I saw this comment in\n> tuplesort_get_stats()\n\n> might it be worth trying out the memory accounting API\n> 5dd7fc1519461548eebf26c33eac6878ea3e8788 here?\n\nI made exactly the same suggestion several years back, not long after\nthe API was first proposed by Jeff. However, tuplesort.c has changed a\nlot since that time, to the extent that that comment now seems kind of\nobsolete. These days, availMem accounting still isn't used at all for\ndisk sorts. Rather, the slab allocator is used. Virtually all the\nmemory used to merge is now managed by logtape.c, with only fixed\nper-tape memory buffers within tuplesort.c. This per-tape stuff is\ntiny anyway -- slightly more than 1KiB per tape.\n\nIt would now be relatively straightforward to report the memory used\nby disk-based sorts, without needing to use the memory accounting API.\nI imagine that it would report the high watermark memory usage during\nthe final merge. It's possible for this to be lower than the high\nwatermark during initial run generation (e.g. 
because of the\nMaxAllocSize limit in buffer size within logtape.c), but that still\nseems like the most useful figure to users. There'd be a new\n\"LogicalTapeSetMemory()\" function to go along with the existing\nLogicalTapeSetBlocks() function, or something along those lines.\n\nNot planning to work on this now, but perhaps you have time for it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Oct 2019 10:39:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Wed, Oct 30, 2019 at 10:39:04AM -0700, Peter Geoghegan wrote:\n>On Wed, Oct 30, 2019 at 9:24 AM Melanie Plageman\n><melanieplageman@gmail.com> wrote:\n>> Checked out the patches a bit and noticed that the tuplesort\n>> instrumentation uses spaceUsed and I saw this comment in\n>> tuplesort_get_stats()\n>\n>> might it be worth trying out the memory accounting API\n>> 5dd7fc1519461548eebf26c33eac6878ea3e8788 here?\n>\n>I made exactly the same suggestion several years back, not long after\n>the API was first proposed by Jeff. However, tuplesort.c has changed a\n>lot since that time, to the extent that that comment now seems kind of\n>obsolete. These days, availMem accounting still isn't used at all for\n>disk sorts. Rather, the slab allocator is used. Virtually all the\n>memory used to merge is now managed by logtape.c, with only fixed\n>per-tape memory buffers within tuplesort.c. This per-tape stuff is\n>tiny anyway -- slightly more than 1KiB per tape.\n>\n>It would now be relatively straightforward to report the memory used\n>by disk-based sorts, without needing to use the memory accounting API.\n>I imagine that it would report the high watermark memory usage during\n>the final merge. It's possible for this to be lower than the high\n>watermark during initial run generation (e.g. 
because of the\n>MaxAllocSize limit in buffer size within logtape.c), but that still\n>seems like the most useful figure to users. There'd be a new\n>\"LogicalTapeSetMemory()\" function to go along with the existing\n>LogicalTapeSetBlocks() function, or something along those lines.\n>\n>Not planning to work on this now, but perhaps you have time for it.\n>\n\nAnother thing worth mentioning is that the memory accounting API does\nnothing about the pfree() calls, mentioned in the comment. The memory is\ntracked at the block level, so unless the pfree() frees the whole block\n(which only really happens for oversized chunks) the accounting info\ndoes not change.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 30 Oct 2019 20:19:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Thu, Oct 31, 2019 at 5:24 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Wed, Oct 23, 2019 at 12:30 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Of course there are some more things that could be reported in a\n>> similar way eventually, such as filter counters and hash join details.\n>\n> This made me think about other explain wishlist items.\n> For parallel hashjoin, I would find it useful to know which batches\n> each worker participated in (maybe just probing to start with, but\n> loading would be great too).\n>\n> I'm not sure anyone else (especially users) would care about this,\n> though.\n\nYeah, I think that'd be interesting. 
At some point in the patch set\nwhen I was working on the batch load balancing strategy I showed the\nnumber of tuples hashed and number of batches probed by each process\n(not the actual batch numbers, since that seems a bit over the top):\n\nhttps://www.postgresql.org/message-id/CAEepm%3D0th8Le2SDCv32zN7tMyCJYR9oGYJ52fXNYJz1hrpGW%2BQ%40mail.gmail.com\n\nI guess I thought of that as a debugging feature and took it out\nbecause it was too verbose, but maybe it just needs to be controlled\nby the VERBOSE switch. Do you think we should put that back?\n\n\n", "msg_date": "Mon, 4 Nov 2019 12:11:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Mon, Nov 4, 2019 at 12:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I guess I thought of that as a debugging feature and took it out\n> because it was too verbose, but maybe it just needs to be controlled\n> by the VERBOSE switch. Do you think we should put that back?\n\nBy which I mean: would you like to send a patch? :-)\n\nHere is a new version of the \"Leader:\" patch, because cfbot told me\nthat gcc didn't like it as much as clang.", "msg_date": "Mon, 4 Nov 2019 12:29:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Mon, 4 Nov 2019 at 00:30, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Nov 4, 2019 at 12:11 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > I guess I thought of that as a debugging feature and took it out\n> > because it was too verbose, but maybe it just needs to be controlled\n> > by the VERBOSE switch. Do you think we should put that back?\n>\n> By which I mean: would you like to send a patch? 
:-)\n>\n> Here is a new version of the \"Leader:\" patch, because cfbot told me\n> that gcc didn't like it as much as clang.\n>\n\nI was reviewing this patch and here are a few comments,\n\n+static void\n+ExplainNodePerProcess(ExplainState *es, bool *opened_group,\n+ int worker_number, Instrumentation *instrument)\n+{\n\nA small description about this routine would be helpful and will give the\nfile a consistent look.\n\nAlso, I noticed that the worker details are displayed for sort node even\nwithout verbose, but for scans it is only with verbose. Am I missing\nsomething or there is something behind? However, I am not sure if this is\nthe introduced by this patch-set.\n\n-- \nRegards,\nRafia Sabih\n", "msg_date": "Thu, 7 Nov 2019 11:37:12 +0100", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Thu, Nov 7, 2019 at 11:37 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> I was reviewing this patch and here are a few comments,\n\nHi Rafia,\n\nThanks!\n\n> +static void\n> +ExplainNodePerProcess(ExplainState *es, bool *opened_group,\n> + int worker_number, Instrumentation *instrument)\n> +{\n>\n> A small description about this routine would be helpful and will give the file a consistent look.\n\nDone for both new functions. I also improved the commit message for\n0001 a bit to explain the change better.\n\n> Also, I noticed that the worker details are displayed for sort node even without verbose, but for scans it is only with verbose. Am I missing something or there is something behind? However, I am not sure if this is the introduced by this patch-set.\n\nYeah, it's a pre-existing thing, but I agree it's an interesting\ndifference. We currently don't have a way to show a 'combined'\nversion of a parallel (oblivious) sort: we always show the per-process\nversion, and all this patch changes is how we label the leader's\nstats. I suppose someone could argue that in non-VERBOSE mode we\nshould show the total memory usage (sum from all processes). 
I suppose\nit's possible they use different sort types (one worker runs out of\nwork_mem and another doesn't), and I'm not sure how how you'd\naggregate that.", "msg_date": "Fri, 8 Nov 2019 15:47:39 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "Both patches aren't applying cleanly anymore.\nThe first patch in the set applied cleanly for me before b925a00f4ef65\n\nIt mostly seems like the default settings for the patch program were\nmy problem, but, since I noticed that the patch tester bot was failing\nto apply it also, I thought I would suggest rebasing it.\n\nI applied it to the sha before b925a00f4ef65 and then cherry-picked it\nto master and it applied fine. I attached that rebased patch with a\nnew version number (is that the preferred way to indicate that it is\nnewer even if it contains no new content?).\n\nThe second patch in the set needed a bit more looking at to rebase,\nwhich I didn't do yet.\n\nI played around with the first patch in the patchset and very much\nappreciate seeing the leaders contribution.\nHowever, I noticed that there were no EXPLAIN diffs in any test files\nand just wondered if this was a conscious choice (even with xxx the\nactual numbers, I would have thought that there would be an EXPLAIN\nVERBOSE with leader participation somewhere in regress).\n\n-- \nMelanie Plageman", "msg_date": "Fri, 17 Jan 2020 12:25:49 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "So, I think from a code review perspective the code in the patches\nLGTM.\nAs for the EXPLAIN ANALYZE tests--I don't see that many of them in\nregress, so maybe that's because they aren't normally very useful. 
In\nthis case, it would only be to protect against regressions in printing\nthe leader instrumentation, I think.\nThe problem with that is, even with all the non-deterministic info\nredacted, if the leader doesn't participate (which is not guaranteed),\nthen its stats wouldn't be printed at all and that would cause an\nincorrectly failing test case...okay I just talked myself out of the\nusefulness of testing this.\nSo, I would move it to \"ready for committer\", but, since it is not\napplying cleanly, I have changed the status to \"waiting on author\".", "msg_date": "Fri, 24 Jan 2020 18:39:23 -0800", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Sat, Jan 25, 2020 at 3:39 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> So, I think from a code review perspective the code in the patches\n> LGTM.\n> As for the EXPLAIN ANALYZE tests--I don't see that many of them in\n> regress, so maybe that's because they aren't normally very useful. 
In\n> this case, it would only be to protect against regressions in printing\n> the leader instrumentation, I think.\n> The problem with that is, even with all the non-deterministic info\n> redacted, if the leader doesn't participate (which is not guaranteed),\n> then its stats wouldn't be printed at all and that would cause an\n> incorrectly failing test case...okay I just talked myself out of the\n> usefulness of testing this.\n> So, I would move it to \"ready for committer\", but, since it is not\n> applying cleanly, I have changed the status to \"waiting on author\".\n\nHi Melanie,\n\nThanks for the reviews!\n\nI think I'm going to abandon 0002 for now, because that stuff is being\nrefactored independently over here, so rebasing would be futile:\n\nhttps://www.postgresql.org/message-id/flat/CAOtHd0AvAA8CLB9Xz0wnxu1U%3DzJCKrr1r4QwwXi_kcQsHDVU%3DQ%40mail.gmail.com\n\nOn that basis, I've set it to ready for committer (meaning 0001 only).\nThanks for the rebase. I'll let that sit for a couple of days and see\nif anything conflicting comes out of that other thread. It's a fair\ncomplaint that we lack tests that show the new output; I'll think\nabout adding one too.\n\n\n", "msg_date": "Sat, 25 Jan 2020 16:45:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I think I'm going to abandon 0002 for now, because that stuff is being\n> refactored independently over here, so rebasing would be futile:\n> https://www.postgresql.org/message-id/flat/CAOtHd0AvAA8CLB9Xz0wnxu1U%3DzJCKrr1r4QwwXi_kcQsHDVU%3DQ%40mail.gmail.com\n\nYeah, your 0002 needs some rethinking. 
I kind of like the proposed\nchange in the text-format output:\n\n Workers Launched: 4\n -> Sort (actual rows=2000 loops=15)\n Sort Key: tenk1.ten\n- Sort Method: quicksort Memory: xxx\n+ Leader: Sort Method: quicksort Memory: xxx\n Worker 0: Sort Method: quicksort Memory: xxx\n Worker 1: Sort Method: quicksort Memory: xxx\n Worker 2: Sort Method: quicksort Memory: xxx\n\nbut it's quite unclear to me how that translates into non-text\nformats, especially if we're not to break invariants about which\nfields are present in a non-text output structure (cf [1]).\n\nI've occasionally wondered whether we'd be better off presenting\nthis info as if the leader were \"worker 0\" and then the N workers\nare workers 1 to N. I've not worked out the implications of that\nin any detail though. It's fairly easy to see what to do for\nfields that can be aggregated (the numbers printed for the node\nas a whole are totals), but it doesn't help us any with something\nlike Sort Method.\n\nOn a narrower note, I'm not at all happy with the fact that 0001\nadds yet another field to *every* PlanState. I think this is\ndoubling down on a fundamentally wrong decision to have\nExecParallelRetrieveInstrumentation do some aggregation immediately.\nI think we should abandon that and just say that it returns the raw\nleader and per-worker data, and then explain.c can aggregate as it\nwishes.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/19416.1580069629%40sss.pgh.pa.us\n\n\n", "msg_date": "Sun, 26 Jan 2020 17:49:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Mon, Jan 27, 2020 at 11:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've occasionally wondered whether we'd be better off presenting\n> this info as if the leader were \"worker 0\" and then the N workers\n> are workers 1 to N. I've not worked out the implications of that\n> in any detail though. 
It's fairly easy to see what to do for\n> fields that can be aggregated (the numbers printed for the node\n> as a whole are totals), but it doesn't help us any with something\n> like Sort Method.\n\nYeah, in the 0001 patch (which no longer applies and probably just\nneeds to be rewritten now), I used \"Leader:\" in the text format, but\nworker number -1 in the structured formats, which I expected some\nblowback on. I also thought about adding one to all the numbers as\nyou suggest.\n\nIn PHJ I had a related problem: I had to +1 the worker number to get a\nzero-based \"participant number\" so that the leader would have a slot\nin various data structures, and I wondered if we shouldn't just do\nthat to the whole system (eg not just in explain's output or in\nlocalised bits of PHJ code).\n\n> On a narrower note, I'm not at all happy with the fact that 0001\n> adds yet another field to *every* PlanState. I think this is\n> doubling down on a fundamentally wrong decision to have\n> ExecParallelRetrieveInstrumentation do some aggregation immediately.\n> I think we should abandon that and just say that it returns the raw\n> leader and per-worker data, and then explain.c can aggregate as it\n> wishes.\n\nFair point. I will look into that.\n\n\n", "msg_date": "Mon, 27 Jan 2020 13:03:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "Hi Thomas,\n\nOn 1/26/20 7:03 PM, Thomas Munro wrote:\n> \n> Fair point. 
I will look into that.\n\nAre you still planning on looking at this patch for PG13?\n\nBased on the current state (002 abandoned, 001 needs total rework) I'd \nsay it should just be Returned with Feedback or Closed for now.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 16 Mar 2020 09:39:46 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Tue, Mar 17, 2020 at 2:39 AM David Steele <david@pgmasters.net> wrote:\n> On 1/26/20 7:03 PM, Thomas Munro wrote:\n> > Fair point. I will look into that.\n>\n> Are you still planning on looking at this patch for PG13?\n>\n> Based on the current state (002 abandoned, 001 needs total rework) I'd\n> say it should just be Returned with Feedback or Closed for now.\n\nWhen you put it like that, yeah :-) I marked it returned with\nfeedback. Thanks Melanie and Rafia for the reviews so far, and I'll\nbe back with a new version for PG14.\n\n\n", "msg_date": "Tue, 17 Mar 2020 13:19:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel leader process info in EXPLAIN" }, { "msg_contents": "On Thu, Nov 7, 2019 at 9:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Nov 7, 2019 at 11:37 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > ...\n> > Also, I noticed that the worker details are displayed for sort node even without verbose, but for scans it is only with verbose. Am I missing something or there is something behind? However, I am not sure if this is the introduced by this patch-set.\n>\n> Yeah, it's a pre-existing thing, but I agree it's an interesting\n> difference. We currently don't have a way to show a 'combined'\n> version of a parallel (oblivious) sort: we always show the per-process\n> version, and all this patch changes is how we label the leader's\n> stats. 
I suppose someone could argue that in non-VERBOSE mode we\n> should show the total memory usage (sum from all processes). I suppose\n> it's possible they use different sort types (one worker runs out of\n> work_mem and another doesn't), and I'm not sure how you'd\n> aggregate that.\n\nOver at [1] (incremental sort patch) I had a similar question, since\neach sort node (even non-parallel) can execute multiple tuplesorts.\nThe approach I took was to show both average and max for both disk and\nmemory usage as well as all sort strategies used. It looks like this:\n\n -> Incremental Sort\n Sort Key: a, b\n Presorted Key: a\n Full-sort Groups: 4 (Methods: quicksort) Memory: 26kB (avg), 26kB (max)\n -> Index Scan using idx_t_a...\n\nIt'd be great if that had a use here too :)\n\nJames\n\n[1]: https://www.postgresql.org/message-id/CAAaqYe_ctGqQsauuYS5StPULkES7%3Dt8vNwvEPyzXQdbjAuZ6vA%40mail.gmail.com\n\n\n", "msg_date": "Tue, 17 Mar 2020 18:21:58 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel leader process info in EXPLAIN" } ]
[ { "msg_contents": "Hi all,\n\nTemporal table is one of the main new features added in SQL standard 2011.\nFrom that, I would like to implement a system-versioned temporal table, which\nallows keeping past and present data so old data can be queried. I propose\nto implement it like below.\n\nCREATE\n\nIn CREATE TABLE only one table is created, and both historical and current\ndata will be stored in it. In order to make history and current data\nco-exist, a row end time column will be added implicitly to the primary key.\nRegarding performance, one can partition the table by the row end time column\nin order to keep history data from slowing performance.\n\nINSERT\n\nOn insert, the row start time column and row end time column behave like a kind\nof generated stored column, except they store the current transaction time and\nthe highest value supported by the data type, which is +infinity, respectively.\n\nDELETE and UPDATE\n\nThe old data is inserted with the row end time column set to the current\ntransaction time.\n\nSELECT\n\nIf the query doesn't contain a filter condition that includes the system time\ncolumn, a filter condition will be added in early optimization that filters out\nhistory data.\n\nAttached is a WIP patch that implements just the above, done on top of\ncommit b8e19b932a99a7eb5a. The temporal clause isn't implemented yet, so one\ncan use a regular filter condition for the time being.\n\nNOTE: I implement the SQL standard syntax, except it is PERIOD FOR SYSTEM TIME\nrather than PERIOD FOR SYSTEM_TIME in the CREATE TABLE statement, and system\ntime is not selected unless explicitly asked.\n\nAny enlightenment?\n\nregards\n\nSurafel", "msg_date": "Wed, 23 Oct 2019 18:56:50 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "WIP: System Versioned Temporal Table" }, { "msg_contents": "On 23/10/2019 17:56, Surafel Temesgen wrote:\n>\n> Hi all ,\n>\n> Temporal table is one of the main new features added in sql standard\n> 2011. 
From that I will like to implement system versioned temporal\n> table which allows to keep past and present data so old data can be\n> queried.\n>\n\nExcellent!  I've been wanting this feature for a long time now.  We're\nthe last major database to not have it.\n\n\nI tried my hand at doing it in core, but ended up having better success\nat an extension: https://github.com/xocolatl/periods/\n\n\n> Am propose to implement it like below\n>\n> CREATE\n>\n> In create table only one table is create and both historical and\n> current data will be store in it. In order to make history and current\n> data co-exist row end time column will be added implicitly to primary\n> key. Regarding performance one can partition the table by row end time\n> column order to make history data didn't slowed performance.\n>\n\nIf we're going to be implicitly adding stuff to the PK, we also need to\nadd that stuff to the other unique constraints, no?  And I think it\nwould be better to add both the start and the end column to these keys. \nMost of the temporal queries will be accessing both.\n\n\n> INSERT\n>\n> In insert row start time column and row end time column behave like a\n> kind of generated stored column except they store current transaction\n> time and highest value supported by the data type which is +infinity\n> respectively.\n>\n\nYou're forcing these columns to be timestamp without time zone.  If\nyou're going to force a datatype here, it should absolutely be timestamp\nwith time zone.  However, I would like to see it handle both kinds of\ntimestamps as well as a simple date.\n\n\n> DELETE and UPDATE\n>\n> The old data is inserted with row end time column seated to current\n> transaction time\n>\n\nI don't see any error handling for transaction anomalies.  In READ\nCOMMITTED, you can easily end up with a case where the end time comes\nbefore the start time.  I don't even see anything constraining start\ntime to be strictly inferior to the end time.  
Such a constraint will be\nnecessary for application-time periods (which your patch doesn't address\nat all but that's okay).\n\n\n> SELECT\n>\n> If the query didn’t contain a filter condition that include system\n> time column, a filter condition will be added in early optimization\n> that filter history data.\n>\n> Attached is WIP patch that implemented just the above and done on top\n> of commit b8e19b932a99a7eb5a. Temporal clause didn’t implemented yet\n> so one can use regular filter condition for the time being\n>\n> NOTE: I implement sql standard syntax except it is PERIOD FOR SYSTEM\n> TIME rather than PERIOD FOR SYSTEM_TIME in CREATE TABLE statement and\n> system time is not selected unless explicitly asked\n>\n\nWhy aren't you following the standard syntax here?\n\n\n> Any enlightenment?\n>\n\nThere are quite a lot of typos and other things that aren't written \"the\nPostgres way\". But before I comment on any of that, I'd like to see the\nfeatures be implemented correctly according to the SQL standard.\n\n\n\n", "msg_date": "Wed, 23 Oct 2019 22:02:50 +0200", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "hi Vik,\nOn Wed, Oct 23, 2019 at 9:02 PM Vik Fearing <vik.fearing@2ndquadrant.com>\nwrote:\n\n\n>\n> If we're going to be implicitly adding stuff to the PK, we also need to\n> add that stuff to the other unique constraints, no? 
And I think it\n> would be better to add both the start and the end column to these keys.\n> Most of the temporal queries will be accessing both.\n>\n>\nyes it have to be added to other constraint too but adding both system time\nto PK will violate constraint because it allow multiple data in current\ndata\n\n\n>\n> Why aren't you following the standard syntax here?\n>\n>\n>\nbecause we do have TIME and SYSTEM_P as a key word and am not sure of\nwhether\nits a right thing to add other keyword that contain those two word\nconcatenated\n\n\n> > Any enlightenment?\n> >\n>\n> There are quite a lot of typos and other things that aren't written \"the\n> Postgres way\". But before I comment on any of that, I'd like to see the\n> features be implemented correctly according to the SQL standard.\n>\n\nit is almost in sql standard syntax except the above small difference. i\ncan correct it\nand post more complete patch soon.\n\nregards\nSurafel", "msg_date": "Thu, 24 Oct 2019 15:54:33 +0100", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 24/10/2019 16:54, Surafel Temesgen wrote:\n>\n> hi Vik,\n> On Wed, Oct 23, 2019 at 9:02 PM Vik Fearing\n> <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>> wrote:\n>  \n>\n>\n>     If we're going to be implicitly adding stuff to the PK, we also\n>     need to\n>     add that stuff to the other unique constraints, no?  And I think it\n>     would be better to add both the start and the end column to these\n>     keys. \n>     Most of the temporal queries will be accessing both.\n>\n>  \n> yes it have to be added to other constraint too but adding both system\n> time \n> to PK will violate constraint because it allow multiple data in\n> current data\n\n\nI don't understand what you mean by this.\n\n\n>  \n>\n>\n>     Why aren't you following the standard syntax here?\n>\n>\n>\n> because we do have TIME and SYSTEM_P as a key word and am not sure of\n> whether\n> its a right thing to add other keyword that contain those two word\n> concatenated\n\n\nYes, we have to do that.\n\n\n>  \n>  \n>\n>     > Any enlightenment?\n>     >\n>\n>     There are quite a lot of typos and other things that aren't\n>     written \"the\n>     Postgres way\". 
But before I comment on any of that, I'd like to\n> see the\n> features be implemented correctly according to the SQL standard.\n>\n>\n> it is almost in sql standard syntax except the above small difference.\n> i can correct it \n> and post more complete patch soon.\n\n\nI don't mean just the SQL syntax, I also mean the behavior.\n\n\n\n", "msg_date": "Thu, 24 Oct 2019 17:49:32 +0200", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Oct 24, 2019 at 6:49 PM Vik Fearing <vik.fearing@2ndquadrant.com>\nwrote:\n\n> On 24/10/2019 16:54, Surafel Temesgen wrote:\n> >\n> > hi Vik,\n> > On Wed, Oct 23, 2019 at 9:02 PM Vik Fearing\n> > <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>>\n> wrote:\n> >\n> >\n> >\n> > If we're going to be implicitly adding stuff to the PK, we also\n> > need to\n> > add that stuff to the other unique constraints, no? And I think it\n> > would be better to add both the start and the end column to these\n> > keys.\n> > Most of the temporal queries will be accessing both.\n> >\n> >\n> > yes it have to be added to other constraint too but adding both system\n> > time\n> > to PK will violate constraint because it allow multiple data in\n> > current data\n>\n>\n> I don't understand what you mean by this.\n>\n>\n\nThe primary purpose of adding row end time to primary key is to allow\nduplicate value to be inserted into a table with keeping constraint in\ncurrent data but it can be duplicated in history data. 
Adding row start\ntime column to primary key will eliminate this uniqueness for current data\nwhich is not correct\n\nregards\nSurafel", "msg_date": "Fri, 25 Oct 2019 12:56:04 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 25/10/2019 11:56, Surafel Temesgen wrote:\n>\n>\n> On Thu, Oct 24, 2019 at 6:49 PM Vik Fearing\n> <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>> wrote:\n>\n>     >\n>     >     I don't understand what you mean by this.\n>     >\n>     >\n>     >\n>     > The primary purpose of adding row end time to primary key is to\n>     allow\n>     > duplicate value to be inserted into a table with keeping\n>     constraint in\n>     > current data but it can be duplicated in history data. Adding row\n>     > start time column to primary key will eliminate this uniqueness for\n>     > current data which is not correct  \n\n\nHow?  
The primary/unique keys must always be unique at every point in time.\n\n\n\n", "msg_date": "Fri, 25 Oct 2019 21:45:46 +0200", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Oct 25, 2019 at 10:45 PM Vik Fearing <vik.fearing@2ndquadrant.com>\nwrote:\n\n> >\n> >     I don't understand what you mean by this.\n> >\n> >\n> >\n> > The primary purpose of adding row end time to primary key is to allow\n> > duplicate value to be inserted into a table with keeping constraint in\n> > current data but it can be duplicated in history data. Adding row\n> > start time column to primary key will eliminate this uniqueness for\n> > current data which is not correct\n>\n>\n> How? The primary/unique keys must always be unique at every point in time.\n>\n\n From user prospect it is acceptable to delete and reinsert a record with\nthe same key value multiple time which means there will be multiple record\nwith the same key value in a history data but there is only one values in\ncurrent data as a table without system versioning do .I add row end time\ncolumn to primary key to allow user supplied primary key values to be\nduplicated in history data which is acceptable\n\nregards\nSurafel", "msg_date": "Mon, 28 Oct 2019 15:48:08 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 28/10/2019 13:48, Surafel Temesgen wrote:\n>\n>\n> On Fri, Oct 25, 2019 at 10:45 PM Vik Fearing\n> <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>> wrote:\n>\n>     >\n>     >     I don't understand what you mean by this.\n>     >\n>     >\n>     >\n>     > The primary purpose of adding row end time to primary key is to\n>     allow\n>     > duplicate value to be inserted into a table with keeping\n>     constraint in\n>     > current data but it can be duplicated in history data. Adding row\n>     > start time column to primary key will eliminate this uniqueness for\n>     > current data which is not correct  \n>\n>\n>     How?  The primary/unique keys must always be unique at every point\n>     in time.\n>\n>\n> From user prospect it is acceptable to delete and reinsert a record\n> with the same key value multiple time which means there will be\n> multiple record with the same key value in a history data but there is\n> only one values in current data as a table without system versioning\n> do .I add row end time column to primary key to allow user supplied\n> primary key values to be duplicated in history data which is acceptable\n>\n\nYes, I understand that.  
I'm saying you should also add the row start\ntime column.\n\n\n\n", "msg_date": "Mon, 28 Oct 2019 16:36:09 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi,\nAttached is a complete patch and also contain a fix for your comments\n\nregards\nSurafel", "msg_date": "Wed, 1 Jan 2020 13:50:34 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 01/01/2020 11:50, Surafel Temesgen wrote:\n>\n>\n> Hi,\n> Attached is a complete patch and also contain a fix for your comments\n>\n\nThis does not compile against current head (0ce38730ac).\n\n\ngram.y: error: shift/reduce conflicts: 6 found, 0 expected\n\n-- \n\nVik Fearing\n\n\n\n", "msg_date": "Wed, 1 Jan 2020 22:12:09 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Jan 2, 2020 at 12:12 AM Vik Fearing <vik.fearing@2ndquadrant.com>\nwrote:\n\n> This does not compile against current head (0ce38730ac).\n>\n>\n> gram.y: error: shift/reduce conflicts: 6 found, 0 expected\n>\n>\nRebased and conflict resolved i hope it build clean this time\n\nregards\nSurafel", "msg_date": "Fri, 3 Jan 2020 13:57:15 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 03/01/2020 11:57, Surafel Temesgen wrote:\n>\n>\n> On Thu, Jan 2, 2020 at 12:12 AM Vik Fearing\n> <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>> wrote:\n>\n> This does not compile against current head (0ce38730ac).\n>\n>\n> gram.y: error: shift/reduce conflicts: 6 found, 0 expected\n>\n>\n> Rebased and conflict resolved i hope it build clean this time\n>\n\nIt does but you haven't included your 
tests file so `make check` fails.\n\n\nIt seems clear to me that you haven't tested it at all anyway.  The\ntemporal conditions do not return the correct results, and the syntax is\nwrong, too.  Also, none of my previous comments have been addressed\nexcept for \"system versioning\" instead of \"system_versioning\".  Why?\n\n-- \n\nVik Fearing\n\n\n\n", "msg_date": "Fri, 3 Jan 2020 14:22:34 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 3, 2020 at 4:22 PM Vik Fearing <vik.fearing@2ndquadrant.com>\nwrote:\n\n> >\n> > Rebased and conflict resolved i hope it build clean this time\n> >\n>\n> It does but you haven't included your tests file so `make check` fails.\n>\n>\n>\nwhat tests file? i add system_versioned_table.sql and\nsystem_versioned_table.out\ntest files and it tested and pass on appveyor[1] only failed on travis\nbecause of warning. i will add more test\n\n\n> It seems clear to me that you haven't tested it at all anyway. The\n> temporal conditions do not return the correct results, and the syntax is\n> wrong, too. Also, none of my previous comments have been addressed\n> except for \"system versioning\" instead of \"system_versioning\". Why?\n>\n>\nI also correct typo and add row end column time to unique\nkey that make it unique for current data. As you mentioned\nother comment is concerning about application-time periods\nwhich the patch not addressing . i refer sql 2011 standard for\nsyntax can you tell me which syntax you find it wrong?\n[1].\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.73247\n\nregards\nSurafel", "msg_date": "Sun, 5 Jan 2020 13:16:34 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Mon, Oct 28, 2019 at 6:36 PM Vik Fearing <vik.fearing@2ndquadrant.com>\nwrote:\n\n> On 28/10/2019 13:48, Surafel Temesgen wrote:\n> >\n> >\n> > On Fri, Oct 25, 2019 at 10:45 PM Vik Fearing\n> > <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>>\n> wrote:\n> >\n> > >\n> > > I don't understand what you mean by this.\n> > >\n> > >\n> > >\n> > > The primary purpose of adding row end time to primary key is to\n> > allow\n> > > duplicate value to be inserted into a table with keeping\n> > constraint in\n> > > current data but it can be duplicated in history data. Adding row\n> > > start time column to primary key will eliminate this uniqueness for\n> > > current data which is not correct\n> >\n> >\n> > How? 
The primary/unique keys must always be unique at every point\n> > in time.\n> >\n> >\n> > From user prospect it is acceptable to delete and reinsert a record\n> > with the same key value multiple time which means there will be\n> > multiple record with the same key value in a history data but there is\n> > only one values in current data as a table without system versioning\n> > do .I add row end time column to primary key to allow user supplied\n> > primary key values to be duplicated in history data which is acceptable\n> >\n>\n> Yes, I understand that. I'm saying you should also add the row start\n> time column.\n>\n>\nthat allow the same primary key value row to be insert as long\nas insertion time is different\n\nregards\nSurafel\n\n", "msg_date": "Sun, 5 Jan 2020 13:26:33 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" },
{ "msg_contents": "On 05/01/2020 11:16, Surafel Temesgen wrote:\n>\n>\n> On Fri, Jan 3, 2020 at 4:22 PM Vik Fearing\n> <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>> wrote:\n>\n> >\n> > Rebased and conflict resolved i hope it build clean this time\n> >\n>\n> It does but you haven't included your tests file so `make check`\n> fails.\n>\n>\n>\n> what tests file?\n\n\nExactly.\n\n\n> i add system_versioned_table.sql and system_versioned_table.out\n> test files\n\n\nThose are not included in the patch.\n\n\n<checks again>\n\n\nOkay, that was user error on my side.  I apologize.\n\n\n>  \n>\n> It seems clear to me that you haven't tested it at all anyway.  The\n> temporal conditions do not return the correct results, and the\n> syntax is\n> wrong, too.  Also, none of my previous comments have been addressed\n> except for \"system versioning\" instead of \"system_versioning\".  Why?\n>\n>\n> I also correct typo and add row end column time to unique\n> key that make it unique for current data. 
As you mentioned\n> other comment is concerning about application-time periods\n> which the patch not addressing .\n\n\n- For performance, you must put the start column in the indexes also.\n\n- You only handle timestamp when you should also handle timestamptz and\ndate.\n\n- You don't throw 2201H for anomalies\n\n\n> i refer sql 2011 standard for\n> syntax can you tell me which syntax you find it wrong?\n\n\nOkay, now that I see your tests, I understand why everything is broken. \nYou only test FROM-TO and with a really wide interval.  There are no\ntests for AS OF and no tests for BETWEEN-AND.\n\n\nAs for the syntax, you have:\n\n\nselect a from for stest0 system_time from '2000-01-01 00:00:00.00000' to\n'infinity' ORDER BY a;\n\n\nwhen you should have:\n\n\nselect a from stest0 for system_time from '2000-01-01 00:00:00.00000' to\n'infinity' ORDER BY a;\n\n\nThat is, the FOR should be on the other side of the table name.\n\n\nIn addition, there are many rules in the standard that are not respected\nhere.  For example, this query works and should not:\n\n\nCREATE TABLE t (system_time integer) WITH SYSTEM VERSIONING;\n\n\nThis syntax is not supported:\n\n\nALTER TABLE t\n    ADD PERIOD FOR SYSTEM_TIME (s, e)\n        ADD COLUMN s timestamp\n        ADD COLUMN e timestamp;\n\n\npsql's \\d does not show that the table is system versioned, and doesn't\nshow the columns of the system_time period.\n\n\nI can drop columns used in the period.\n\n\nPlease don't hesitate to take inspiration from my extension that does\nthis.  The extension is under the PostgreSQL license for that reason. 
\nTake from it whatever you need.\n\nhttps://github.com/xocolatl/periods/\n\n-- \n\nVik Fearing\n\n\n\n", "msg_date": "Sun, 5 Jan 2020 13:50:36 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Vik Fearing-4 wrote\n> On 05/01/2020 11:16, Surafel Temesgen wrote:\n>>\n>>\n>> On Fri, Jan 3, 2020 at 4:22 PM Vik Fearing\n>> <\n\n> vik.fearing@\n\n> <mailto:\n\n> vik.fearing@\n\n> >> wrote:\n>>\n> \n> [...]\n> \n> You only test FROM-TO and with a really wide interval.  There are no\n> tests for AS OF and no tests for BETWEEN-AND.\n> \n> \n> As for the syntax, you have:\n> \n> \n> select a from for stest0 system_time from '2000-01-01 00:00:00.00000' to\n> 'infinity' ORDER BY a;\n> \n> \n> when you should have:\n> \n> \n> select a from stest0 for system_time from '2000-01-01 00:00:00.00000' to\n> 'infinity' ORDER BY a;\n> \n> \n> That is, the FOR should be on the other side of the table name.\n> \n> [...] 
\n> \n> Vik Fearing\n\nHello,\n\nI though that standard syntax was \"AS OF SYSTEM TIME\"\nas discussed here\nhttps://www.postgresql.org/message-id/flat/A254CDC3-D308-4822-8928-8CC584E0CC71%40elusive.cx#06c5dbffd5cfb9a20cdeec7a54dc657f\n, also explaining how to parse such a syntax .\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Sun, 5 Jan 2020 08:01:44 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 05/01/2020 16:01, legrand legrand wrote:\n>\n>> As for the syntax, you have:\n>>\n>>\n>> select a from for stest0 system_time from '2000-01-01 00:00:00.00000' to\n>> 'infinity' ORDER BY a;\n>>\n>>\n>> when you should have:\n>>\n>>\n>> select a from stest0 for system_time from '2000-01-01 00:00:00.00000' to\n>> 'infinity' ORDER BY a;\n>>\n>>\n>> That is, the FOR should be on the other side of the table name.\n>>\n>> [...] \n>>\n>> Vik Fearing\n> Hello,\n>\n> I though that standard syntax was \"AS OF SYSTEM TIME\"\n> as discussed here\n> https://www.postgresql.org/message-id/flat/A254CDC3-D308-4822-8928-8CC584E0CC71%40elusive.cx#06c5dbffd5cfb9a20cdeec7a54dc657f\n> , also explaining how to parse such a syntax .\n\n\nNo, that is incorrect.  The standard syntax is:\n\n\n    FROM tablename FOR SYSTEM_TIME AS OF '...'\n\n    FROM tablename FOR SYSTEM_TIME BETWEEN '...' AND '...'\n\n    FROM tablename FOR SYSTEM_TIME FROM '...' TO '...'\n\n-- \n\nVik Fearing\n\n\n\n", "msg_date": "Sun, 5 Jan 2020 16:22:56 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Vik Fearing-4 wrote\n> On 05/01/2020 16:01, legrand legrand wrote:\n> \n> \n> No, that is incorrect.  
The standard syntax is:\n> \n> \n>     FROM tablename FOR SYSTEM_TIME AS OF '...'\n> \n>     FROM tablename FOR SYSTEM_TIME BETWEEN '...' AND '...'\n> \n>     FROM tablename FOR SYSTEM_TIME FROM '...' TO '...'\n> \n> -- \n> \n> Vik Fearing\n\noups, I re-read links and docs and I'm wrong.\nSorry for the noise\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Sun, 5 Jan 2020 12:26:06 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hello.\n\nIsn't this patch somehow broken?\n\nAt Mon, 28 Oct 2019 16:36:09 +0100, Vik Fearing <vik.fearing@2ndquadrant.com> wrote in \n> On 28/10/2019 13:48, Surafel Temesgen wrote:\n> >\n> >\n> > On Fri, Oct 25, 2019 at 10:45 PM Vik Fearing\n> > <vik.fearing@2ndquadrant.com <mailto:vik.fearing@2ndquadrant.com>> wrote:\n> >\n> > >\n> > >     I don't understand what you mean by this.\n> > >\n> > >\n> > >\n> > > The primary purpose of adding row end time to primary key is to\n> > allow\n> > > duplicate value to be inserted into a table with keeping\n> > constraint in\n> > > current data but it can be duplicated in history data. Adding row\n> > > start time column to primary key will eliminate this uniqueness for\n> > > current data which is not correct  \n> >\n> >\n> > How?  The primary/unique keys must always be unique at every point\n> > in time.\n> >\n> >\n> > From user prospect it is acceptable to delete and reinsert a record\n> > with the same key value multiple time which means there will be\n> > multiple record with the same key value in a history data but there is\n> > only one values in current data as a table without system versioning\n> > do .I add row end time column to primary key to allow user supplied\n> > primary key values to be duplicated in history data which is acceptable\n> >\n> \n> Yes, I understand that.  
I'm saying you should also add the row start\n> time column.\n\nI think that the start and end timestamps represent the period where\nthat version of the row was active. So UPDATE should set the start\ntimestamp of the new version and the end timestamp of the old version\nto the same value, the time of the update. Thus, I don't think adding\nthe start timestamp to the PK works as expected. That hinders us from\nrejecting rows with the same user-defined unique key because the start\ntimestamp is different at each insert. I think what Surafel is\ndoing in the current patch is correct. Only end_timestamp = +infinity\nrejects another non-historical (= live) version with the same\nuser-defined unique key.\n\nI'm not sure why the patch starts from \"0002\", but anyway it applied\non e369f37086. Then I ran make distclean, ./configure then make all,\nmake install, initdb and started server after all of them.\n\nFirst, I tried to create a temporal table.\n\nWhen I used timestamptz as the type of versioning columns, ALTER,\nCREATE commands ended with server crash. 
\n\n \"CREATE TABLE t (a int, s timestamptz GENERATED ALWAYS AS ROW START, e timestamptz GENERATED ALWAYS AS ROW END);\"\n (CREATE TABLE t (a int);)\n \"ALTER TABLE t ADD COLUMN s timestamptz GENERATED ALWAYS AS ROW START\"\n \"ALTER TABLE t ADD COLUMN s timestamptz GENERATED ALWAYS AS ROW START, ADD COLUMN e timestamptz GENERATED ALWAYS AS ROW END\"\n\nIf I added the start/end timestamp columns to an existing table, it\nreturns uncertain error.\n\n ALTER TABLE t ADD COLUMN s timestamp(6) GENERATED ALWAYS AS ROW START;\n ERROR: column \"s\" contains null values\n ALTER TABLE t ADD COLUMN s timestamp(6) GENERATED ALWAYS AS ROW START, ADD COLUMN e timestamp(6) GENERATED ALWAYS AS ROW END;\n ERROR: column \"s\" contains null values\n\n\nWhen I defined only start column, SELECT on the table crashed.\n\n \"CREATE TABLE t (s timestamp(6) GENERATED ALWAYS AS ROW START);\"\n \"SELECT * from t;\"\n (crashed)\n\nThe following command ended with ERROR which I cannot understand the\ncause, but I expected the command to be accepted.\n\n ALTER TABLE t ADD COLUMN start timestamp(6) GENERATED ALWAYS AS ROW START, ADD COLUMN end timestamp(6) GENERATED ALWAYS AS ROW END;\n ERROR: syntax error at or near \"end\"\n\nI didin't examined further but the syntax part doesn't seem designed\nwell, and the body part seems vulnerable to unexpected input.\n\n\nI ran a few queries:\n\nSELECT * shows the timestamp columns, don't we need to hide the period\ntimestamp columns from this query?\n\nI think UPDATE needs to update the start timestamp, but it doesn't. As\nthe result the timestamps doesn't represent the correct lifetime of\nthe row version and we wouldn't be able to pick up correct versions of\na row that exprerienced updates. (I didn't confirmed that because I\ncouldn't do \"FOR SYSTEM_TIME AS OF\" query due to syntax error..)\n\n(Sorry in advance for possible pointless comments due to my lack of\naccess to the SQL11 standard.) 
If we have the period-timestamp\ncolumns before the last two columns, INSERT in a common way on the\ntable fails, which doesn't seem to me to be expected behavior:\n\n CREATE TABLE t (s timestamp(6) GENERATED ALWAYS AS ROW START, e timestamp(6) GENERATED ALWAYS AS ROW END, a int) WITH SYSTEM VERSIONING;\n INSERT INTO t (SELECT a FROM generate_series(0, 99) a);\n ERROR: column \"s\" is of type timestamp without time zone but expression is of type integer\n\nSome queries using SYSTEM_TIME which I think should be accepted ends\nwith error. Is the grammar part missing something?\n\n SELECT * FROM t FOR SYSTEM_TIME AS OF '2020-01-07 09:57:55';\n ERROR: syntax error at or near \"system_time\"\n LINE 1: SELECT * FROM t FOR SYSTEM_TIME AS OF '2020-01-07 09:57:55';\n\n SELECT * FROM t FOR SYSTEM_TIME BETWEEN '2020-01-07 0:00:00' AND '2020-01-08 0:00:00';\n ERROR: syntax error at or near \"system_time\"\n LINE 1: SELECT * FROM t FOR SYSTEM_TIME BETWEEN '2020-01-07 0:00:00'...\n\n\nOther random comments (sorry for it not being comprehensive):\n\nThe patch at worst loops over all columns at every parse time. It is\nquite ineffecient if there are many columns. We can store the column\nindexes in relcache.\n\nIf I'm not missing anything, alter table doesn't properly modify\nexisting data in the target table. AddSystemVersioning should fill in\nstart/end_timestamp with proper values and DropSystemVersioning shuold\nremove rows no longer useful.\n\n\n+makeAndExpr(Node *lexpr, Node *rexpr, int location)\n\n I believe that planner flattenes nested AND/ORs in\n eval_const_expressions(). Shouldn't we leave the work to the planner?\n\n\n+makeConstraint(ConstrType type)\n\nWe didn't use such a function to make that kind of nodes. Maybe we\nshould use makeNode directly, or we need to change similar coding into\nthat using the function. Addition to that, setting .location to 1 is\nwrong. \"Unknown\" location is -1.\n\nSeparately from that, temporal clauses is not restriction of a\ntable. 
So it seems wrong to me to use constraint mechamism for this\npurpose.\n\n+makeSystemColumnDef(char *name)\n\n\"system column (or attribute)\" is a column specially treated outside\nof tuple descriptor. The temporal-period columns are not system\ncolumns in that sense.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 07 Jan 2020 19:32:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 05/01/2020 13:50, Vik Fearing wrote:\n> Okay, now that I see your tests, I understand why everything is broken. \n> You only test FROM-TO and with a really wide interval.  There are no\n> tests for AS OF and no tests for BETWEEN-AND.\n\n\nI have started working on some better test cases for you.  The attached\n.sql and .out tests should pass, and they are some of the tests that\nI'll be putting your next version through.  There are many more tests\nthat need to be added.\n\n\nOnce all the desired functionality is there, I'll start reviewing the\ncode itself.\n\n\nKeep up the good work, and let me know if I can do anything to help you.\n\n-- \n\nVik Fearing", "msg_date": "Wed, 8 Jan 2020 22:23:56 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi Surafel,\n\nOn 1/3/20 5:57 AM, Surafel Temesgen wrote:\n> Rebased and conflict resolved i hope it build clean this time\n\nThis patch no longer applies according to cfbot and there are a number \nof review comments that don't seem to have been addressed yet.\n\nThe patch is not exactly new for this CF but since the first version was \nposted 2020-01-01 and there have been no updates (except a rebase) since \nthen it comes pretty close.\n\nWere you planning to work on this for PG13? If so we'll need to see a \nrebased and updated patch pretty soon. 
My recommendation is that we \nmove this patch to PG14.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 3 Mar 2020 13:33:46 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 03/03/2020 19:33, David Steele wrote:\n> Hi Surafel,\n> \n> On 1/3/20 5:57 AM, Surafel Temesgen wrote:\n>> Rebased and conflict resolved i hope it build clean this time\n> \n> This patch no longer applies according to cfbot and there are a number\n> of review comments that don't seem to have been addressed yet.\n> \n> The patch is not exactly new for this CF but since the first version was\n> posted 2020-01-01 and there have been no updates (except a rebase) since\n> then it comes pretty close.\n> \n> Were you planning to work on this for PG13?  If so we'll need to see a\n> rebased and updated patch pretty soon.  My recommendation is that we\n> move this patch to PG14.\n\nI strongly second that motion.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 3 Mar 2020 19:45:14 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi,\nThank you very much looking at it\nOn Tue, Jan 7, 2020 at 1:33 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> Hello.\n>\n> Isn't this patch somehow broken?\n>\n>\n> First, I tried to create a temporal table.\n>\n> When I used timestamptz as the type of versioning columns, ALTER,\n> CREATE commands ended with server crash.\n>\n> \"CREATE TABLE t (a int, s timestamptz GENERATED ALWAYS AS ROW START, e\n> timestamptz GENERATED ALWAYS AS ROW END);\"\n> (CREATE TABLE t (a int);)\n> \"ALTER TABLE t ADD COLUMN s timestamptz GENERATED ALWAYS AS ROW START\"\n> \"ALTER TABLE t ADD COLUMN s timestamptz GENERATED ALWAYS AS ROW START,\n> ADD COLUMN e timestamptz GENERATED ALWAYS AS ROW END\"\n>\n> If I added the start/end timestamp 
columns to an existing table, it\n> returns uncertain error.\n>\n> ALTER TABLE t ADD COLUMN s timestamp(6) GENERATED ALWAYS AS ROW START;\n> ERROR: column \"s\" contains null values\n> ALTER TABLE t ADD COLUMN s timestamp(6) GENERATED ALWAYS AS ROW START,\n> ADD COLUMN e timestamp(6) GENERATED ALWAYS AS ROW END;\n> ERROR: column \"s\" contains null values\n>\n>\n> When I defined only start column, SELECT on the table crashed.\n>\n> \"CREATE TABLE t (s timestamp(6) GENERATED ALWAYS AS ROW START);\"\n> \"SELECT * from t;\"\n> (crashed)\n>\n> The following command ended with ERROR which I cannot understand the\n> cause, but I expected the command to be accepted.\n>\n>\nFixed\n\n ALTER TABLE t ADD COLUMN start timestamp(6) GENERATED ALWAYS AS ROW\n> START, ADD COLUMN end timestamp(6) GENERATED ALWAYS AS ROW END;\n> ERROR: syntax error at or near \"end\"\n>\n>\nend is a keyword\n\n\n> I didin't examined further but the syntax part doesn't seem designed\n> well, and the body part seems vulnerable to unexpected input.\n>\n>\n> I ran a few queries:\n>\n> SELECT * shows the timestamp columns, don't we need to hide the period\n> timestamp columns from this query?\n>\n>\nThe sql standard didn't dictate hiding the column but i agree hiding it by\ndefault is good thing because this columns are used by the system\nto classified the data and not needed in user side frequently. I can\nchange to that if we have consensus\n\n\n> I think UPDATE needs to update the start timestamp, but it doesn't. As\n> the result the timestamps doesn't represent the correct lifetime of\n> the row version and we wouldn't be able to pick up correct versions of\n> a row that exprerienced updates. (I didn't confirmed that because I\n> couldn't do \"FOR SYSTEM_TIME AS OF\" query due to syntax error..)\n>\n>\nRight. It have to set both system time for inserted row and set row end\ntime for\ndeleted row. 
I fix it\n\n\n> (Sorry in advance for possible pointless comments due to my lack of\n> access to the SQL11 standard.) If we have the period-timestamp\n> columns before the last two columns, INSERT in a common way on the\n> table fails, which doesn't seem to me to be expected behavior:\n>\n> CREATE TABLE t (s timestamp(6) GENERATED ALWAYS AS ROW START, e\n> timestamp(6) GENERATED ALWAYS AS ROW END, a int) WITH SYSTEM VERSIONING;\n> INSERT INTO t (SELECT a FROM generate_series(0, 99) a);\n> ERROR: column \"s\" is of type timestamp without time zone but expression\n> is of type integer\n>\n>\nIts the same without the patch too\nCREATE TABLE t (s timestamptz , e timestamptz, a int);\nINSERT INTO t (SELECT a FROM generate_series(0, 99) a);\nERROR: column \"s\" is of type timestamp with time zone but expression is of\ntype integer\nLINE 1: INSERT INTO t (SELECT a FROM generate_series(0, 99) a);\n\n\n> Some queries using SYSTEM_TIME which I think should be accepted ends\n> with error. Is the grammar part missing something?\n>\n> SELECT * FROM t FOR SYSTEM_TIME AS OF '2020-01-07 09:57:55';\n> ERROR: syntax error at or near \"system_time\"\n> LINE 1: SELECT * FROM t FOR SYSTEM_TIME AS OF '2020-01-07 09:57:55';\n>\n> SELECT * FROM t FOR SYSTEM_TIME BETWEEN '2020-01-07 0:00:00' AND\n> '2020-01-08 0:00:00';\n> ERROR: syntax error at or near \"system_time\"\n> LINE 1: SELECT * FROM t FOR SYSTEM_TIME BETWEEN '2020-01-07 0:00:00'...\n>\n>\nfixed\n\n\n> Other random comments (sorry for it not being comprehensive):\n>\n> The patch at worst loops over all columns at every parse time. It is\n> quite ineffecient if there are many columns. We can store the column\n> indexes in relcache.\n>\n>\nbut its only for system versioned table.\n\n\n> If I'm not missing anything, alter table doesn't properly modify\n> existing data in the target table. 
AddSystemVersioning should fill in\n> start/end_timestamp with proper values and DropSystemVersioning shuold\n> remove rows no longer useful.\n>\n>\nfixed\n\n\n> +makeAndExpr(Node *lexpr, Node *rexpr, int location)\n>\n> I believe that planner flattenes nested AND/ORs in\n> eval_const_expressions(). Shouldn't we leave the work to the planner?\n>\n>\n>\nfilter clause is added using makeAndExpr and planner can flat that if it\nsees fit\n\n\n> +makeConstraint(ConstrType type)\n>\n> We didn't use such a function to make that kind of nodes. Maybe we\n> should use makeNode directly, or we need to change similar coding into\n> that using the function. Addition to that, setting .location to 1 is\n> wrong. \"Unknown\" location is -1.\n>\n\ndone\n\n\n> Separately from that, temporal clauses is not restriction of a\n> table. So it seems wrong to me to use constraint mechamism for this\n> purpose.\n>\n>\nwe use constraint mechanism for similar thing like default value and\ngenerated column\n\n\n> +makeSystemColumnDef(char *name)\n>\n> \"system column (or attribute)\" is a column specially treated outside\n> of tuple descriptor. 
The temporal-period columns are not system\n> columns in that sense.\n>\n\nchanged to makeTemporalColumnDef and use timestamptz for all\nversioning purpose.\n\nAttach is the patch that fix the above and uses Vik's regression test\n\nregards\nSurafel", "msg_date": "Tue, 10 Mar 2020 15:58:41 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" },
{ "msg_contents": "Hi,\n\nOn Tue, Mar 3, 2020 at 9:33 PM David Steele <david@pgmasters.net> wrote:\n\n> Hi Surafel,\n>\n> On 1/3/20 5:57 AM, Surafel Temesgen wrote:\n> > Rebased and conflict resolved i hope it build clean this time\n>\n> This patch no longer applies according to cfbot and there are a number\n> of review comments that don't seem to have been addressed yet.\n>\n> The patch is not exactly new for this CF but since the first version was\n> posted 2020-01-01 and there have been no updates (except a rebase) since\n> then it comes pretty close.\n>\n> Were you planning to work on this for PG13? If so we'll need to see a\n> rebased and updated patch pretty soon. My recommendation is that we\n> move this patch to PG14.\n>\n>\nI agree with moving to PG14 . Its hard to make it to PG13.\n\nregards\nSurafel\n\n", "msg_date": "Tue, 10 Mar 2020 16:00:26 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" },
{ "msg_contents": "On 3/10/20 9:00 AM, Surafel Temesgen wrote:\n> On Tue, Mar 3, 2020 at 9:33 PM David Steele <david@pgmasters.net \n> <mailto:david@pgmasters.net>> wrote:\n> \n> The patch is not exactly new for this CF but since the first version\n> was\n> posted 2020-01-01 and there have been no updates (except a rebase)\n> since\n> then it comes pretty close.\n> \n> Were you planning to work on this for PG13?  If so we'll need to see a\n> rebased and updated patch pretty soon.  My recommendation is that we\n> move this patch to PG14.\n> \n> I agree with moving to  PG14 . Its hard to make it to PG13.\n\nThe target version is now PG14.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 10 Mar 2020 10:07:02 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" },
{ "msg_contents": "Hi Surafel and the rest,\n\nI'm the owner of the Israeli meetup group of PostgreSQL, and I'm interested\nin Temporality and have been trying for several years a few ways to add it\nto PostgreSQL\n(all of them through extensions and external ways).\nI'm happy that this is done by you internally (and a little bit\ndisappointed that it's delayed again and again, but that's life...).\nI'll be happy to join this effort.\nI can't promise that I'll succeed to contribute anything, but first I want\nto play with it a little.\nTo save me several hours, can you advise me what is the best way to install\nit?\nWhich exact version of PG should I apply this patch to?\n\nThanks in advance, and thanks for your great work!\nEli\n\nOn Tue, Mar 31, 2020 at 10:04 PM Surafel Temesgen <surafel3000@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Tue, Mar 3, 2020 at 9:33 PM David Steele 
<david@pgmasters.net> wrote:\n>\n>> Hi Surafel,\n>>\n>> On 1/3/20 5:57 AM, Surafel Temesgen wrote:\n>> > Rebased and conflict resolved i hope it build clean this time\n>>\n>> This patch no longer applies according to cfbot and there are a number\n>> of review comments that don't seem to have been addressed yet.\n>>\n>> The patch is not exactly new for this CF but since the first version was\n>> posted 2020-01-01 and there have been no updates (except a rebase) since\n>> then it comes pretty close.\n>>\n>> Were you planning to work on this for PG13? If so we'll need to see a\n>> rebased and updated patch pretty soon. My recommendation is that we\n>> move this patch to PG14.\n>>\n>>\n> I agree with moving to PG14 . Its hard to make it to PG13.\n>\n> regards\n> Surafel\n>\n\n", "msg_date": "Tue, 31 Mar 2020 22:12:25 +0300", "msg_from": "Eli Marmor <eli@netmask.it>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" },
{ "msg_contents": "On Tue, Mar 31, 2020 at 10:12 PM Eli Marmor <eli@netmask.it> wrote:\n\n> Hi Surafel and the rest,\n>\n> I'm the owner of the Israeli meetup group of PostgreSQL, and I'm\n> interested in Temporality and have been trying for several years a few ways\n> to add it to PostgreSQL\n> (all of them through extensions and external ways).\n> I'm happy that this is done by you internally (and a little bit\n> disappointed that it's delayed again and again, but that's life...).\n> I'll be happy to join this effort.\n> I can't promise that I'll succeed to contribute anything, but first I want\n> to play with it a little.\n> To save me several hours, can you advise me what is the best way to\n> install it?\n> Which exact version of PG should I apply this patch to?\n>\n> Thanks in advance, and thanks for your great work!\n> Eli\n>\n>\n\nHey Eli,\nSorry for my late reply. reviewing it is greatly appreciated. I attach\nrebased patch.\nPlease use git repo and it will work on current HEAD\n\nregards\nSurafel", "msg_date": "Fri, 17 Jul 2020 17:18:20 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi, thanks for working on this. 
I had planned to work on it and I’m looking forward to this natively in Postgres.\n\nThe patch builds with the following warnings:\n\nplancat.c:2368:18: warning: variable 'name' is used uninitialized whenever 'for' loop exits because its condition is false [-Wsometimes-uninitialized]\n for (int i = 0; i < natts; i++)\n ^~~~~~~~~\nplancat.c:2379:9: note: uninitialized use occurs here\n return name;\n ^~~~\nplancat.c:2368:18: note: remove the condition if it is always true\n for (int i = 0; i < natts; i++)\n ^~~~~~~~~\nplancat.c:2363:15: note: initialize the variable 'name' to silence this warning\n char *name;\n ^\n = NULL\nplancat.c:2396:18: warning: variable 'name' is used uninitialized whenever 'for' loop exits because its condition is false [-Wsometimes-uninitialized]\n for (int i = 0; i < natts; i++)\n ^~~~~~~~~\nplancat.c:2407:9: note: uninitialized use occurs here\n return name;\n ^~~~\nplancat.c:2396:18: note: remove the condition if it is always true\n for (int i = 0; i < natts; i++)\n ^~~~~~~~~\nplancat.c:2391:15: note: initialize the variable 'name' to silence this warning\n char *name;\n ^\n = NULL\n2 warnings generated.\n\n\nmake check pass without issues, but make check-world fails for postgres_fdw, the diff is attached.\n\n\nBefore going further in the review, I’m a bit surprised by the quantity of code needed here. In https://github.com/xocolatl/periods there is far less code and I would have expected the same here. 
For example, are the changes to copy necessary or would it be possible to have a first patch the only implement the minimal changes required for this feature?\n\n\n\nThanks a lot!\n\nRémi", "msg_date": "Sat, 18 Jul 2020 18:05:00 +0200", "msg_from": "=?utf-8?Q?R=C3=A9mi_Lapeyre?= <remi.lapeyre@lenstra.fr>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hey Rémi,\nThank you for looking at it\n\nOn Sat, Jul 18, 2020 at 7:05 PM Rémi Lapeyre <remi.lapeyre@lenstra.fr>\nwrote:\n\n> Hi, thanks for working on this. I had planned to work on it and I’m\n> looking forward to this natively in Postgres.\n>\n> The patch builds with the following warnings:\n>\n> plancat.c:2368:18: warning: variable 'name' is used uninitialized whenever\n> 'for' loop exits because its condition is false [-Wsometimes-uninitialized]\n> for (int i = 0; i < natts; i++)\n> ^~~~~~~~~\n> plancat.c:2379:9: note: uninitialized use occurs here\n> return name;\n> ^~~~\n> plancat.c:2368:18: note: remove the condition if it is always true\n> for (int i = 0; i < natts; i++)\n> ^~~~~~~~~\n> plancat.c:2363:15: note: initialize the variable 'name' to silence this\n> warning\n> char *name;\n> ^\n> = NULL\n> plancat.c:2396:18: warning: variable 'name' is used uninitialized whenever\n> 'for' loop exits because its condition is false [-Wsometimes-uninitialized]\n> for (int i = 0; i < natts; i++)\n> ^~~~~~~~~\n> plancat.c:2407:9: note: uninitialized use occurs here\n> return name;\n> ^~~~\n> plancat.c:2396:18: note: remove the condition if it is always true\n> for (int i = 0; i < natts; i++)\n> ^~~~~~~~~\n> plancat.c:2391:15: note: initialize the variable 'name' to silence this\n> warning\n> char *name;\n> ^\n> = NULL\n> 2 warnings generated.\n>\n>\n>\n\nI wonder why my compiler didn't show me this\n\nmake check pass without issues, but make check-world fails for\n> postgres_fdw, the diff is attached.\n>\n>\n Okay thanks the attached patch contains a fix for both 
issues.\n\n\n> Before going further in the review, I’m a bit surprised by the quantity of\n> code needed here. In https://github.com/xocolatl/periods there is far\n> less code and I would have expected the same here. For example, are the\n> changes to copy necessary or would it be possible to have a first patch the\n> only implement the minimal changes required for this feature?\n>\n>\nYes, there is not much C code in there because most of the logic is written\nin SQL.\n\nregards\nSurafel", "msg_date": "Tue, 21 Jul 2020 17:32:44 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Tue, Jul 21, 2020 at 05:32:44PM +0300, Surafel Temesgen wrote:\n> Thank you for looking at it\n\nThe patch is failing to apply. 
Could you send a rebase please?\n>\n\nAttached is a rebased one.\n\nregards\nSurafel", "msg_date": "Tue, 29 Sep 2020 12:54:52 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi,\r\n\r\njust a quick comment that this patch fails on the cfbot.\r\n\r\nCheers,\r\n//Georgios", "msg_date": "Tue, 10 Nov 2020 15:18:22 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Attached is a rebased one.\nregards\nSurafel", "msg_date": "Thu, 19 Nov 2020 21:03:50 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Nov 19, 2020 at 11:04 AM Surafel Temesgen <surafel3000@gmail.com>\nwrote:\n\n>\n> Attached is a rebased one.\n> regards\n> Surafel\n>\n\nThank you for your work on this! The v7 patch fails on the current master\nbranch. Error from make:\n\ngram.y:16695:1: error: static declaration of ‘makeAndExpr’ follows\nnon-static declaration\n makeAndExpr(Node *lexpr, Node *rexpr, int location)\n ^~~~~~~~~~~\nIn file included from gram.y:58:0:\n../../../src/include/nodes/makefuncs.h:108:14: note: previous declaration\nof ‘makeAndExpr’ was here\n extern Node *makeAndExpr(Node *lexpr, Node *rexpr, int location);\n ^~~~~~~~~~~\ngram.y:16695:1: warning: ‘makeAndExpr’ defined but not used\n[-Wunused-function]\n makeAndExpr(Node *lexpr, Node *rexpr, int location)\n ^~~~~~~~~~~\n\n\n\nThe docs have two instances of \"EndtTime\" that should be \"EndTime\".\n\nRyan Lambert", "msg_date": "Fri, 18 Dec 2020 12:28:40 -0700", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi Ryan,\n\nOn Fri, Dec 18, 2020 at 10:28 PM Ryan Lambert <ryan@rustprooflabs.com>\nwrote:\n\n> On Thu, Nov 19, 2020 at 11:04 AM Surafel Temesgen <surafel3000@gmail.com>\n> wrote:\n>\n> The docs have two instances of \"EndtTime\" that should be \"EndTime\".\n>\n\nSince my first language is not English, I'm glad you found only this error\nin the docs. I will send a rebased patch soon.\n\nregards\nSurafel", "msg_date": "Mon, 21 Dec 2020 21:01:23 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi Surafel,\n\nOn Tue, Dec 22, 2020 at 3:01 AM Surafel Temesgen <surafel3000@gmail.com> wrote:\n>\n> Hi Ryan,\n>\n> On Fri, Dec 18, 2020 at 10:28 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>>\n>> On Thu, Nov 19, 2020 at 11:04 AM Surafel Temesgen <surafel3000@gmail.com> wrote:\n>>\n>> The docs have two instances of \"EndtTime\" that should be \"EndTime\".\n>\n>\n> Since my first language is not English, I'm glad you found only this error\n> in the docs. I will send a rebased patch soon.\n>\n\nThe patch is not submitted yet. Are you planning to submit the updated\npatch? Please also note the v7 patch cannot be applied to the current\nHEAD. I'm switching the patch as Waiting on Author.\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 4 Jan 2021 23:23:41 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Mon, Jan 4, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Please also note the v7 patch cannot be applied to the current HEAD. I'm switching the patch as Waiting on Author.\n\nSurafel, please say whether you are working on this or not. If you\nneed help, let us know.\n\nOn Tue, 7 Jan 2020 at 10:33, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> I think that the start and end timestamps represent the period where\n> that version of the row was active. So UPDATE should modify the start\n> timestamp of the new version to the same value with the end timestamp\n> of the old version to the updated time. Thus, I don't think adding\n> start timestamp to PK doesn't work as expected. That hinders us from
That hinders us from\n> rejecting rows with the same user-defined unique key because start\n> timestamps is different each time of inserts. I think what Surafel is\n> doing in the current patch is correct. Only end_timestamp = +infinity\n> rejects another non-historical (= live) version with the same\n> user-defined unique key.\n\nThe end_time needs to be updated when a row is updated, so it cannot\nform part of the PK. If you try to force that to happen, then logical\nreplication will not work with system versioned tables, which would be\na bad thing. So *only* start_time should be added to the PK to make\nthis work. (A later comment also says the start_time needs to be\nupdated, which makes no sense!)\n\nOn Wed, 23 Oct 2019 at 21:03, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n> I don't see any error handling for transaction anomalies. In READ\n> COMMITTED, you can easily end up with a case where the end time comes\n> before the start time. I don't even see anything constraining start\n> time to be strictly inferior to the end time. Such a constraint will be\n> necessary for application-time periods (which your patch doesn't address\n> at all but that's okay).\n\nI don't see how it can have meaning to have an end_time earlier than a\nstart_time, so yes that should be checked. Having said that, if we use\na statement timestamp on row insertion then, yes, the end_time could\nbe earlier than start time, so that is just wrong. Ideally we would\nuse commit timestamp and fill the values in later. 
So the only thing\nthat makes sense for me is to use the dynamic time of insertion while\nwe hold the buffer lock, otherwise we will get anomalies.\n\nThe work looks interesting and I will be doing a longer review.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 7 Jan 2021 17:59:39 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Jan 7, 2021 at 5:59 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Mon, Jan 4, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Please also note the v7 patch cannot be applied to the current HEAD. I'm switching the patch as Waiting on Author.\n>\n> Surafel, please say whether you are working on this or not. If you\n> need help, let us know.\n\nI've minimally rebased the patch to current head so that it compiles\nand passes current make check.\n\n From here, I will add further docs and tests to enhance review and\ndiscover issues.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 8 Jan 2021 07:13:40 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 8, 2021 at 7:13 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> I've minimally rebased the patch to current head so that it compiles\n> and passes current make check.\n\nFull version attached\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 8 Jan 2021 07:34:43 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 8, 2021 at 7:34 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 7:13 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > I've minimally rebased the patch to 
current head so that it compiles\n> > and passes current make check.\n>\n> Full version attached\n\nNew version attached with improved error messages, some additional\ndocs and a review of tests.\n\n* UPDATE doesn't set EndTime correctly, so err... the patch doesn't\nwork on this aspect.\nEverything else does actually work, AFAICS, so we \"just\" need a way to\nupdate the END_TIME column in place...\nSo input from other Hackers/Committers needed on this point to see\nwhat is acceptable.\n\n* Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved\n\n* No discussion, comments or tests around freezing and whether that\ncauses issues here\n\n* What happens if you ask for a future time?\nIt will give an inconsistent result as it scans, so we should refuse a\nquery for time > current_timestamp.\n\n* ALTER TABLE needs some work, it's a bit klugey at the moment and\nneeds extra tests.\nShould refuse DROP COLUMN on a system time column\n\n* Do StartTime and EndTime show in SELECT *? Currently, yes. Would\nguess we wouldn't want them to, not sure what standard says.\n\n* The syntax changes in gram.y probably need some coralling\n\nOverall, it's a pretty good patch and worth working on more. I will\nconsider a recommendation on what to do with this.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 8 Jan 2021 12:33:59 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "\nOn 1/8/21 7:33 AM, Simon Riggs wrote:\n>\n> * What happens if you ask for a future time?\n> It will give an inconsistent result as it scans, so we should refuse a\n> query for time > current_timestamp.\n\n\nThat seems like a significant limitation. 
Can we fix it instead of\nrefusing the query?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 8 Jan 2021 08:38:42 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 8, 2021 at 5:34 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Fri, Jan 8, 2021 at 7:34 AM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n> >\n> > On Fri, Jan 8, 2021 at 7:13 AM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n> >\n> > > I've minimally rebased the patch to current head so that it compiles\n> > > and passes current make check.\n> >\n> > Full version attached\n>\n> New version attached with improved error messages, some additional\n> docs and a review of tests.\n>\n>\nThe v10 patch fails to make on the current master branch (15b824da). Error:\n\n\nmake[2]: Entering directory\n'/var/lib/postgresql/git/postgresql/src/backend/parser'\n'/usr/bin/perl' ./check_keywords.pl gram.y\n../../../src/include/parser/kwlist.h\n/usr/bin/bison -Wno-deprecated -d -o gram.c gram.y\ngram.y:3685.55-56: error: $4 of ‘ColConstraintElem’ has no declared type\n n->contype = ($4)->contype;\n ^^\ngram.y:3687.56-57: error: $4 of ‘ColConstraintElem’ has no declared type\n n->raw_expr = ($4)->raw_expr;\n ^^\ngram.y:3734.41-42: error: $$ of ‘generated_type’ has no declared type\n $$ = n;\n ^^\ngram.y:3741.41-42: error: $$ of ‘generated_type’ has no declared type\n $$ = n;\n ^^\ngram.y:3748.41-42: error: $$ of ‘generated_type’ has no declared type\n $$ = n;\n ^^\n../../../src/Makefile.global:750: recipe for target 'gram.c' failed\nmake[2]: *** [gram.c] Error 1\nmake[2]: Leaving directory\n'/var/lib/postgresql/git/postgresql/src/backend/parser'\nMakefile:137: recipe for target 'parser/gram.h' failed\nmake[1]: *** [parser/gram.h] Error 2\nmake[1]: Leaving directory 
'/var/lib/postgresql/git/postgresql/src/backend'\nsrc/Makefile.global:389: recipe for target 'submake-generated-headers'\nfailed\nmake: *** [submake-generated-headers] Error 2\n\n\n* UPDATE doesn't set EndTime correctly, so err... the patch doesn't\n> work on this aspect.\n> Everything else does actually work, AFAICS, so we \"just\" need a way to\n> update the END_TIME column in place...\n> So input from other Hackers/Committers needed on this point to see\n> what is acceptable.\n>\n> * Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved\n>\n> * No discussion, comments or tests around freezing and whether that\n> causes issues here\n>\n> * What happens if you ask for a future time?\n> It will give an inconsistent result as it scans, so we should refuse a\n> query for time > current_timestamp.\n\n* ALTER TABLE needs some work, it's a bit klugey at the moment and\n> needs extra tests.\n> Should refuse DROP COLUMN on a system time column\n>\n> * Do StartTime and EndTime show in SELECT *? Currently, yes. Would\n> guess we wouldn't want them to, not sure what standard says.\n>\n>\nI prefer to have them hidden by default. This was mentioned up-thread with\nno decision, it seems the standard is ambiguous. MS SQL appears to\nhave flip-flopped on this decision [1].\n\n> SELECT * shows the timestamp columns, don't we need to hide the period\n> > timestamp columns from this query?\n> >\n> >\n> The sql standard didn't dictate hiding the column but i agree hiding it by\n> default is good thing because this columns are used by the system\n> to classified the data and not needed in user side frequently. I can\n> change to that if we have consensus\n\n\n\n\n\n> * The syntax changes in gram.y probably need some coralling\n>\n> Overall, it's a pretty good patch and worth working on more. 
I will\nconsider a recommendation on what to do with this.\n>\n> --\n> Simon Riggs                http://www.EnterpriseDB.com/\n\n\nI am increasingly interested in this feature and have heard others asking\nfor this type of functionality.  I'll do my best to continue reviewing and\ntesting.\n\nThanks,\n\nRyan Lambert\n\n[1] https://bornsql.ca/blog/temporal-tables-hidden-columns/", "msg_date": "Fri, 8 Jan 2021 09:50:03 -0700", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 8, 2021 at 4:50 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 5:34 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>\n>> On Fri, Jan 8, 2021 at 7:34 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>> >\n>> > On Fri, Jan 8, 2021 at 7:13 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>> >\n>> > > I've minimally rebased the patch to current head so that it compiles\n>> > > and passes current make check.\n>> >\n>> > Full version attached\n>>\n>> New version attached with improved error messages, some additional\n>> docs and a review of tests.\n>>\n>\n> The v10 patch fails to make on the current master branch (15b824da). 
Error:\n\nUpdated v11 with additional docs and some rewording of messages/tests\nto use \"system versioning\" correctly.\n\nNo changes on the points previously raised.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 8 Jan 2021 18:38:29 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 8, 2021 at 11:38 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Fri, Jan 8, 2021 at 4:50 PM Ryan Lambert <ryan@rustprooflabs.com>\n> wrote:\n> >\n> > On Fri, Jan 8, 2021 at 5:34 AM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n> >>\n> >> On Fri, Jan 8, 2021 at 7:34 AM Simon Riggs <\n> simon.riggs@enterprisedb.com> wrote:\n> >> >\n> >> > On Fri, Jan 8, 2021 at 7:13 AM Simon Riggs <\n> simon.riggs@enterprisedb.com> wrote:\n> >> >\n> >> > > I've minimally rebased the patch to current head so that it compiles\n> >> > > and passes current make check.\n> >> >\n> >> > Full version attached\n> >>\n> >> New version attached with improved error messages, some additional\n> >> docs and a review of tests.\n> >>\n> >\n> > The v10 patch fails to make on the current master branch (15b824da).\n> Error:\n>\n> Updated v11 with additional docs and some rewording of messages/tests\n> to use \"system versioning\" correctly.\n>\n> No changes on the points previously raised.\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n\n\n\nThank you! The v11 applies and installs. I tried a simple test,\nunfortunately it appears the versioning is not working. 
The initial value\nis not preserved through an update and a new row does not appear to be\ncreated.\n\nCREATE TABLE t\n(\n id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,\n v BIGINT NOT NULL\n)\nWITH SYSTEM VERSIONING\n;\n\nVerify start/end time columns created.\n\nt=# \\d t\n Table \"public.t\"\n Column | Type | Collation | Nullable |\nDefault\n-----------+--------------------------+-----------+----------+----------------------------------\n id | bigint | | not null | generated by\ndefault as identity\n v | bigint | | not null |\n StartTime | timestamp with time zone | | not null | generated\nalways as row start\n EndTime | timestamp with time zone | | not null | generated\nalways as row end\nIndexes:\n \"t_pkey\" PRIMARY KEY, btree (id, \"EndTime\")\n\n\nAdd a row and check the timestamps set as expected.\n\n\nINSERT INTO t (v) VALUES (1);\n\n SELECT * FROM t;\n id | v | StartTime | EndTime\n----+---+-------------------------------+----------\n 1 | 1 | 2021-01-08 20:56:20.848097+00 | infinity\n\nUpdate the row.\n\nUPDATE t SET v = -1;\n\nThe value for v updated but StartTime is the same.\n\n\nSELECT * FROM t;\n id | v | StartTime | EndTime\n----+----+-------------------------------+----------\n 1 | -1 | 2021-01-08 20:56:20.848097+00 | infinity\n\n\nQuerying the table for all versions only returns the single updated row (v\n= -1) with the original row StartTime. 
The original value has disappeared\nentirely it seems.\n\nSELECT * FROM t\nFOR SYSTEM_TIME FROM '-infinity' TO 'infinity';\n\n\nI also created a non-versioned table and later added the columns using\nALTER TABLE and encountered the same behavior.\n\n\nRyan Lambert\n\nOn Fri, Jan 8, 2021 at 11:38 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:On Fri, Jan 8, 2021 at 4:50 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 5:34 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>\n>> On Fri, Jan 8, 2021 at 7:34 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>> >\n>> > On Fri, Jan 8, 2021 at 7:13 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>> >\n>> > > I've minimally rebased the patch to current head so that it compiles\n>> > > and passes current make check.\n>> >\n>> > Full version attached\n>>\n>> New version attached with improved error messages, some additional\n>> docs and a review of tests.\n>>\n>\n> The v10 patch fails to make on the current master branch (15b824da).  Error:\n\nUpdated v11 with additional docs and some rewording of messages/tests\nto use \"system versioning\" correctly.\n\nNo changes on the points previously raised.\n\n-- \nSimon Riggs                http://www.EnterpriseDB.com/Thank you!  The v11 applies and installs.  I tried a simple test, unfortunately it appears the versioning is not working. 
The initial value is not preserved through an update and a new row does not appear to be created.CREATE TABLE t (    id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,    v BIGINT NOT NULL)WITH SYSTEM VERSIONING;Verify start/end time columns created.t=# \\d t                                        Table \"public.t\"  Column   |           Type           | Collation | Nullable |             Default              -----------+--------------------------+-----------+----------+---------------------------------- id        | bigint                   |           | not null | generated by default as identity v         | bigint                   |           | not null |  StartTime | timestamp with time zone |           | not null | generated always as row start EndTime   | timestamp with time zone |           | not null | generated always as row endIndexes:    \"t_pkey\" PRIMARY KEY, btree (id, \"EndTime\")Add a row and check the timestamps set as expected.INSERT INTO t (v) VALUES (1); SELECT * FROM t; id | v |           StartTime           | EndTime  ----+---+-------------------------------+----------  1 | 1 | 2021-01-08 20:56:20.848097+00 | infinityUpdate the row.UPDATE t SET v = -1;The value for v updated but StartTime is the same.SELECT * FROM t; id | v  |           StartTime           | EndTime  ----+----+-------------------------------+----------  1 | -1 | 2021-01-08 20:56:20.848097+00 | infinityQuerying the table for all versions only returns the single updated row (v = -1) with the original row StartTime.  
The original value has disappeared entirely it seems.SELECT * FROM tFOR SYSTEM_TIME FROM '-infinity' TO 'infinity';I also created a non-versioned table and later added the columns using ALTER TABLE and encountered the same behavior.Ryan Lambert", "msg_date": "Fri, 8 Jan 2021 14:19:16 -0700", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 8, 2021 at 9:19 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n\n>> Updated v11 with additional docs and some rewording of messages/tests\n>> to use \"system versioning\" correctly.\n>>\n>> No changes on the points previously raised.\n>>\n> Thank you! The v11 applies and installs. I tried a simple test, unfortunately it appears the versioning is not working. The initial value is not preserved through an update and a new row does not appear to be created.\n\nAgreed. I already noted this in my earlier review comments.\n\nI will send in a new version with additional tests so we can see more\nclearly that the tests fail on the present patch.\n\nI will post more on this next week.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 9 Jan 2021 10:39:15 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Sat, Jan 9, 2021 at 10:39 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 9:19 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>\n> >> Updated v11 with additional docs and some rewording of messages/tests\n> >> to use \"system versioning\" correctly.\n> >>\n> >> No changes on the points previously raised.\n> >>\n> > Thank you! The v11 applies and installs. I tried a simple test, unfortunately it appears the versioning is not working. The initial value is not preserved through an update and a new row does not appear to be created.\n>\n> Agreed. 
I already noted this in my earlier review comments.\n\nI'm pleased to note that UPDATE-not-working was a glitch, possibly in\nan earlier patch merge. That now works as advertised.\n\nI've added fairly clear SGML docs to explain how the current patch\nworks, which should assist wider review.\n\nAlso moved test SQL around a bit, renamed some things in code for\nreadability, but not done any structural changes.\n\nThis is looking much better now... with the TODO/issues list now\nlooking like this...\n\n* Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved.\nProbably need to add a test that end_timestamp > start_timestamp or ERROR,\nwhich effectively enforces serializability.\n\n* No discussion, comments or tests around freezing and whether that\ncauses issues here\n\n* What happens if you ask for a future time?\nIt will give an inconsistent result as it scans, so we should refuse a\nquery for time > current_timestamp.\n\n* ALTER TABLE needs some work, it's a bit klugey at the moment and\nneeds extra tests.\nShould refuse DROP COLUMN on a system time column, but currently doesn't\n\n* UPDATE foo SET start_timestamp = DEFAULT should fail but currently doesn't\n\n* Do StartTime and EndTime show in SELECT *? Currently, yes. Would\nguess we wouldn't want them to, not sure what standard says.\n\n From here, the plan would be to set this to \"Ready For Committer\" in\nabout a week. 
That is not the same thing as me saying it is\nready-for-commit, but we need some more eyes on this patch to decide\nif it is something we want and, if so, are the code changes cool.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Mon, 11 Jan 2021 14:02:18 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Mon, Jan 11, 2021 at 7:02 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Sat, Jan 9, 2021 at 10:39 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Fri, Jan 8, 2021 at 9:19 PM Ryan Lambert <ryan@rustprooflabs.com>\n> wrote:\n> >\n> > >> Updated v11 with additional docs and some rewording of messages/tests\n> > >> to use \"system versioning\" correctly.\n> > >>\n> > >> No changes on the points previously raised.\n> > >>\n> > > Thank you! The v11 applies and installs. I tried a simple test,\n> unfortunately it appears the versioning is not working. The initial value\n> is not preserved through an update and a new row does not appear to be\n> created.\n> >\n> > Agreed. I already noted this in my earlier review comments.\n>\n> I'm pleased to note that UPDATE-not-working was a glitch, possibly in\n> an earlier patch merge. That now works as advertised.\n>\n\nIt is working as expected now, Thank you!\n\n\n> I've added fairly clear SGML docs to explain how the current patch\n> works, which should assist wider review.\n>\n\nThe default column names changed to start_timestamp and end_timestamp. A\nnumber of places in the docs still refer to StartTime and EndTime. I\nprefer the new names without MixedCase.\n\n\n>\n> Also moved test SQL around a bit, renamed some things in code for\n> readability, but not done any structural changes.\n>\n> This is looking much better now... 
with the TODO/issues list now\n> looking like this...\n>\n> * Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved.\n> Probably need to add a test that end_timestamp > start_timestamp or ERROR,\n> which effectively enforces serializability.\n>\n> * No discussion, comments or tests around freezing and whether that\n> causes issues here\n>\n> * What happens if you ask for a future time?\n> It will give an inconsistent result as it scans, so we should refuse a\n> query for time > current_timestamp.\n\n* ALTER TABLE needs some work, it's a bit klugey at the moment and\n> needs extra tests.\n> Should refuse DROP COLUMN on a system time column, but currently doesn't\n>\n> * UPDATE foo SET start_timestamp = DEFAULT should fail but currently\n> doesn't\n>\n> * Do StartTime and EndTime show in SELECT *? Currently, yes. Would\n> guess we wouldn't want them to, not sure what standard says.\n>\n> From here, the plan would be to set this to \"Ready For Committer\" in\n> about a week. That is not the same thing as me saying it is\n> ready-for-commit, but we need some more eyes on this patch to decide\n> if it is something we want and, if so, are the code changes cool.\n>\n>\nShould I invest time now into further testing with more production-like\nscenarios on this patch? Or would it be better to wait on putting effort\ninto that until it has had more review? I don't have much to offer for\nhelp on your current todo list.\n\nThanks,\n\nRyan Lambert\n\nOn Mon, Jan 11, 2021 at 7:02 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:On Sat, Jan 9, 2021 at 10:39 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 9:19 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>\n> >> Updated v11 with additional docs and some rewording of messages/tests\n> >> to use \"system versioning\" correctly.\n> >>\n> >> No changes on the points previously raised.\n> >>\n> > Thank you!  The v11 applies and installs.  
I tried a simple test, unfortunately it appears the versioning is not working. The initial value is not preserved through an update and a new row does not appear to be created.\n>\n> Agreed. I already noted this in my earlier review comments.\n\nI'm pleased to note that UPDATE-not-working was a glitch, possibly in\nan earlier patch merge. That now works as advertised.It is working as expected now, Thank you!   \nI've added fairly clear SGML docs to explain how the current patch\nworks, which should assist wider review.The default column names changed to start_timestamp and end_timestamp.  A number of places in the docs still refer to StartTime and EndTime.  I prefer the new names without MixedCase. \n\nAlso moved test SQL around a bit, renamed some things in code for\nreadability, but not done any structural changes.\n\nThis is looking much better now... with the TODO/issues list now\nlooking like this...\n\n* Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved.\nProbably need to add a test that end_timestamp > start_timestamp or ERROR,\nwhich effectively enforces serializability.\n\n* No discussion, comments or tests around freezing and whether that\ncauses issues here\n\n* What happens if you ask for a future time?\nIt will give an inconsistent result as it scans, so we should refuse a\nquery for time > current_timestamp.\n* ALTER TABLE needs some work, it's a bit klugey at the moment and\nneeds extra tests.\nShould refuse DROP COLUMN on a system time column, but currently doesn't\n\n* UPDATE foo SET start_timestamp = DEFAULT should fail but currently doesn't\n\n* Do StartTime and EndTime show in SELECT *? Currently, yes. Would\nguess we wouldn't want them to, not sure what standard says.\n\n From here, the plan would be to set this to \"Ready For Committer\" in\nabout a week. 
That is not the same thing as me saying it is\nready-for-commit, but we need some more eyes on this patch to decide\nif it is something we want and, if so, are the code changes cool.\n Should I invest time now into further testing with more production-like scenarios on this patch?  Or would it be better to wait on putting effort into that until it has had more review?  I don't have much to offer for help on your current todo list.Thanks, Ryan Lambert", "msg_date": "Tue, 12 Jan 2021 16:14:13 -0700", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi Andrew,\nOn Fri, Jan 8, 2021 at 4:38 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 1/8/21 7:33 AM, Simon Riggs wrote:\n> >\n> > * What happens if you ask for a future time?\n> > It will give an inconsistent result as it scans, so we should refuse a\n> > query for time > current_timestamp.\n>\n>\n> That seems like a significant limitation. Can we fix it instead of\n> refusing the query?\n>\n>\n\nQuerying a table without system versioning with a value of non existent\ndata returns no record rather than error out or have other behavior. i\ndon't\nunderstand the needs for special treatment here\n\nregards\nSurafel\n\nHi Andrew,On Fri, Jan 8, 2021 at 4:38 PM Andrew Dunstan <andrew@dunslane.net> wrote:\nOn 1/8/21 7:33 AM, Simon Riggs wrote:\n>\n> * What happens if you ask for a future time?\n> It will give an inconsistent result as it scans, so we should refuse a\n> query for time > current_timestamp.\n\n\nThat seems like a significant limitation. Can we fix it instead of\nrefusing the query?\nQuerying  a table without system versioning with a value of non existent data returns no record rather than error out or have other behavior. 
i don't understand the needs for special treatment here regards Surafel", "msg_date": "Thu, 14 Jan 2021 20:03:16 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi Simon,\nThank you for all the work you does\n\nOn Mon, Jan 11, 2021 at 5:02 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n>\n>\n> * Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved.\n> Probably need to add a test that end_timestamp > start_timestamp or ERROR,\n> which effectively enforces serializability.\n>\n>\n\nThis scenario doesn't happen. There are no possibility of a record being\ndeleted or updated before inserting\n\n\n> * No discussion, comments or tests around freezing and whether that\n> causes issues here\n>\n>\nThis feature introduced no new issue regarding freezing. 
Adding the doc about the table size growth because of a retention of old record seems enough for me  \n* ALTER TABLE needs some work, it's a bit klugey at the moment and\nneeds extra tests.\nShould refuse DROP COLUMN on a system time column, but currently doesn't\n\n* UPDATE foo SET start_timestamp = DEFAULT should fail but currently doesn't\nokay i will fix it regards Surafel", "msg_date": "Thu, 14 Jan 2021 20:42:21 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi Ryan\n\nOn Fri, Jan 8, 2021 at 7:50 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n\n> I prefer to have them hidden by default. This was mentioned up-thread\n> with no decision, it seems the standard is ambiguous. MS SQL appears to\n> have flip-flopped on this decision [1].\n>\n>\nI will change it to hidden by default if there are no objection\n\nregards\nSurafel\n\nHi Ryan On Fri, Jan 8, 2021 at 7:50 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:I prefer to have them hidden by default.  This was mentioned up-thread with no decision, it seems the standard is ambiguous.  MS SQL appears to have flip-flopped on this decision [1].I will change it to hidden by default if there are no objection regards Surafel", "msg_date": "Thu, 14 Jan 2021 20:46:41 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Jan 14, 2021 at 5:46 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n\n> On Fri, Jan 8, 2021 at 7:50 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>>\n>> I prefer to have them hidden by default. This was mentioned up-thread\n> with no decision, it seems the standard is ambiguous. 
MS SQL appears to have flip-flopped on this decision [1].\n\nI think the default should be like this:\n\nSELECT * FROM foo FOR SYSTEM_TIME AS OF ...\nshould NOT include the Start and End timestamp columns\nbecause this acts like a normal query just with a different snapshot timestamp\n\nSELECT * FROM foo FOR SYSTEM_TIME BETWEEN x AND y\nSHOULD include the Start and End timestamp columns\nsince this form of query can include multiple row versions for the\nsame row, so it makes sense to see the validity times\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 14 Jan 2021 21:22:26 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Jan 14, 2021 at 5:42 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n>\n> Hi Simon,\n> Thank you for all the work you does\n\nNo problem.\n\n> On Mon, Jan 11, 2021 at 5:02 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>\n>>\n>>\n>> * Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved.\n>> Probably need to add a test that end_timestamp > start_timestamp or ERROR,\n>> which effectively enforces serializability.\n>>\n>\n>\n> This scenario doesn't happen.\n\nYes, I think it can. 
The current situation is that the Start or End is\nset to the Transaction Start Timestamp.\nSo if t2 starts before t1, then if t1 creates a row and t2 deletes it\nthen we will have start=t1 end=t2, but t2<t1\nYour tests don't show that because it must happen concurrently.\nWe need to add an isolation test to show this, or to prove it doesn't happen.\n\n> There are no possibility of a record being deleted or updated before inserting\n\nAgreed, but that was not the point.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 14 Jan 2021 21:27:24 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Jan 14, 2021 at 2:22 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Thu, Jan 14, 2021 at 5:46 PM Surafel Temesgen <surafel3000@gmail.com>\n> wrote:\n>\n> > On Fri, Jan 8, 2021 at 7:50 PM Ryan Lambert <ryan@rustprooflabs.com>\n> wrote:\n> >>\n> >> I prefer to have them hidden by default. This was mentioned up-thread\n> with no decision, it seems the standard is ambiguous. MS SQL appears to\n> have flip-flopped on this decision [1].\n>\n> I think the default should be like this:\n>\n> SELECT * FROM foo FOR SYSTEM_TIME AS OF ...\n> should NOT include the Start and End timestamp columns\n> because this acts like a normal query just with a different snapshot\n> timestamp\n>\n> SELECT * FROM foo FOR SYSTEM_TIME BETWEEN x AND y\n> SHOULD include the Start and End timestamp columns\n> since this form of query can include multiple row versions for the\n> same row, so it makes sense to see the validity times\n>\n\n+1\n\nRyan Lambert\n\nOn Thu, Jan 14, 2021 at 2:22 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:On Thu, Jan 14, 2021 at 5:46 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n\n> On Fri, Jan 8, 2021 at 7:50 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>>\n>> I prefer to have them hidden by default.  
This was mentioned up-thread with no decision, it seems the standard is ambiguous.  MS SQL appears to have flip-flopped on this decision [1].\n\nI think the default should be like this:\n\nSELECT * FROM foo FOR SYSTEM_TIME AS OF ...\nshould NOT include the Start and End timestamp columns\nbecause this acts like a normal query just with a different snapshot timestamp\n\nSELECT * FROM foo FOR SYSTEM_TIME BETWEEN x AND y\nSHOULD include the Start and End timestamp columns\nsince this form of query can include multiple row versions for the\nsame row, so it makes sense to see the validity times+1Ryan Lambert", "msg_date": "Thu, 14 Jan 2021 16:01:18 -0700", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 15, 2021 at 12:27 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n>\n> Yes, I think it can. The current situation is that the Start or End is\n> set to the Transaction Start Timestamp.\n> So if t2 starts before t1, then if t1 creates a row and t2 deletes it\n> then we will have start=t1 end=t2, but t2<t1\n> Your tests don't show that because it must happen concurrently.\n> We need to add an isolation test to show this, or to prove it doesn't\n> happen.\n>\n>\n\nDoes MVCC allow that? i am not expert on MVCC but i don't\nthink t2 can see the row create by translation started before\nitself\n\nregards\nSurafel\n\nOn Fri, Jan 15, 2021 at 12:27 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\nYes, I think it can. The current situation is that the Start or End is\nset to the Transaction Start Timestamp.\nSo if t2 starts before t1, then if t1 creates a row and t2 deletes it\nthen we will have start=t1 end=t2, but t2<t1\nYour tests don't show that because it must happen concurrently.\nWe need to add an isolation test to show this, or to prove it doesn't happen.\nDoes MVCC allow that? 
i am not expert on MVCC but i don't think t2 can see the row create by translation started beforeitself regards Surafel", "msg_date": "Fri, 15 Jan 2021 19:46:16 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 15, 2021 at 4:46 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n>\n>\n>\n> On Fri, Jan 15, 2021 at 12:27 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>\n>>\n>> Yes, I think it can. The current situation is that the Start or End is\n>> set to the Transaction Start Timestamp.\n>> So if t2 starts before t1, then if t1 creates a row and t2 deletes it\n>> then we will have start=t1 end=t2, but t2<t1\n>> Your tests don't show that because it must happen concurrently.\n>> We need to add an isolation test to show this, or to prove it doesn't happen.\n>>\n>\n>\n> Does MVCC allow that? i am not expert on MVCC but i don't\n> think t2 can see the row create by translation started before\n> itself\n\nYeh. Read Committed mode can see anything committed prior to the start\nof the current statement, but UPDATEs always update the latest version\neven if they can't see it.\n\nAnyway, isolationtester spec file needed to test this. 
See\nsrc/test/isolation and examples in specs/ directory\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 15 Jan 2021 16:50:24 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 15, 2021 at 12:22 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> SELECT * FROM foo FOR SYSTEM_TIME AS OF ...\n> should NOT include the Start and End timestamp columns\n> because this acts like a normal query just with a different snapshot timestamp\n>\n> SELECT * FROM foo FOR SYSTEM_TIME BETWEEN x AND y\n> SHOULD include the Start and End timestamp columns\n> since this form of query can include multiple row versions for the\n> same row, so it makes sense to see the validity times\n>\n>\nOne disadvantage of returning system time columns is it\nbreaks upward compatibility. if an existing application wants to\nswitch to system versioning it will break.\n\nregards\nSurafel\n\nOn Fri, Jan 15, 2021 at 12:22 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:SELECT * FROM foo FOR SYSTEM_TIME AS OF ...\nshould NOT include the Start and End timestamp columns\nbecause this acts like a normal query just with a different snapshot timestamp\n\nSELECT * FROM foo FOR SYSTEM_TIME BETWEEN x AND y\nSHOULD include the Start and End timestamp columns\nsince this form of query can include multiple row versions for the\nsame row, so it makes sense to see the validity times\nOne disadvantage of returning system time columns is it breaks upward compatibility. 
if an existing application wants to switch to system versioning it will break. regards Surafel", "msg_date": "Fri, 15 Jan 2021 19:56:42 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 15, 2021 at 4:56 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n>\n>\n>\n> On Fri, Jan 15, 2021 at 12:22 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>\n>> SELECT * FROM foo FOR SYSTEM_TIME AS OF ...\n>> should NOT include the Start and End timestamp columns\n>> because this acts like a normal query just with a different snapshot timestamp\n>>\n>> SELECT * FROM foo FOR SYSTEM_TIME BETWEEN x AND y\n>> SHOULD include the Start and End timestamp columns\n>> since this form of query can include multiple row versions for the\n>> same row, so it makes sense to see the validity times\n>>\n>\n> One disadvantage of returning system time columns is it\n> breaks upward compatibility. if an existing application wants to\n> switch to system versioning it will break.\n\nThere are no existing applications, so for PostgreSQL, it wouldn't be an issue.\n\nIf you mean compatibility with other databases, that might be an\nargument to do what others have done. 
What have other databases done\nfor SELECT * ?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 15 Jan 2021 17:01:57 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hello,\n\nit seems that Oracle (11R2) doesn't add the Start and End timestamp columns \nand permit statement like\n\nselect * from tt\nunion\nselect * from tt\nAS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '6' SECOND)\nminus \nselect * from tt\nVERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '6' second) and\nSYSTIMESTAMP;\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Fri, 15 Jan 2021 12:26:05 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 1/14/21 10:22 PM, Simon Riggs wrote:\n> On Thu, Jan 14, 2021 at 5:46 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n> \n>> On Fri, Jan 8, 2021 at 7:50 PM Ryan Lambert <ryan@rustprooflabs.com> wrote:\n>>>\n>>> I prefer to have them hidden by default. This was mentioned up-thread with no decision, it seems the standard is ambiguous. MS SQL appears to have flip-flopped on this decision [1].\n> \n> I think the default should be like this:\n> \n> SELECT * FROM foo FOR SYSTEM_TIME AS OF ...\n> should NOT include the Start and End timestamp columns\n> because this acts like a normal query just with a different snapshot timestamp\n> \n> SELECT * FROM foo FOR SYSTEM_TIME BETWEEN x AND y\n> SHOULD include the Start and End timestamp columns\n> since this form of query can include multiple row versions for the\n> same row, so it makes sense to see the validity times\n\n\nI don't read the standard as being ambiguous about this at all. 
The\ncolumns should be shown just like any other column of the table.\n\nI am not opposed to being able to set an attribute on columns allowing\nthem to be excluded from \"*\" but that is irrelevant to this patch.\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 16 Jan 2021 15:03:49 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 1/14/21 6:42 PM, Surafel Temesgen wrote:\n> Hi Simon,\n> Thank you for all the work you does\n> \n> On Mon, Jan 11, 2021 at 5:02 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n> \n>>\n>>\n>> * Anomalies around use of CURRENT_TIMESTAMP are not discussed or resolved.\n>> Probably need to add a test that end_timestamp > start_timestamp or ERROR,\n>> which effectively enforces serializability.\n>>\n>>\n> \n> This scenario doesn't happen.\n\nIt *does* happen and the standard even provides a specific error code\nfor it (2201H).\n\nPlease look at my extension for this feature which implements all the\nrequirements of the standard (except syntax grammar, of course).\nhttps://github.com/xocolatl/periods/\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 16 Jan 2021 15:08:48 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, Jan 15, 2021 at 8:02 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n>\n> There are no existing applications, so for PostgreSQL, it wouldn't be an\n> issue.\n>\n>\nYes we don't have but the main function of ALTER TABLE foo ADD SYSTEM\nVERSIONING\nis to add system versioning functionality to existing application\n\nregards\nSurafel\n\nOn Fri, Jan 15, 2021 at 8:02 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\nThere are no existing applications, so for PostgreSQL, it wouldn't be an issue.\nYes we don't have but the main function of ALTER TABLE foo ADD SYSTEM VERSIONINGis to add 
system versioning functionality to existing applicationregards Surafel", "msg_date": "Sat, 16 Jan 2021 21:39:38 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 1/16/21 7:39 PM, Surafel Temesgen wrote:\n> On Fri, Jan 15, 2021 at 8:02 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n> \n>>\n>> There are no existing applications, so for PostgreSQL, it wouldn't be an\n>> issue.\n>>\n>>\n> Yes we don't have but the main function of ALTER TABLE foo ADD SYSTEM\n> VERSIONING\n> is to add system versioning functionality to existing application\n\nI haven't looked at this patch in a while, but I hope that ALTER TABLE t\nADD SYSTEM VERSIONING is not adding any columns. That is a bug if it does.\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 16 Jan 2021 20:12:30 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Sat, Jan 16, 2021 at 10:12 PM Vik Fearing <vik@postgresfriends.org>\nwrote:\n\n>\n> I haven't looked at this patch in a while, but I hope that ALTER TABLE t\n> ADD SYSTEM VERSIONING is not adding any columns. That is a bug if it does.\n>\n>\nYes, that is how I implement it. I don't understand how it became a bug?\n\nregards\nSurafel\n\nOn Sat, Jan 16, 2021 at 10:12 PM Vik Fearing <vik@postgresfriends.org> wrote:\nI haven't looked at this patch in a while, but I hope that ALTER TABLE t\nADD SYSTEM VERSIONING is not adding any columns.  That is a bug if it does.\nYes, that is how I implement it. I don't understand how it became a bug? 
regards Surafel", "msg_date": "Sun, 17 Jan 2021 19:46:35 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 1/17/21 5:46 PM, Surafel Temesgen wrote:\n> On Sat, Jan 16, 2021 at 10:12 PM Vik Fearing <vik@postgresfriends.org>\n> wrote:\n> \n>>\n>> I haven't looked at this patch in a while, but I hope that ALTER TABLE t\n>> ADD SYSTEM VERSIONING is not adding any columns. That is a bug if it does.\n>>\n>>\n> Yes, that is how I implement it. I don't understand how it became a bug?\n\nThis is not good, and I see that DROP SYSTEM VERSIONING also removes\nthese columns which is even worse. Please read the standard that you\nare trying to implement!\n\nI will do a more thorough review of the functionalities in this patch\n(not necessarily the code) this week.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 17 Jan 2021 23:42:59 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Mon, Jan 18, 2021 at 1:43 AM Vik Fearing <vik@postgresfriends.org> wrote:\n\n>\n> This is not good, and I see that DROP SYSTEM VERSIONING also removes\n> these columns which is even worse. Please read the standard that you\n> are trying to implement!\n>\n>\nThe standard states the function of ALTER TABLE ADD SYSTEM VERSIONING\nas \"Alter a regular persistent base table to a system-versioned table\" and\nsystem versioned table is described in the standard by two generated\nstored constraint columns and implemented as such.\n\n\n> I will do a more thorough review of the functionalities in this patch\n> (not necessarily the code) this week.\n>\n>\nPlease do\n\nregards\nSurafel\n\nOn Mon, Jan 18, 2021 at 1:43 AM Vik Fearing <vik@postgresfriends.org> wrote:\nThis is not good, and I see that DROP SYSTEM VERSIONING also removes\nthese columns which is even worse.  
Please read the standard that you\nare trying to implement!\nThe standard states the function of ALTER TABLE ADD SYSTEM VERSIONINGas  \"Alter a regular persistent base table to a system-versioned table\" andsystem versioned table is described in the standard by two generatedstored constraint columns and implemented as such. \nI will do a more thorough review of the functionalities in this patch\n(not necessarily the code) this week.Please do regardsSurafel", "msg_date": "Mon, 18 Jan 2021 21:56:13 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 1/11/21 3:02 PM, Simon Riggs wrote:\n> * UPDATE foo SET start_timestamp = DEFAULT should fail but currently doesn't\n\nI'm still in the weeds of reviewing this patch, but why should this\nfail? It should not fail.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 26 Jan 2021 12:33:06 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Tue, Jan 26, 2021 at 11:33 AM Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> On 1/11/21 3:02 PM, Simon Riggs wrote:\n> > * UPDATE foo SET start_timestamp = DEFAULT should fail but currently doesn't\n>\n> I'm still in the weeds of reviewing this patch, but why should this\n> fail? 
It should not fail.\n\nIt should not be possible for the user to change the start or end\ntimestamp of a system_time time range, by definition.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 26 Jan 2021 12:16:48 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 1/26/21 1:16 PM, Simon Riggs wrote:\n> On Tue, Jan 26, 2021 at 11:33 AM Vik Fearing <vik@postgresfriends.org> wrote:\n>>\n>> On 1/11/21 3:02 PM, Simon Riggs wrote:\n>>> * UPDATE foo SET start_timestamp = DEFAULT should fail but currently doesn't\n>>\n>> I'm still in the weeds of reviewing this patch, but why should this\n>> fail? It should not fail.\n> \n> It should not be possible for the user to change the start or end\n> timestamp of a system_time time range, by definition.\n\nCorrect, but setting it to DEFAULT is not changing it.\n\nSee also SQL:2016 11.5 <default clause> General Rule 3.a.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 26 Jan 2021 13:51:13 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Tue, Jan 26, 2021 at 12:51 PM Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> On 1/26/21 1:16 PM, Simon Riggs wrote:\n> > On Tue, Jan 26, 2021 at 11:33 AM Vik Fearing <vik@postgresfriends.org> wrote:\n> >>\n> >> On 1/11/21 3:02 PM, Simon Riggs wrote:\n> >>> * UPDATE foo SET start_timestamp = DEFAULT should fail but currently doesn't\n> >>\n> >> I'm still in the weeds of reviewing this patch, but why should this\n> >> fail? It should not fail.\n> >\n> > It should not be possible for the user to change the start or end\n> > timestamp of a system_time time range, by definition.\n>\n> Correct, but setting it to DEFAULT is not changing it.\n>\n> See also SQL:2016 11.5 <default clause> General Rule 3.a.\n\nThanks for pointing this out. 
Identity columns don't currently work that way.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 26 Jan 2021 13:26:06 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Tue, Jan 26, 2021 at 2:33 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> I'm still in the weeds of reviewing this patch, but why should this\n> fail? It should not fail.\n>\n\nAttached is rebased patch that include isolation test\n\nregards\nSurafel", "msg_date": "Tue, 26 Jan 2021 19:39:18 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Jan 27, 2021, at 12:39 AM, Surafel Temesgen <surafel3000@gmail.com<mailto:surafel3000@gmail.com>> wrote:\n\n\n\nOn Tue, Jan 26, 2021 at 2:33 PM Vik Fearing <vik@postgresfriends.org<mailto:vik@postgresfriends.org>> wrote:\nI'm still in the weeds of reviewing this patch, but why should this\nfail? It should not fail.\n\nAttached is rebased patch that include isolation test\n\n\nThanks for updating the patch. 
However it cannot apply to master (e5d8a9990).\n\nHere are some comments on system-versioning-temporal-table_2021_v13.patch.\n\n+</programlisting>\n+ When system versioning is specified two columns are added which\n+ record the start timestamp and end timestamp of each row verson.\n\nverson -> version\n\n+ By default, the column names will be StartTime and EndTime, though\n+ you can specify different names if you choose.\n\nIn fact, it is start_time and end_time, not StartTime and EndTime.\nI think it's better to use <literal> label around start_time and end_time.\n\n+ column will be automatically added to the Primary Key of the\n+ table.\n\nShould we mention the unique constraints?\n\n+ The system versioning period end column will be added to the\n+ Primary Key of the table as a way of ensuring that concurrent\n+ INSERTs conflict correctly.\n\nSame as above.\n\nSince the get_row_start_time_col_name() and get_row_end_time_col_name()\nare similar, IMO we can pass a flag to get StartTime/EndTime column name,\nthought?\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOn Jan 27, 2021, at 12:39 AM, Surafel Temesgen <surafel3000@gmail.com> wrote:\n\n\n\n\n\n\n\nOn Tue, Jan 26, 2021 at 2:33 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n\nI'm still in the weeds of reviewing this patch, but why should this\nfail?  It should not fail.\n\n\n\nAttached is rebased patch that include isolation test\n\n\n\n\n\n\n\n\nThanks for updating the patch.  
However it cannot apply to master (e5d8a9990).\n\n\nHere are some comments on system-versioning-temporal-table_2021_v13.patch.\n\n\n\n+</programlisting>\n+    When system versioning is specified two columns are added which\n+    record the start timestamp and end timestamp of each row verson.\n\n\nverson -> version\n\n\n+    By default, the column names will be StartTime and EndTime, though\n+    you can specify different names if you choose.\n\n\nIn fact, it is start_time and end_time, not StartTime and EndTime.\nI think it's better to use <literal> label around start_time and end_time.\n\n\n+    column will be automatically added to the Primary Key of the\n+    table.\n\n\nShould we mention the unique constraints?\n\n\n+    The system versioning period end column will be added to the\n+    Primary Key of the table as a way of ensuring that concurrent\n+    INSERTs conflict correctly.\n\n\nSame as above.\n\n\nSince the get_row_start_time_col_name() and get_row_end_time_col_name()\nare similar, IMO we can pass a flag to get StartTime/EndTime column name,\nthought?\n\n\n\n\n\n\n--\n\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Thu, 25 Feb 2021 10:28:28 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Thu, Feb 25, 2021 at 3:28 PM Li Japin <japinli@hotmail.com> wrote:\n\n>\n> On Jan 27, 2021, at 12:39 AM, Surafel Temesgen <surafel3000@gmail.com>\n> wrote:\n>\n>\n>\n> On Tue, Jan 26, 2021 at 2:33 PM Vik Fearing <vik@postgresfriends.org>\n> wrote:\n>\n>> I'm still in the weeds of reviewing this patch, but why should this\n>> fail? It should not fail.\n>>\n>\n> Attached is rebased patch that include isolation test\n>\n>\n> Thanks for updating the patch. 
However it cannot apply to master\n> (e5d8a9990).\n>\n> Here are some comments on system-versioning-temporal-table_2021_v13.patch.\n>\n> +</programlisting>\n> + When system versioning is specified two columns are added which\n> + record the start timestamp and end timestamp of each row verson.\n>\n> verson -> version\n>\n> + By default, the column names will be StartTime and EndTime, though\n> + you can specify different names if you choose.\n>\n> In fact, it is start_time and end_time, not StartTime and EndTime.\n> I think it's better to use <literal> label around start_time and end_time.\n>\n> + column will be automatically added to the Primary Key of the\n> + table.\n>\n> Should we mention the unique constraints?\n>\n> + The system versioning period end column will be added to the\n> + Primary Key of the table as a way of ensuring that concurrent\n> + INSERTs conflict correctly.\n>\n> Same as above.\n>\n> Since the get_row_start_time_col_name() and get_row_end_time_col_name()\n> are similar, IMO we can pass a flag to get StartTime/EndTime column name,\n> thought?\n>\n> --\n> Regrads,\n> Japin Li.\n> ChengDu WenWu Information Technology Co.,Ltd.\n>\n> The patch (system-versioning-temporal-table_2021_v13.patch) does not apply\nsuccessfully.\n\nhttp://cfbot.cputube.org/patch_32_2316.log\n\nHunk #1 FAILED at 80.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/test/regress/parallel_schedule.rej\npatching file src/test/regress/serial_schedule\nHunk #1 succeeded at 126 (offset -1 lines).\n\n\nTherefore it is a minor change so I rebased the patch, please take a look\nat that.\n\n-- \nIbrar Ahmed", "msg_date": "Mon, 8 Mar 2021 22:33:24 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "hi Ibrar,\nthank you for rebasing\n\nOn Mon, Mar 8, 2021 at 9:34 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>> Since the get_row_start_time_col_name() and 
get_row_end_time_col_name()\n>> are similar, IMO we can pass a flag to get StartTime/EndTime column name,\n>> thought?\n>>\n>>\nFor me your option is better. i will change to it in my next\npatch if no objection\n\n\nregards\nSurafel\n\nhi Ibrar,thank you for rebasingOn Mon, Mar 8, 2021 at 9:34 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\nSince the get_row_start_time_col_name() and get_row_end_time_col_name()\nare similar, IMO we can pass a flag to get StartTime/EndTime column name,\nthought?\nFor me your option is better.  i will change to it \n\nin my nextpatch if no objection  regards Surafel", "msg_date": "Wed, 10 Mar 2021 08:49:16 -0800", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 3/10/21 5:49 PM, Surafel Temesgen wrote:\n> hi Ibrar,\n> thank you for rebasing\n> \n> On Mon, Mar 8, 2021 at 9:34 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> \n>>\n>>> Since the get_row_start_time_col_name() and get_row_end_time_col_name()\n>>> are similar, IMO we can pass a flag to get StartTime/EndTime column name,\n>>> thought?\n>>>\n>>>\n> For me your option is better. i will change to it in my next\n> patch if no objection\n\nI have plenty of objection. I'm sorry that I am taking so long with my\nreview. I am still working on it and it is coming soon, I promise.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 10 Mar 2021 18:02:53 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Wed, Mar 10, 2021 at 9:02 AM Vik Fearing <vik@postgresfriends.org> wrote:\n\n>\n> I have plenty of objection. I'm sorry that I am taking so long with my\n> review. 
I am still working on it and it is coming soon, I promise.\n>\n>\nokay take your time\n\nregards\nSurafel\n\nOn Wed, Mar 10, 2021 at 9:02 AM Vik Fearing <vik@postgresfriends.org> wrote:\nI have plenty of objection.  I'm sorry that I am taking so long with my\nreview.  I am still working on it and it is coming soon, I promise.okay take your time  regards Surafel", "msg_date": "Thu, 11 Mar 2021 06:14:50 -0800", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Mon, Mar 8, 2021 at 11:04 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>\n>\n> On Thu, Feb 25, 2021 at 3:28 PM Li Japin <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Jan 27, 2021, at 12:39 AM, Surafel Temesgen <surafel3000@gmail.com> wrote:\n>>\n>>\n>>\n>> On Tue, Jan 26, 2021 at 2:33 PM Vik Fearing <vik@postgresfriends.org> wrote:\n>>>\n>>> I'm still in the weeds of reviewing this patch, but why should this\n>>> fail? It should not fail.\n>>\n>>\n>> Attached is rebased patch that include isolation test\n>>\n>>\n>> Thanks for updating the patch. 
However it cannot apply to master (e5d8a9990).\n>>\n>> Here are some comments on system-versioning-temporal-table_2021_v13.patch.\n>>\n>> +</programlisting>\n>> + When system versioning is specified two columns are added which\n>> + record the start timestamp and end timestamp of each row verson.\n>>\n>> verson -> version\n>>\n>> + By default, the column names will be StartTime and EndTime, though\n>> + you can specify different names if you choose.\n>>\n>> In fact, it is start_time and end_time, not StartTime and EndTime.\n>> I think it's better to use <literal> label around start_time and end_time.\n>>\n>> + column will be automatically added to the Primary Key of the\n>> + table.\n>>\n>> Should we mention the unique constraints?\n>>\n>> + The system versioning period end column will be added to the\n>> + Primary Key of the table as a way of ensuring that concurrent\n>> + INSERTs conflict correctly.\n>>\n>> Same as above.\n>>\n>> Since the get_row_start_time_col_name() and get_row_end_time_col_name()\n>> are similar, IMO we can pass a flag to get StartTime/EndTime column name,\n>> thought?\n>>\n>> --\n>> Regrads,\n>> Japin Li.\n>> ChengDu WenWu Information Technology Co.,Ltd.\n>>\n> The patch (system-versioning-temporal-table_2021_v13.patch) does not apply successfully.\n>\n> http://cfbot.cputube.org/patch_32_2316.log\n>\n> Hunk #1 FAILED at 80.\n> 1 out of 1 hunk FAILED -- saving rejects to file src/test/regress/parallel_schedule.rej\n> patching file src/test/regress/serial_schedule\n> Hunk #1 succeeded at 126 (offset -1 lines).\n>\n>\n> Therefore it is a minor change so I rebased the patch, please take a look at that.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. 
I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 14 Jul 2021 17:18:46 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Wed, 14 Jul 2021 at 12:48, vignesh C <vignesh21@gmail.com> wrote:\n\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nOK, so I've rebased the patch against current master to take it to v15.\n\nI've then worked on the patch some myself to make v16 (attached),\nadding these things:\n\n* Add code, docs and test to remove the potential anomaly where\nendtime < starttime, using the sqlstate 2201H as pointed out by Vik\n* Add code and tests to handle multiple changes in a transaction\ncorrectly, according to SQL Std\n* Add code and tests to make Foreign Keys behave correctly, according to SQL Std\n* Fixed nascent bug in relcache setup code\n* Various small fixes from Japin's review - thanks! I've used\nstarttime and endtime as default column names\n* Additional tests and docs to show that the functionality works with\nor without PKs on the table\n\nI am now satisfied that the patch does not have any implementation\nanomalies in behavioral design, but it is still a long way short in\ncode architecture.\n\nThere are various aspects still needing work. This is not yet ready\nfor Commit, but it is appropriate now to ask for initial design\nguidance on architecture and code placement by a Committer, so I am\nsetting this to Ready For Committer, in the hope that we get the\nreview in SeptCF and a later version can be submitted for later commit\nin JanCF. 
With the right input, this patch is about a person-month\naway from being ready, assuming we don't hit any blocking issues.\n\nMajor Known Issues\n* SQLStd says that it should not be possible to update historical\nrows, but those tests show we fail to prevent that and there is code\nmarked NOT_USED in those areas\n* The code is structured poorly around\nparse-analyze/rewriter/optimizer/executor and that needs positive\ndesign recommendations, rather than critical review\n* Joins currently fail because of the botched way WHERE clauses are\nadded, resulting in duplicate names\n* Views probably don't work, but there are no tests\n* CREATE TABLE (LIKE foo) doesn't correctly copy across all features -\ntest for that added, with test failure accepted for now\n* ALTER TABLE is still incomplete and also broken; I suggest we remove\nthat for the first version of the patch to reduce patch size for an\ninitial commit.\n\nMinor Known Issues\n* Logical replication needs some minor work, no tests yet\n* pg_dump support looks like it exists and might work easily, but\nthere are no tests yet\n* Correlation names don't work in FROM clause - shift/reduce errors\nfrom double use of AS\n* Add test and code to prevent triggers referencing period cols in the\nWHEN clause\n* No tests yet to prove you can't set various parameters/settings on\nthe period time start/end cols\n* Code needs some cleanup in a few places\n* Not really sure what value is added by\nlock-update-delete-system-versioned.spec\n\n* IMHO we should make the PK definition use \"endtime DESC\", so that\nthe current version is always the first row found in the PK for any\nkey, since historical indexes will grow bigger over time\n\nThere are no expected issues with integration with MERGE, since SQLStd\nexplains how to handle that.\n\nOther reviews are welcome.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Tue, 10 Aug 2021 13:20:14 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", 
"msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi,\r\n\r\nquick note: The documentation for this patch mentions:\r\n\r\n The <literal>starttime</literal> column\r\n+ will be automatically added to the Primary Key of the table.\r\n\r\nA quick tests shows that the endtime column is added instead:\r\n\r\npostgres=# create table t1 ( a int primary key generated always as identity, b text ) with system versioning;\r\nCREATE TABLE\r\npostgres=# \\d t1\r\n Table \"public.t1\"\r\n Column | Type | Collation | Nullable | Default \r\n-----------+--------------------------+-----------+----------+-------------------------------\r\n a | integer | | not null | generated always as identity\r\n b | text | | | \r\n starttime | timestamp with time zone | | not null | generated always as row start\r\n endtime | timestamp with time zone | | not null | generated always as row end\r\nIndexes:\r\n \"t1_pkey\" PRIMARY KEY, btree (a, endtime)\r\n\r\nRegards\r\nDaniel", "msg_date": "Tue, 24 Aug 2021 13:18:34 +0000", "msg_from": "Daniel Westermann <dwe@dbi-services.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Hi,\n\nThis doesn't pass tests because of lack of some file. Can we fix that\nplease and send the patch again?\n\nOn Tue, Aug 10, 2021 at 7:20 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Wed, 14 Jul 2021 at 12:48, vignesh C <vignesh21@gmail.com> wrote:\n>\n> > The patch does not apply on Head anymore, could you rebase and post a\n> > patch. 
I'm changing the status to \"Waiting for Author\".\n>\n> OK, so I've rebased the patch against current master to take it to v15.\n>\n> I've then worked on the patch some myself to make v16 (attached),\n> adding these things:\n>\n> * Add code, docs and test to remove the potential anomaly where\n> endtime < starttime, using the sqlstate 2201H as pointed out by Vik\n> * Add code and tests to handle multiple changes in a transaction\n> correctly, according to SQL Std\n> * Add code and tests to make Foreign Keys behave correctly, according to\n> SQL Std\n> * Fixed nascent bug in relcache setup code\n> * Various small fixes from Japin's review - thanks! I've used\n> starttime and endtime as default column names\n> * Additional tests and docs to show that the functionality works with\n> or without PKs on the table\n>\n> I am now satisfied that the patch does not have any implementation\n> anomalies in behavioral design, but it is still a long way short in\n> code architecture.\n>\n> There are various aspects still needing work. This is not yet ready\n> for Commit, but it is appropriate now to ask for initial design\n> guidance on architecture and code placement by a Committer, so I am\n> setting this to Ready For Committer, in the hope that we get the\n> review in SeptCF and a later version can be submitted for later commit\n> in JanCF. 
With the right input, this patch is about a person-month\n> away from being ready, assuming we don't hit any blocking issues.\n>\n> Major Known Issues\n> * SQLStd says that it should not be possible to update historical\n> rows, but those tests show we fail to prevent that and there is code\n> marked NOT_USED in those areas\n> * The code is structured poorly around\n> parse-analyze/rewriter/optimizer/executor and that needs positive\n> design recommendations, rather than critical review\n> * Joins currently fail because of the botched way WHERE clauses are\n> added, resulting in duplicate names\n> * Views probably don't work, but there are no tests\n> * CREATE TABLE (LIKE foo) doesn't correctly copy across all features -\n> test for that added, with test failure accepted for now\n> * ALTER TABLE is still incomplete and also broken; I suggest we remove\n> that for the first version of the patch to reduce patch size for an\n> initial commit.\n>\n> Minor Known Issues\n> * Logical replication needs some minor work, no tests yet\n> * pg_dump support looks like it exists and might work easily, but\n> there are no tests yet\n> * Correlation names don't work in FROM clause - shift/reduce errors\n> from double use of AS\n> * Add test and code to prevent triggers referencing period cols in the\n> WHEN clause\n> * No tests yet to prove you can't set various parameters/settings on\n> the period time start/end cols\n> * Code needs some cleanup in a few places\n> * Not really sure what value is added by\n> lock-update-delete-system-versioned.spec\n>\n> * IMHO we should make the PK definition use \"endtime DESC\", so that\n> the current version is always the first row found in the PK for any\n> key, since historical indexes will grow bigger over time\n>\n> There are no expected issues with integration with MERGE, since SQLStd\n> explains how to handle that.\n>\n> Other reviews are welcome.\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>\n\n\n--\n\nHi,This doesn't pass 
tests because of lack of some file. Can we fix that please and send the patch again?On Tue, Aug 10, 2021 at 7:20 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:On Wed, 14 Jul 2021 at 12:48, vignesh C <vignesh21@gmail.com> wrote:\n\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nOK, so I've rebased the patch against current master to take it to v15.\n\nI've then worked on the patch some myself to make v16 (attached),\nadding these things:\n\n* Add code, docs and test to remove the potential anomaly where\nendtime < starttime, using the sqlstate 2201H as pointed out by Vik\n* Add code and tests to handle multiple changes in a transaction\ncorrectly, according to SQL Std\n* Add code and tests to make Foreign Keys behave correctly, according to SQL Std\n* Fixed nascent bug in relcache setup code\n* Various small fixes from Japin's review - thanks! I've used\nstarttime and endtime as default column names\n* Additional tests and docs to show that the functionality works with\nor without PKs on the table\n\nI am now satisfied that the patch does not have any implementation\nanomalies in behavioral design, but it is still a long way short in\ncode architecture.\n\nThere are various aspects still needing work. This is not yet ready\nfor Commit, but it is appropriate now to ask for initial design\nguidance on architecture and code placement by a Committer, so I am\nsetting this to Ready For Committer, in the hope that we get the\nreview in SeptCF and a later version can be submitted for later commit\nin JanCF. 
With the right input, this patch is about a person-month\naway from being ready, assuming we don't hit any blocking issues.\n\nMajor Known Issues\n* SQLStd says that it should not be possible to update historical\nrows, but those tests show we fail to prevent that and there is code\nmarked NOT_USED in those areas\n* The code is structured poorly around\nparse-analyze/rewriter/optimizer/executor and that needs positive\ndesign recommendations, rather than critical review\n* Joins currently fail because of the botched way WHERE clauses are\nadded, resulting in duplicate names\n* Views probably don't work, but there are no tests\n* CREATE TABLE (LIKE foo) doesn't correctly copy across all features -\ntest for that added, with test failure accepted for now\n* ALTER TABLE is still incomplete and also broken; I suggest we remove\nthat for the first version of the patch to reduce patch size for an\ninitial commit.\n\nMinor Known Issues\n* Logical replication needs some minor work, no tests yet\n* pg_dump support looks like it exists and might work easily, but\nthere are no tests yet\n* Correlation names don't work in FROM clause - shift/reduce errors\nfrom double use of AS\n* Add test and code to prevent triggers referencing period cols in the\nWHEN clause\n* No tests yet to prove you can't set various parameters/settings on\nthe period time start/end cols\n* Code needs some cleanup in a few places\n* Not really sure what value is added by\nlock-update-delete-system-versioned.spec\n\n* IMHO we should make the PK definition use \"endtime DESC\", so that\nthe current version is always the first row found in the PK for any\nkey, since historical indexes will grow bigger over time\n\nThere are no expected issues with integration with MERGE, since SQLStd\nexplains how to handle that.\n\nOther reviews are welcome.\n\n-- \nSimon Riggs                http://www.EnterpriseDB.com/\n--", "msg_date": "Wed, 1 Sep 2021 11:05:18 -0500", "msg_from": "Jaime Casanova 
<jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Tue, Aug 10, 2021 at 01:20:14PM +0100, Simon Riggs wrote:\n> On Wed, 14 Jul 2021 at 12:48, vignesh C <vignesh21@gmail.com> wrote:\n> \n> > The patch does not apply on Head anymore, could you rebase and post a\n> > patch. I'm changing the status to \"Waiting for Author\".\n> \n> OK, so I've rebased the patch against current master to take it to v15.\n> \n> I've then worked on the patch some myself to make v16 (attached),\n> adding these things:\n> \n\nHi Simon,\n\nThis one doesn't apply nor compile anymore.\nCan we expect a rebase soon?\n\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Fri, 10 Sep 2021 13:30:23 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Fri, 10 Sept 2021 at 19:30, Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Tue, Aug 10, 2021 at 01:20:14PM +0100, Simon Riggs wrote:\n> > On Wed, 14 Jul 2021 at 12:48, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > > The patch does not apply on Head anymore, could you rebase and post a\n> > > patch. 
I'm changing the status to \"Waiting for Author\".\n> >\n> > OK, so I've rebased the patch against current master to take it to v15.\n> >\n> > I've then worked on the patch some myself to make v16 (attached),\n> > adding these things:\n> >\n>\n> Hi Simon,\n>\n> This one doesn't apply nor compile anymore.\n> Can we expect a rebase soon?\n\nHi Jaime,\n\nSorry for not replying.\n\nYes, I will rebase again to assist the design input I have requested.\nPlease expect that on Sep 15.\n\nCheers\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 12 Sep 2021 17:02:31 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Sun, Sep 12, 2021 at 12:02 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Fri, 10 Sept 2021 at 19:30, Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> >\n> > On Tue, Aug 10, 2021 at 01:20:14PM +0100, Simon Riggs wrote:\n> > > On Wed, 14 Jul 2021 at 12:48, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > > The patch does not apply on Head anymore, could you rebase and post a\n> > > > patch. I'm changing the status to \"Waiting for Author\".\n> > >\n> > > OK, so I've rebased the patch against current master to take it to v15.\n> > >\n> > > I've then worked on the patch some myself to make v16 (attached),\n> > > adding these things:\n> > >\n> >\n> > Hi Simon,\n> >\n> > This one doesn't apply nor compile anymore.\n> > Can we expect a rebase soon?\n>\n> Hi Jaime,\n>\n> Sorry for not replying.\n>\n> Yes, I will rebase again to assist the design input I have requested.\n> Please expect that on Sep 15.\n>\n> Cheers\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>\n>\n>\nI've been interested in this patch, especially with how it will\ninteroperate with the work on application periods in\nhttps://www.postgresql.org/message-id/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com\n. 
I've written up a few observations and questions in that thread, and\nwanted to do the same here, as the questions are a bit narrower but no less\ninteresting.\n\n1. Much of what I have read about temporal tables seemed to imply or almost\nassume that system temporal tables would be implemented as two actual\nseparate tables. Indeed, SQLServer appears to do it that way [1] with\nsyntax like\n\nWITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));\n\n\nQ 1.1. Was that implementation considered and if so, what made this\nimplementation more appealing?\n\n2. The endtime column constraint which enforces GENERATED ALWAYS AS ROW END\nseems like it would have appeal outside of system versioning, as a lot of\ntables have a last_updated column, and it would be nice if it could handle\nitself and not rely on fallible application programmers or require trigger\noverhead.\n\nQ 2.1. Is that something we could break out into its own patch?\n\n3. It is possible to have bi-temporal tables (having both a system_time\nperiod and a named application period) as described in [2], the specific\nexample was\n\nCREATE TABLE Emp(\n ENo INTEGER,\n EStart DATE,\n EEnd DATE,\n EDept INTEGER,\n PERIOD FOR EPeriod (EStart, EEnd),\n Sys_start TIMESTAMP(12) GENERATED ALWAYS AS ROW START,\n Sys_end TIMESTAMP(12) GENERATED ALWAYS AS ROW END,\n EName VARCHAR(30),\n PERIOD FOR SYSTEM_TIME(Sys_start, Sys_end),\n PRIMARY KEY (ENo, EPeriod WITHOUT OVERLAPS),\n FOREIGN KEY (Edept, PERIOD EPeriod) REFERENCES Dept (DNo, PERIOD DPeriod)\n) WITH SYSTEM VERSIONING\n\n\nWhat's interesting here is that in the case of a bitemporal table, it was\nthe application period that got the defined primary key. The paper went on\nthat only the _current_ rows of the table needed to be unique for, as it\nwasn't possible to create rows with past system temporal values. This\nsounds like a partial index to me, and luckily postgres can do referential\nintegrity on any unique index, not just primary keys. 
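To make the partial-index idea above concrete (a sketch with hypothetical table and
index names, not taken from the patch), stock PostgreSQL can already do both halves of
this today:

```sql
-- Enforce key uniqueness only among *current* row versions: historical
-- rows (endtime <> 'infinity') may repeat the key freely.
CREATE TABLE emp (
    eno       integer NOT NULL,
    ename     varchar(30),
    starttime timestamptz NOT NULL DEFAULT now(),
    endtime   timestamptz NOT NULL DEFAULT 'infinity'
);

CREATE UNIQUE INDEX emp_current_key ON emp (eno)
    WHERE endtime = 'infinity';

-- A foreign key can target any full (non-partial) unique constraint,
-- not just a primary key:
CREATE TABLE dept (
    dno   integer UNIQUE,
    dname text
);

CREATE TABLE emp_assignment (
    eno   integer,
    edept integer REFERENCES dept (dno)
);
```

One caveat relevant to Q 3.2 below: while REFERENCES works against any plain unique
constraint, current PostgreSQL releases do not allow a foreign key to target a
*partial* unique index, so the implied "and endtime = 'infinity'" filter would still
need special handling on the referenced side.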
In light of the\nassumption of a history side-table, I guess I shouldn't be surprised.\n\nQ 3.1. Do you think that it would be possible to implement system\nversioning with just a unique index?\nQ 3.2. Are there any barriers to using a partial index as the hitch for a\nforeign key? Would it be any different than the implied \"and endtime =\n'infinity'\" that's already being done?\n\n4. The choice of 'infinity' seemed like a good one initially - it's not\nnull so it can be used in a primary key, it's not some hackish magic date\nlike SQLServer's '9999-12-31 23:59:59.9999999'. However, it may not jibe as\nwell with application versioning, which is built very heavily upon range\ntypes (and multirange types), and those ranges are capable of saying that a\nrecord is valid for an unbounded amount of time in the future, that's\nrepresented with NULL, not infinity. It could be awkward to have the system\nendtime be infinity and the application period endtime be NULL.\n\nQ 4.1. Do you have any thoughts about how to resolve this?\n\n5. System versioning columns were indicated with additional columns in\npg_attribute.\n\nQ 5.1. If you were to implement application versioning yourself, would you\njust add additional columns to pg_attribute for those?\n\n6. The current effort to implement application versioning creates an\nINFORMATION_SCHEMA view called PERIODS. I wasn't aware of this one before\nbut there seems to be precedent for it existing.\n\nQ 6.1. Would system versioning belong in such a view?\n\n7. This is a trifle, but the documentation is inconsistent about starttime\nvs StartTime and endtime vs EndTime.\n\n8. 
Overall, I'm really excited about both of these efforts, and I'm looking\nfor ways to combine the efforts, perhaps starting with a patch that\nimplements the SQL syntax, but raises not-implemented errors, and each\neffort could then build off of that.\n\n[1] https://docs.microsoft.com/en-us/azure/azure-sql/temporal-tables\n[2]\nhttps://cs.ulb.ac.be/public/_media/teaching/infoh415/tempfeaturessql2011.pdf\n\nOn Sun, Sep 12, 2021 at 12:02 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:On Fri, 10 Sept 2021 at 19:30, Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Tue, Aug 10, 2021 at 01:20:14PM +0100, Simon Riggs wrote:\n> > On Wed, 14 Jul 2021 at 12:48, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > > The patch does not apply on Head anymore, could you rebase and post a\n> > > patch. I'm changing the status to \"Waiting for Author\".\n> >\n> > OK, so I've rebased the patch against current master to take it to v15.\n> >\n> > I've then worked on the patch some myself to make v16 (attached),\n> > adding these things:\n> >\n>\n> Hi Simon,\n>\n> This one doesn't apply nor compile anymore.\n> Can we expect a rebase soon?\n\nHi Jaime,\n\nSorry for not replying.\n\nYes, I will rebase again to assist the design input I have requested.\nPlease expect that on Sep 15.\n\nCheers\n\n-- \nSimon Riggs                http://www.EnterpriseDB.com/\n", "msg_date": "Mon, 13 Sep 2021 02:45:04 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": ">\n>\n>\n>\n> 1. Much of what I have read about temporal tables seemed to imply or\n> almost assume that system temporal tables would be implemented as two\n> actual separate tables. 
Indeed, SQLServer appears to do it that way [1]\n> with syntax like\n>\n> WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));\n>\n>\n> Q 1.1. Was that implementation considered and if so, what made this\n> implementation more appealing?\n>\n>\nI've been digging some more on this point, and I've reached the conclusion\nthat a separate history table is the better implementation. It would make\nthe act of removing system versioning into little more than a DROP TABLE,\nplus adjusting the base table to reflect that it is no longer system\nversioned.\n\nWhat do you think of this method:\n\n1. The regular table remains unchanged, but a pg_class attribute named\n\"relissystemversioned\" would be set to true\n2. I'm unsure if the standard allows dropping a column from a table while\nit is system versioned, and the purpose behind system versioning makes me\nbelieve the answer is a strong \"no\", and requiring DROP COLUMN to fail\non relissystemversioned = 't' seems pretty straightforward.\n3. The history table would be given a default name of $FOO_history (space\npermitting), but could be overridden with the history_table option.\n4. The history table would have relkind = 'h'\n5. The history table will only have rows that are not current, so it is\ncreated empty.\n6. As such, the table is effectively append-only, in a way that vacuum can\nactually leverage, and likewise the fill factor of such a table should\nnever be less than 100.\n7. The history table could be updated only via system defined triggers\n(insert, update, delete, alter to add columns), or row migration similar to\nthat found in partitioning. It seems like this would work as if the two\ntables were partitions of the same table, but presently we can't have\nmulti-parent partitions.\n8. The history table would be indexed the same as the base table, except\nthat all unique indexes would be made non-unique, and an index of pk +\nstart_time + end_time would be added\n9. 
The primary key of the base table would remain the existing pk vals, and\nwould basically function normally, with triggers to carry forth changes to\nthe history table. The net effect of this is that the end_time value of all\nrows in the main table would always be the chosen \"current\" value\n(infinity, null, 9999-12-31, etc) and as such might not actually _need_ to\nbe stored.\n10. Queries that omit the FOR SYSTEM_TIME clause, as well as ones that use\nFOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP, would simply use the base table\ndirectly with no quals to add.\n11. Queries that use FOR SYSTEM_TIME and not FOR SYSTEM_TIME AS\nOF CURRENT_TIMESTAMP, then the query would do a union of the base table and\nthe history table with quals applied to both.\n12. It's a fair question whether the history table would be something that\ncould be queried directly. I'm inclined to say no, because that allows for\nthings like SELECT FOR UPDATE, which of course we'd have to reject.\n13. If a history table is directly referenceable, then SELECT permission\ncan be granted or revoked as normal, but all insert/update/delete/truncate\noptions would raise an error.\n14. DROP SYSTEM VERSIONING from a table would be quite straightforward -\nthe history table would be dropped along with the triggers that reference\nit, setting relissystemversioned = 'f' on the base table.\n\nI think this would have some key advantages:\n\n1. MVCC bloat is no worse than it was before.\n2. No changes whatsoever to referential integrity.\n3. DROP SYSTEM VERSIONING becomes an O(1) operation.\n\nThoughts?\n\nI'm going to be making a similar proposal to the people doing the\napplication time effort, but I'm very much hoping that we can reach some\nconsensus and combine efforts.\n", "msg_date": "Sat, 18 Sep 2021 20:15:52 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Sun, 19 Sept 2021 at 01:16, Corey Huinker <corey.huinker@gmail.com> wrote:\n>>\n>> 1. Much of what I have read about temporal tables seemed to imply or almost assume that system temporal tables would be implemented as two actual separate tables. Indeed, SQLServer appears to do it that way [1] with syntax like\n>>\n>> WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));\n>>\n>>\n>> Q 1.1. Was that implementation considered and if so, what made this implementation more appealing?\n>>\n>\n> I've been digging some more on this point, and I've reached the conclusion that a separate history table is the better implementation. It would make the act of removing system versioning into little more than a DROP TABLE, plus adjusting the base table to reflect that it is no longer system versioned.\n\nThanks for giving this a lot of thought. When you asked the question\nthe first time you hadn't discussed how that might work, but now we\nhave something to discuss.\n\n> 10. Queries that omit the FOR SYSTEM_TIME clause, as well as ones that use FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP, would simply use the base table directly with no quals to add.\n> 11. Queries that use FOR SYSTEM_TIME and not FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP, then the query would do a union of the base table and the history table with quals applied to both.\n\n\n> 14. 
DROP SYSTEM VERSIONING from a table would be quite straightforward - the history table would be dropped along with the triggers that reference it, setting relissystemversioned = 'f' on the base table.\n>\n> I think this would have some key advantages:\n>\n> 1. MVCC bloat is no worse than it was before.\n\nThe number of row versions stored in the database is the same for\nboth, just it would be split across two tables in this form.\n\n> 2. No changes whatsoever to referential integrity.\n\nThe changes were fairly minor, but I see your thinking about indexes\nas a simplification.\n\n> 3. DROP SYSTEM VERSIONING becomes an O(1) operation.\n\nIt isn't top of mind to make this work well. The whole purpose of the\nhistory is to keep it, not to be able to drop it quickly.\n\n\n> Thoughts?\n\nThere are 3 implementation routes that I see, so let me explain so\nthat others can join the discussion.\n\n1. Putting all data in one table. This makes DROP SYSTEM VERSIONING\neffectively impossible. It requires access to the table to be\nrewritten to add in historical quals for non-historical access and it\nrequires some push-ups around indexes. (The current patch adds the\nhistoric quals by kludging the parser, which is wrong place, since it\ndoesn't work for joins etc.. However, given that issue, the rest seems\nto follow on naturally).\n\n2. Putting data in a side table. This makes DROP SYSTEM VERSIONING\nfairly trivial, but it complicates many DDL commands (please make a\nlist?) and requires the optimizer to know about this and cater to it,\npossibly complicating plans. Neither issue is insurmountable, but it\nbecomes more intrusive.\n\nThe current patch could go in either of the first 2 directions with\nfurther work.\n\n3. Let the Table Access Method handle it. 
I call this out separately\nsince it avoids making changes to the rest of Postgres, which might be\na good thing, with the right TAM implementation.\n\nMy preferred approach would be to do this \"for free\" in the table\naccess method, but we're a long way from this in terms of actual\nimplementation. When Corey suggested earlier that we just put the\nsyntax in there, this was the direction I was thinking.\n\nAfter waiting a day since I wrote the above, I think we should go with\n(2) as Corey suggests, at least for now, and we can always add (3)\nlater.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 19 Sep 2021 19:32:20 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "A side table has the nice additional benefit that we can very easily\nversion the *table structure* so when we ALTER TABLE and the table\nstructure changes we just make a new side table with the now-current\nstructure.\n\nAlso we may want a different set of indexes on historic table(s) for\nwhatever reason\n\nAnd we may even want to partition history tables for speed, storage\ncost or just to drop very ancient history\n\n-----\nHannu Krosing\nGoogle Cloud - We have a long list of planned contributions and we are hiring.\nContact me if interested.\n\nOn Sun, Sep 19, 2021 at 8:32 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Sun, 19 Sept 2021 at 01:16, Corey Huinker <corey.huinker@gmail.com> wrote:\n> >>\n> >> 1. Much of what I have read about temporal tables seemed to imply or almost assume that system temporal tables would be implemented as two actual separate tables. Indeed, SQLServer appears to do it that way [1] with syntax like\n> >>\n> >> WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));\n> >>\n> >>\n> >> Q 1.1. 
Was that implementation considered and if so, what made this implementation more appealing?\n> >>\n> >\n> > I've been digging some more on this point, and I've reached the conclusion that a separate history table is the better implementation. It would make the act of removing system versioning into little more than a DROP TABLE, plus adjusting the base table to reflect that it is no longer system versioned.\n>\n> Thanks for giving this a lot of thought. When you asked the question\n> the first time you hadn't discussed how that might work, but now we\n> have something to discuss.\n>\n> > 10. Queries that omit the FOR SYSTEM_TIME clause, as well as ones that use FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP, would simply use the base table directly with no quals to add.\n> > 11. Queries that use FOR SYSTEM_TIME and not FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP, then the query would do a union of the base table and the history table with quals applied to both.\n>\n>\n> > 14. DROP SYSTEM VERSIONING from a table would be quite straightforward - the history table would be dropped along with the triggers that reference it, setting relissystemversioned = 'f' on the base table.\n> >\n> > I think this would have some key advantages:\n> >\n> > 1. MVCC bloat is no worse than it was before.\n>\n> The number of row versions stored in the database is the same for\n> both, just it would be split across two tables in this form.\n>\n> > 2. No changes whatsoever to referential integrity.\n>\n> The changes were fairly minor, but I see your thinking about indexes\n> as a simplification.\n>\n> > 3. DROP SYSTEM VERSIONING becomes an O(1) operation.\n>\n> It isn't top of mind to make this work well. The whole purpose of the\n> history is to keep it, not to be able to drop it quickly.\n>\n>\n> > Thoughts?\n>\n> There are 3 implementation routes that I see, so let me explain so\n> that others can join the discussion.\n>\n> 1. Putting all data in one table. 
This makes DROP SYSTEM VERSIONING\n> effectively impossible. It requires access to the table to be\n> rewritten to add in historical quals for non-historical access and it\n> requires some push-ups around indexes. (The current patch adds the\n> historic quals by kludging the parser, which is wrong place, since it\n> doesn't work for joins etc.. However, given that issue, the rest seems\n> to follow on naturally).\n>\n> 2. Putting data in a side table. This makes DROP SYSTEM VERSIONING\n> fairly trivial, but it complicates many DDL commands (please make a\n> list?) and requires the optimizer to know about this and cater to it,\n> possibly complicating plans. Neither issue is insurmountable, but it\n> becomes more intrusive.\n>\n> The current patch could go in either of the first 2 directions with\n> further work.\n>\n> 3. Let the Table Access Method handle it. I call this out separately\n> since it avoids making changes to the rest of Postgres, which might be\n> a good thing, with the right TAM implementation.\n>\n> My preferred approach would be to do this \"for free\" in the table\n> access method, but we're a long way from this in terms of actual\n> implementation. When Corey suggested earlier that we just put the\n> syntax in there, this was the direction I was thinking.\n>\n> After waiting a day since I wrote the above, I think we should go with\n> (2) as Corey suggests, at least for now, and we can always add (3)\n> later.\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>\n>\n\n\n", "msg_date": "Sun, 19 Sep 2021 21:12:37 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": ">\n> Thanks for giving this a lot of thought. When you asked the question\n> the first time you hadn't discussed how that might work, but now we\n> have something to discuss.\n>\n\nMy ultimate goal is to unify this effort with the application period\neffort. 
Step 1 in that was to understand what each was doing and why they\nwere doing it. If you check out the other thread, you'll see a highly\nsimilar message that I sent over there.\n\n\n> There are 3 implementation routes that I see, so let me explain so\n> that others can join the discussion.\n>\n> 1. Putting all data in one table. This makes DROP SYSTEM VERSIONING\n> effectively impossible. It requires access to the table to be\n> rewritten to add in historical quals for non-historical access and it\n> requires some push-ups around indexes. (The current patch adds the\n> historic quals by kludging the parser, which is wrong place, since it\n> doesn't work for joins etc.. However, given that issue, the rest seems\n> to follow on naturally).\n>\n> 2. Putting data in a side table. This makes DROP SYSTEM VERSIONING\n> fairly trivial, but it complicates many DDL commands (please make a\n> list?) and requires the optimizer to know about this and cater to it,\n> possibly complicating plans. Neither issue is insurmountable, but it\n> becomes more intrusive.\n>\n> The current patch could go in either of the first 2 directions with\n> further work.\n>\n> 3. Let the Table Access Method handle it. I call this out separately\n> since it avoids making changes to the rest of Postgres, which might be\n> a good thing, with the right TAM implementation.\n>\n\nI'd like to hear more about this idea number 3.\n\nI could see value in allowing the history table to be a foreign table,\nperhaps writing to csv/parquet/whatever files, and that sort of setup could\nbe persuasive to a regulator who wants extra-double-secret-proof that\nauditing cannot be tampered with. But with that we'd have to give up the\nrelkind idea, which itself was going to be a cheap way to prevent updates\noutside of the system triggers.\n", "msg_date": "Mon, 20 Sep 2021 00:57:11 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Sun, Sep 19, 2021 at 3:12 PM Hannu Krosing <hannuk@google.com> wrote:\n\n> A side table has the nice additional benefit that we can very easily\n> version the *table structure* so when we ALTER TABLE and the table\n> structure changes we just make a new side table with now-currents\n> structure.\n>\n\nIt's true that would allow for perfect capture of changes to the table\nstructure, but how would you query the thing?\n\nIf a system versioned table was created with a column foo that is type\nfloat, and then we dropped that column, how would we ever query what the\nvalue of foo was in the past?\n\nWould the columns returned from SELECT * change based on the timeframe\nrequested?\n\nIf we then later added another column that happened to also be named foo\nbut now was type JSONB, would we change the datatype returned based on the\ntime period being queried?\n\nIs the change in structure a system versioning which itself must be\ncaptured?\n\n\n> Also we may want different set of indexes on historic table(s) for\n> whatever reason\n>\n\n+1\n\n\n>\n> And we may even want to partition history tables for speed, storage\n> cost or just to drop very ancient history\n>\n\n+1\n", "msg_date": "Mon, 20 Sep 2021 01:09:46 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Mon, Sep 20, 2021 at 7:09 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n>\n> On Sun, Sep 19, 2021 at 3:12 PM Hannu Krosing <hannuk@google.com> wrote:\n>>\n>> A side table has the nice additional benefit that we can very easily\n>> version the *table structure* so when we ALTER TABLE and the table\n>> structure changes we just make a new side table with now-currents\n>> structure.\n>\n>\n> It's true that would allow for perfect capture of changes to the table structure, but how would you query the thing?\n>\n> If a system versioned table was created with a column foo that is type float, and then we dropped that column, how would we ever query what the value of foo was in the past?\n\n\nWe can query that thing only in tables AS OF the time when the column\nwas still there.\n\nWe probably could get away with pretending the dropped columns are\nNULL in newer versions (and the versions before the column was added)\n\nEven more tricky case would be changing the column data type.\n\n>\n> Would the columns returned from SELECT * change based on the timeframe requested?\n\n\nIf we want to emulate real table history, then 
it should.\n\nBut the * thing was not really specified well even for original\nPostgreSQL inheritance.\n\nMaybe we could do SELECT (* AS OF 'yesterday afternoon'::timestamp) FROM ... :)\n\n> If we then later added another column that happened to also be named foo but now was type JSONB, would we change the datatype returned based on the time period being queried?\n\nMany databases do allow returning multiple result sets, and actually\nthe PostgreSQL wire *protocol* also theoretically supports this, just\nthat it is not supported by any current client library.\n\nWith current libraries it would be possible to return a dynamic\nversion of row_to_json(t.*) which changes based on the actual historical\ntable structure.\n\n> Is the change in structure a system versioning which itself must be captured?\n\nWe do capture it (kind of) for logical decoding. That is, we decode\naccording to the structure in effect at the time of row creation,\nthough we currently miss the actual DDL itself.\n\n\nSo there is a lot to figure out and define, but at least storing the\nhistory in a separate table gives a good foundation to build upon.\n\n\n\n-----\nHannu Krosing\nGoogle Cloud - We have a long list of planned contributions and we are hiring.\nContact me if interested.\n\n\n", "msg_date": "Mon, 20 Sep 2021 11:49:25 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "> On 19 Sep 2021, at 20:32, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> My preferred approach would be to do this \"for free\" in the table\n> access method, but we're a long way from this in terms of actual\n> implementation. 
When Corey suggested earlier that we just put the\n> syntax in there, this was the direction I was thinking.\n> \n> After waiting a day since I wrote the above, I think we should go with\n> (2) as Corey suggests, at least for now, and we can always add (3)\n> later.\n\nThis patch no longer applies, are there plans on implementing the approaches\ndiscussed above, or should we close this entry and open a new one when a\nfreshly baked patch is ready?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 15 Nov 2021 10:47:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 11/15/21 10:47 AM, Daniel Gustafsson wrote:\n>> On 19 Sep 2021, at 20:32, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> \n>> My preferred approach would be to do this \"for free\" in the table\n>> access method, but we're a long way from this in terms of actual\n>> implementation. 
When Corey suggested earlier that we just put the\n>> syntax in there, this was the direction I was thinking.\n>>\n>> After waiting a day since I wrote the above, I think we should go with\n>> (2) as Corey suggests, at least for now, and we can always add (3)\n>> later.\n> \n> This patch no longer applies, are there plans on implementing the approaches\n> discussed above, or should we close this entry and open a new one when a\n> freshly baked patch is ready?\n\nI spent a lot of time a while ago trying to fix this patch (not just\nrebase it), and I think it should just be rejected, unfortunately.\n\nThe design decisions are just too flawed, and it conflates system_time\nperiods with system versioning which is very wrong.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 15 Nov 2021 10:50:48 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On Mon, 15 Nov 2021 at 09:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 19 Sep 2021, at 20:32, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > My preferred approach would be to do this \"for free\" in the table\n> > access method, but we're a long way from this in terms of actual\n> > implementation. When Corey suggested earlier that we just put the\n> > syntax in there, this was the direction I was thinking.\n> >\n> > After waiting a day since I wrote the above, I think we should go with\n> > (2) as Corey suggests, at least for now, and we can always add (3)\n> > later.\n>\n> This patch no longer applies, are there plans on implementing the approaches\n> discussed above, or should we close this entry and open a new one when a\n> freshly baked patch is ready?\n\nAs I mentioned upthread, there are at least 2 different ways forward\n(1) and (2), both of which need further work. 
I don't think that\nadditional work is impossible, but it is weeks of work, not days and\nit needs to be done in collaboration with other thoughts on other\nthreads Corey refers to.\n\nI have no plans on taking this patch further, but will give some help\nto anyone that wishes to do that.\n\nI suggest we Return with Feedback.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 15 Nov 2021 10:50:00 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "> On 15 Nov 2021, at 11:50, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> I have no plans on taking this patch further, but will give some help\n> to anyone that wishes to do that.\n> \n> I suggest we Return with Feedback.\n\nFair enough, done that way.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 16 Nov 2021 13:31:51 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Chiming in as a user, not so much a developer - I've been using system\nversioned tables in MariaDB for about half a year now, would just like\nto add some feedback about what they did right and wrong and how PG\ncould learn from their mistakes & successes.\n\n> 2. Putting data in a side table. This makes DROP SYSTEM VERSIONING\n> fairly trivial, but it complicates many DDL commands (please make a\n> list?) and requires the optimizer to know about this and cater to it,\n> possibly complicating plans. Neither issue is insurmountable, but it\n> becomes more intrusive.\n\nI'd vouch for this being the way to go; you completely sidestep issues\nlike partitioning, unique constraints, optimization, etc. Especially\ntrue when 90% of the time, SELECTs will only be looking at\ncurrently-active data. 
MDB seems to have gone with the single-table\napproach (unless you partition) and I've run into a bug where I can't\nadd a unique constraint because historical data fails.\n\n#### System versioning & Application versioning\nI saw that there is an intent to harmonize system versioning with\napplication versioning. Haven't read the AV thread so not positive if\nthat meant intending to split tables by application versioning and\nsystem versioning both: to me it seems like maybe it would be good to\nuse a separate table for SV, but keep AV in the same table. Reasons\ninclude:\n\n- ISO states only one AV config per table, but there's no reason this\nalways has to be the case; maybe you're storing products that are\nactive for a period of time, EOL for a period of time, and obsolete\nfor a period of time. If ISO sometime decides >1 AV config is OK,\nthere would be a mess trying to split that into tables.\n- DB users who are allowed to change AV items likely won't be allowed\nto rewrite history by changing SV items. My proposed schema would keep\nthese separate.\n- Table schemas change, and all (SV active) AV items would logically\nneed to fit the active schema or be updated to do so. Different story\nfor SV, nothing there should ever need to be changed.\n- Partitioning for AV tables isn't as clear as with SV and is likely\nbetter to be user-defined\n\nSorry for acronyms, SV=system versioning, AV=application versioning\n\nIn general, I think AV should be treated literally as extra rows in\nthe main DB, plus the extra PK element and shortcut functions. SV\nthough, needs to have a lot more nuance.\n\n#### ALTER TABLE\nOn to ideas about how ALTER TABLE could work. I don't think the\nquestion was ever answered \"Do schema changes need to be tracked?\" I'm\ngenerally in favor of saying that it should be possible to recreate\nthe table exactly as it was, schema and all, at a specific period of\ntime (perhaps for a view) using a fancy combination of SELECT ... 
AS\nand such - but it doesn't need to be straightforward. In any case, no\ndata should ever be deleted by ALTER TABLE. As someone pointed out\nearlier, speed and storage space of ALTER TABLE are likely low\nconsiderations for system versioned tables.\n\n- ADD COLUMN easy, add the column to both the current and historical\ntable, all null in historical\n- DROP COLUMN delete the column from the current table. Historical is\ndifficult, because what happens if a new column with the same name is\nadded? Maybe `DROP COLUMN col1` would rename col1 to _col1_1642929683\n(epoch time) in the historical table or something like that.\n- RENAME COLUMN is a bit tricky too - from a usability standpoint, the\nhistorical table should be renamed as well. A quick thought is maybe\n`RENAME col1 TO new_name` would perform the rename in the historical\ntable, but also create _col1_1642929683 as an alias to new_name to\ntrack that there was a change. I don't think there would be any name\nviolations in the history table because there would never be a column\nname in history that isn't in current (because of the rename described\nwith DROP).\n- Changing column data type: ouch. This needs to be mainly planned for\ncases where data types are incompatible, possibly optimized for times\nwhen they are compatible. Seems like another _col1_1642929683 rename\nwould be in order, and a new col1 created with the new datatype, and a\nhistorical SELECT would automatically merge the two. Possible\noptimization: if the old type fits into the new type, just change the\ndata type in history and make _col1_1642929683 an alias to it.\n- Change defaults, nullability, constraints, etc: I think these can\nsafely be done for the current table only. 
Realistically, historical\ntables could probably skip all checks, always (except their tuple PK),\nsince trying to enforce them would just be opening the door to bugs.\nTrying to think of any times this isn't true.\n- FKs: I'm generally in the same boat as above, thinking that these\ndon't need to affect historical tables. Section 2.5 in the paper I\nlink below discusses period joins, but I don't think any special\nbehavior is needed for now. Perhaps references could be kept in\nhistory but not enforced\n- Changing PK / adding/removing more columns to PK: Annoying and not\neasily dealt with. Maybe just disallow\n- Triggers: no effect on historical\n- DROP TABLE bye bye, history & all\n\nThings like row level security add extra complication but can probably\nbe disregarded. Maybe just have a `select history` permission or\nsimilar.\n\nAn interesting idea could be to automatically add system versioning to\ninformation_schema whenever it is added to a table. This would provide\na way to easily query historical DDL. It would also help solve how to\nkeep historical FKs. This would make it possible to perfectly recreate\nsystem versioned parts of your database at any period of time, schema\nand data both.\n\n#### Partitioning\nAllowing for partitioning and automatic rotation seems like a good\nidea, should be possible with current syntax but maybe worth adding\nsome shortcuts like maria has.\n\n#### Permissions\n- MDB has the new 'delete history' schema privilege that defines who\ncan delete historical data before a certain time or drop system\nversioning, seems like a good idea to implement. 
They also require\n`@@system_versioning_alter_history=keep;` to be set before doing\nanything ALTER TABLE; doesn't do much outside of serving as a reminder\nthat changing system versioned tables can be dangerous.¯\\_(ツ)_/¯\n- This part sucks and goes against everything ISO is going for, but\nIMO there needs to be a way to insert/update/delete historical data.\nMaybe there needs to be a new superduperuser role to do it and you\nneed to type the table name backwards to verify you want to insert,\nbut situations like data migration, fixing incorrectly stored data, or\nremoving accidental sensitive information demand it. This isn't a\npriority though, and basic system versioning can be shipped without\nit.\n\n#### Misc\n- Seems like a good idea to include MDB's option to exclude columns\nfrom versioning (`WITHOUT SYSTEM VERSIONING` as a column argument).\nThis is relatively nuanced and I'm not sure if it's officially part of\nISO, but probably helpful for frequently updating small data in rows\nwith BLOBs. Easy enough to implement, just forget the column in the\nhistorical table.\n- I thought I saw somewhere that somebody was discussing adding both\nrow_start and row_end to the PK. Why would this be? Row_end should be\nall that's needed to keep unique, but maybe I misread.\n\n#### Links\n- I haven't seen it linked here yet but this paper does a phenomenal\ndeep dive into SV and AV\nhttps://sigmodrecord.org/publications/sigmodRecord/1209/pdfs/07.industry.kulkarni.pdf\n- It's not perfect, but MDB's system versioning is pretty well thought\nout. You get a good idea of their thought process going through this\npage, worth a read\nhttps://mariadb.com/kb/en/system-versioned-tables/#excluding-columns-from-versioning\n\n#### Finally, the end\nThere's a heck of a lot of thought that could go into this thing,\nprobably worth making sure there's a formal agreement on what to be\ndone before coding starts (PGEP for postgres enhancement proposal,\nlike PEP? 
Not sure if something like that exists but it probably\nshould.). Large parts of the existing patch could likely be reused for\nwhatever is decided.\n\nBest,\nTrevor\n\n\nOn Sun, Jan 23, 2022 at 2:47 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 15 Nov 2021, at 11:50, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > I have no plans on taking this patch further, but will give some help\n> > to anyone that wishes to do that.\n> >\n> > I suggest we Return with Feedback.\n>\n> Fair enough, done that way.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>\n>\n>\n>\n\n\n", "msg_date": "Sun, 23 Jan 2022 05:56:11 -0500", "msg_from": "Trevor Gross <t.gross35@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": ">\n>\n> > 2. Putting data in a side table. This makes DROP SYSTEM VERSIONING\n> > fairly trivial, but it complicates many DDL commands (please make a\n> > list?) and requires the optimizer to know about this and cater to it,\n> > possibly complicating plans. Neither issue is insurmountable, but it\n> > becomes more intrusive.\n>\n> I'd vouch for this being the way to go; you completely sidestep issues\n> like partitioning, unique constraints, optimization, etc. Especially\n> true when 90% of the time, SELECTs will only be looking at\n> currently-active data. MDB seems to have gone with the single-table\n> approach (unless you partition) and I've run into a bug where I can't\n> add a unique constraint because historical data fails.\n>\n> #### System versioning & Application versioning\n> I saw that there is an intent to harmonize system versioning with\n> application versioning. Haven't read the AV thread so not positive if\n> that meant intending to split tables by application versioning and\n> system versioning both: to me it seems like maybe it would be good to\n> use a separate table for SV, but keep AV in the same table. 
Reasons\n> include:\n>\n\nThe proposed AV uses just one table.\n\n\n> - ISO states only one AV config per table, but there's no reason this\n> always has to be the case; maybe you're storing products that are\n> active for a period of time, EOL for a period of time, and obsolete\n> for a period of time. If ISO sometime decides >1 AV config is OK,\n> there would be a mess trying to split that into tables.\n>\n\nThe proposed AV (so far) allows for that.\n\n\n> - DB users who are allowed to change AV items likely won't be allowed\n> to rewrite history by changing SV items. My proposed schema would keep\n> these separate.\n> - Table schemas change, and all (SV active) AV items would logically\n> need to fit the active schema or be updated to do so. Different story\n> for SV, nothing there should ever need to be changed.\n>\n\nYeah, there's a mess (which you state below) about what happens if you\ncreate a table and then rename a column, or drop a column and add a\nsame-named column back of another type at a later date, etc. In theory,\nthis means that the valid set of columns and their types changes according\nto the time range specified. I may not be remembering correctly, but Vik\nstated that the SQL spec seemed to imply that you had to track all those\nthings.\n\n\n> - Partitioning for AV tables isn't as clear as with SV and is likely\n> better to be user-defined\n>\n\nSo this was something I was asking various parties about at PgConf NYC just\na few weeks ago. I am supposing that the main reason for SV is a regulatory\nconcern, what tolerance do regulators have for the ability to manipulate\nthe SV side-table? Is it possible to directly insert rows into one? 
If not,\nthen moving rows into a new partition becomes impossible, and you'd be\nstuck with the partitioning strategy (if any) that you defined at SV\ncreation time.\n\nThe feedback I got was \"well, you're already a superuser, if a regulator\nhad a problem with that then they would have required that the SV table's\nstorage be outside the server, either a foreign table, a csv foreign data\nwrapper of some sort, or a trigger writing to a non-db storage (which\nwouldn't even need SV).\"\n\nFrom that, I concluded that every single AV partition would have its own\nSV table, which could in turn be partitioned. In a sense, it might be\nhelpful to think of the SV tables as partitions of the main table, and the\nperiod definition would effectively be the constraint that prunes the SV\npartition.\n\n\n>\n> Sorry for acronyms, SV=system versioning, AV=application versioning\n>\n> In general, I think AV should be treated literally as extra rows in\n> the main DB, plus the extra PK element and shortcut functions. SV\n> though, needs to have a lot more nuance.\n>\n> #### ALTER TABLE\n> On to ideas about how ALTER TABLE could work. I don't think the\n> question was ever answered \"Do schema changes need to be tracked?\" I'm\n> generally in favor of saying that it should be possible to recreate\n> the table exactly as it was, schema and all, at a specific period of\n> time (perhaps for a view) using a fancy combination of SELECT ... AS\n> and such - but it doesn't need to be straightforward. In any case, no\n> data should ever be deleted by ALTER TABLE. As someone pointed out\n> earlier, speed and storage space of ALTER TABLE are likely low\n> considerations for system versioned tables.\n>\n> - ADD COLUMN easy, add the column to both the current and historical\n> table, all null in historical\n> - DROP COLUMN delete the column from the current table. Historical is\n> difficult, because what happens if a new column with the same name is\n> added? 
Maybe `DROP COLUMN col1` would rename col1 to _col1_1642929683\n> (epoch time) in the historical table or something like that.\n> - RENAME COLUMN is a bit tricky too - from a usability standpoint, the\n> historical table should be renamed as well. A quick thought is maybe\n> `RENAME col1 TO new_name` would perform the rename in the historical\n> table, but also create _col1_1642929683 as an alias to new_name to\n> track that there was a change. I don't think there would be any name\n> violations in the history table because there would never be a column\n> name in history that isn't in current (because of the rename described\n> with DROP).\n> - Changing column data type: ouch. This needs to be mainly planned for\n> cases where data types are incompatible, possibly optimized for times\n> when they are compatible. Seems like another _col1_1642929683 rename\n> would be in order, and a new col1 created with the new datatype, and a\n> historical SELECT would automatically merge the two. Possible\n> optimization: if the old type fits into the new type, just change the\n> data type in history and make _col1_1642929683 an alias to it.\n> - Change defaults, nullability, constraints, etc: I think these can\n> safely be done for the current table only. Realistically, historical\n> tables could probably skip all checks, always (except their tuple PK),\n> since trying to enforce them would just be opening the door to bugs.\n> Trying to think of any times this isn't true.\n> - FKs: I'm generally in the same boat as above, thinking that these\n> don't need to affect historical tables. Section 2.5 in the paper I\n> link below discusses period joins, but I don't think any special\n> behavior is needed for now. Perhaps references could be kept in\n> history but not enforced\n> - Changing PK / adding/removing more columns to PK: Annoying and not\n> easily dealt with. 
Maybe just disallow\n> - Triggers: no affect on historical\n> - DROP TABLE bye bye, history & all\n>\n\nYou seem to have covered all the bases, and the only way I can think to\nsensibly track all of those things is to allow for multiple SV tables, and\nevery time the main table is altered, you simply start fresh with a new,\nempty SV table. You'd probably also slap a constraint on the previous SV\ntable to reflect the fact that no rows newer than X will ever be entered\nthere, which would further aid constraint exclusion.\n\n\n> Things like row level security add extra complication but can probably\n> be disregarded. Maybe just have a `select history` permission or\n> similar.\n>\n\n+1\n\n\n> An interesting idea could be to automatically add system versioning to\n> information_schema whenever it is added to a table. This would provide\n> a way to easily query historical DDL. It would also help solve how to\n> keep historical FKs. This would make it possible to perfectly recreate\n> system versioned parts of your database at any period of time, schema\n> and data both.\n>\n\nInteresting...\n\n\n> #### Misc\n> - Seems like a good idea to include MDB's option to exclude columns\n> from versioning (`WITHOUT SYSTEM VERSIONING` as a column argument).\n> This is relatively nuanced and I'm not sure if it's officially part of\n> ISO, but probably helpful for frequently updating small data in rows\n> with BLOBs. Easy enough to implement, just forget the column in the\n> historical table.\n>\n\nFirst I've heard of it. Others will know more.\n\n\n> - I thought I saw somewhere that somebody was discussing adding both\n> row_start and row_end to the PK. Why would this be? Row_end should be\n> all that's needed to keep unique, but maybe I misread.\n>\n\nI don't think they need to be part of the PK at all. 
The main table has the\nPK that it knows of, and the SV tables are indexed independently.\n\nIn fact, I don't think row_end needs to be an actual stored value in the\nmain table, because it will never be anything other than null. How we\nrepresent such an attribute is another question, but the answer to that\npossibly ties into how we implement the virtual side of GENERATED ALWAYS\nAS...\n\n\n>\n> #### Links\n> - I haven't seen it linked here yet but this paper does a phenomenal\n> deep dive into SV and AV\n>\n> https://sigmodrecord.org/publications/sigmodRecord/1209/pdfs/07.industry.kulkarni.pdf\n\n\nMy link is different but this seems to be the same PDF that has been cited\nearlier for both SV and AV.\n\n\n> - It's not perfect, but MDB's system versioning is pretty well thought\n> out. You get a good idea of their thought process going through this\n> page, worth a read\n>\n> https://mariadb.com/kb/en/system-versioned-tables/#excluding-columns-from-versioning\n>\n> #### Finally, the end\n> There's a heck of a lot of thought that could go into this thing,\n> probably worth making sure there's a formal agreement on what to be\n> done before coding starts (PGEP for postgres enhancement proposal,\n> like PEP? Not sure if something like that exists but it probably\n> should.). Large parts of the existing patch could likely be reused for\n> whatever is decided.\n>\n\nThanks for the input, it helps us get some momentum on this.", "msg_date": "Sun, 23 Jan 2022 18:16:53 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "On 1/24/22 00:16, Corey Huinker wrote:\n>> - Table schemas change, and all (SV active) AV items would logically\n>> need to fit the active schema or be updated to do so. 
Different story\n>> for SV, nothing there should ever need to be changed.\n>>\n> Yeah, there's a mess (which you state below) about what happens if you\n> create a table and then rename a column, or drop a column and add a\n> same-named column back of another type at a later date, etc. In theory,\n> this means that the valid set of columns and their types changes according\n> to the time range specified. I may not be remembering correctly, but Vik\n> stated that the SQL spec seemed to imply that you had to track all those\n> things.\n\nThe spec does not allow schema changes at all on a system versioned \ntable, except to change the system versioning itself.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 9 Feb 2022 00:27:36 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": "Would these best practices be applicable by PostgreSQL to help avoid\nbreaking changes for temporal tables?\n\nhttps://blog.datomic.com/2017/01/the-ten-rules-of-schema-growth.html\n\nThanks\n\nOn Tue, Feb 15, 2022 at 5:08 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 1/24/22 00:16, Corey Huinker wrote:\n> >> - Table schemas change, and all (SV active) AV items would logically\n> >> need to fit the active schema or be updated to do so. Different story\n> >> for SV, nothing there should ever need to be changed.\n> >>\n> > Yeah, there's a mess (which you state below) about what happens if you\n> > create a table and then rename a column, or drop a column and add a\n> > same-named column back of another type at a later date, etc. In theory,\n> > this means that the valid set of columns and their types changes\n> according\n> > to the time range specified. 
I may not be remembering correctly, but Vik\n> > stated that the SQL spec seemed to imply that you had to track all those\n> > things.\n>\n> The spec does not allow schema changes at all on a a system versioned\n> table, except to change the system versioning itself.\n> --\n> Vik Fearing\n>\n>\n>\n>\n>\n", "msg_date": "Tue, 15 Feb 2022 17:10:43 -0300", "msg_from": "Jean Baro <jfbaro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" }, { "msg_contents": ">\n>\n> The spec does not allow schema changes at all on a a system versioned\n> table, except to change the system versioning itself.\n>\n>\nThat would greatly simplify things!", "msg_date": "Sun, 20 Feb 2022 19:53:47 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: System Versioned Temporal Table" } ]
[ { "msg_contents": "Hi,\n\nTL;DR: Some performance figures at the end. Lots of details before.\n\n\nFor a while I've been on and off (unfortunately more the latter)\nhacking on improving expression evaluation further.\n\nThis is motivated by mainly two factors:\na) Expression evaluation is still often a very significant fraction of\n   query execution time. Both with and without jit enabled.\nb) Currently caching for JITed queries is not possible, as the generated\n   queries contain pointers that change from query to query\n\nbut there are others too (e.g. using less memory, reducing\ninitialization time).\n\n\nThe main reason why the JITed code is not faster, and why it cannot\nreally be cached, is that ExprEvalStep's point to memory that's\n\"outside\" of LLVMs view, e.g. via ExprEvalStep->resvalue and the various\nFunctionCallInfos. That's currently done by just embedding the raw\npointer value in the generated program (which effectively prevents\ncaching). LLVM will not really optimize through these memory references,\nhaving difficulty determining aliasing and lifetimes. The fix for that\nis to move to on-stack allocations for actually temporary stuff, which\nllvm can convert into SSA form and optimize properly.\n\n\nIn the attached *prototype* patch series there's a lot of incremental\nimprovements (and some cleanups) (in time, not importance order):\n\n1) A GUC that enables iterating in reverse order over items on a page\n   during sequential scans. 
This is mainly to make profiles easier to\n read, as the cache misses are otherwise swamping out other effects.\n\n2) A number of optimizations of specific expression evaluation steps:\n - reducing the number of aggregate transition steps by \"merging\"\n EEOP_AGG_INIT_TRANS, EEOP_AGG_STRICT_TRANS_CHECK with EEOP_AGG_PLAIN_TRANS{_BYVAL,}\n into special case versions for each combination.\n - introducing special-case expression steps for common combinations\n of steps (EEOP_FUNCEXPR_STRICT_1, EEOP_FUNCEXPR_STRICT_2,\n EEOP_AGG_STRICT_INPUT_CHECK_ARGS_1, EEOP_DONE_NO_RETURN).\n\n3) Use NullableDatum for slots and expression evaluation.\n\n This is a small performance win for expression evaluation, and\n reduces the number of pointers for each step. The latter is important\n for later steps.\n\n4) out-of-line int/float error functions\n\n Right now we have numerous copies of float/int/... error handling\n elog()s. That's unnecessary. Instead add functions that issue the\n error, not allowing them to be inlined. This is a small win without\n jit, and a bigger win with.\n\n5) During expression initialization, compute allocations to be in a\n \"relative\" manner. Each allocation is tracked separately, and\n primarily consists of an 'offset' that initially starts out at\n zero, and is increased by the size of each allocation.\n\n For interpreted evaluation, all the memory for these different\n allocations is allocated as part of the allocation of the ExprState\n itself, following the steps[] array (which now is also\n inline). During interpretation it is accessed by basically adding the\n offset to a base pointer.\n\n For JIT compiled interpretation the memory is allocated using LLVM's\n alloca instruction, which llvm can optimize into SSA form (using the\n Mem2Reg or SROA passes). 
In combination with operator inlining that\n enables LLVM to optimize PG function calls away entirely, even\n performing common subexpression elimination in some cases.\n\n\nThere's also a few changes that are mainly done as prerequisites:\nA) expression eval: Decouple PARAM_CALLBACK interface more strongly from execExpr.c\n otherwise too many implementation details are exposed\n\nB) expression eval: Improve ArrayCoerce evaluation implementation.\n\n the recursive evaluation with memory from both the outer and inner\n expression step being referenced at the same time makes improvements\n harder. And it's not particularly fast either.\n\nC) executor: Move per-call information for aggregates out of AggState.\n\n Right now AggState has elements that we set for each transition\n function invocation. That's not particularly fast, requires more\n bookkeeping, and is harder for compilers to optimize. Instead\n introduce a new AggStatePerCallContext that's passed for each\n transition invocation via FunctionCallInfo->context.\n\nD) Add \"builder\" state objects for ExecInitExpr() and\n llvm_compile_expr(). That makes it easier to pass more state around,\n and have different representations for the expression currently being\n built, and once ready. Also makes it more realistic to break up\n llvm_compile_expr() into smaller functions.\n\nE) Improving the naming of JITed basic blocks, including the textual\n ExprEvalOp value. Makes it a lot easier to understand the generated\n code. Should be used to add a function for some minimal printing of\n ExprStates.\n\nF) Add minimal (and very hacky) DWARF output for the JITed\n programs. That's useful for debugging, but more importantly makes it\n a lot easier to interpret perf profiles.\n\n\nThe patchset leaves a lot of further optimization potential for better\ncode generation on the floor, but this seems a good enough intermediate\npoint. 
The generated code is not *quite* cacheable yet,\nFunctionCallInfo->{flinfo, context} still point to a pointer constant. I\nthink this can be solved in the same way as the rest, I just didn't get\nto it yet.\n\nAttached is a graph of tpch query times. branch=master/dev is master\n(with just the seqscan patch applied), jit=0/1 is jit enabled or not,\nseq=0/1 is whether faster seqscan ordering is enabled or not.\n\nThis is just tpch, with scale factor 5, on my laptop. I.e. not to be\ntaken too seriously. I've started a scale 10 run, but I'm not going to wait\nfor the results.\n\nObviously the results are nice for some queries, and meh for others.\n\nFor Q01 we get:\n\ttime\ttime\ttime\ttime\ttime\ttime\ttime\ttime\nbranch\tmaster\tdev\tmaster\tdev\tmaster\tdev\tmaster\tdev\njit\t0\t0\t0\t0\t1\t1\t1\t1\nseq\t0\t0\t1\t1\t0\t0\t1\t1\nquery\nq01\t11965.224\t10434.316\t10309.404\t8205.922\t7918.81\t6661.359\t5653.64\t4573.794\n\n\nWhich imo is pretty nice. And that's with quite some pessimizations in\nthe code, without those (which can be removed with just a bit of elbow\ngrease), the benefit is noticeably bigger.\n\nFWIW, for q01 the profile after these changes is:\n- 94.29% 2.16% postgres postgres [.] 
ExecAgg\n - 98.97% ExecAgg\n - 35.61% lookup_hash_entries\n - 95.08% LookupTupleHashEntry\n - 60.44% TupleHashTableHash.isra.0\n - 99.91% FunctionCall1Coll\n + hashchar\n + 23.34% evalexpr_0_4\n + 11.67% ExecStoreMinimalTuple\n + 4.49% MemoryContextReset\n 3.64% tts_minimal_clear\n 1.22% ExecStoreVirtualTuple\n + 34.17% evalexpr_0_7\n - 29.38% fetch_input_tuple\n - 99.98% ExecSeqScanQual\n - 58.15% heap_getnextslot\n - 72.70% heapgettup_pagemode\n - 99.25% heapgetpage\n + 79.08% ReadBufferExtended\n + 7.08% LockBuffer\n + 6.78% CheckForSerializableConflictOut\n + 3.26% UnpinBuffer.constprop.0\n + 1.94% heap_page_prune_opt\n 1.80% ReleaseBuffer\n + 0.66% ss_report_location\n + 27.22% ExecStoreBufferHeapTuple\n + 33.00% evalexpr_0_0\n + 5.16% ExecRunCompiledExpr\n + 3.65% MemoryContextReset\n + 0.84% MemoryContextReset\n\nI.e. we spend a significant fraction of the time doing hash computations\n(TupleHashTableHash, which is implemented very inefficiently), hash\nequality checks (evalexpr_0_4, which is inefficiently done because we do\nnot carry NOT NULL upwards), the aggregate transition (evalexpr_0_7, now most\nbottlenecked by float8_combine()), and fetching/filtering the tuple\n(with buffer lookups taking the majority of the time, followed by qual\nevaluation (evalexpr_0_0)).\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 23 Oct 2019 09:38:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "WIP: expression evaluation improvements" }, { "msg_contents": "Hey Andres,\n\nAfter looking at\nv2-0006-jit-Reference-functions-by-name-in-IOCOERCE-steps.patch, I was\nwondering\nabout other places in the code where we have const pointers to functions\noutside\nLLVM's purview: especially EEOP_FUNCEXPR* for any function call expressions,\nEEOP_DISTINCT and EEOP_NULLIF which involve operator specific comparison\nfunction call invocations, deserialization and trans functions for\naggregates\netc. 
All of the above cases involve to some degree some server functions\nthat\ncan be inlined/optimized.\n\nIf we do go down this road, the most immediate solution that comes to mind\nwould\nbe to populate referenced_functions[] with these. Also, we can replace all\nl_ptr_const() calls taking function addresses with calls to\nllvm_function_reference() (this is safe as it falls back to a l_pt_const()\ncall). We could do the l_ptr_const() -> llvm_function_reference() even if we\ndon't go down this road.\n\nOne con with the approach above would be bloating of llvmjit_types.bc but we\nwould be introducing @declares instead of @defines in the IR...so I think\nthat\nis fine.\n\nLet me know your thoughts. I would like to submit a patch here in this\nthread or\nelsewhere.\n\n--\nSoumyadeep", "msg_date": "Thu, 24 Oct 2019 14:59:21 -0700", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2019-10-24 14:59:21 -0700, Soumyadeep Chakraborty wrote:\n> After looking at\n> v2-0006-jit-Reference-functions-by-name-in-IOCOERCE-steps.patch, I was\n> wondering\n> about other places in the code where we have const pointers to functions\n> outside\n> LLVM's purview: specially EEOP_FUNCEXPR* for any function call expressions,\n> EEOP_DISTINCT and EEOP_NULLIF which involve operator specific comparison\n> function call invocations, deserialization and trans functions for\n> aggregates\n> etc. All of the above cases involve to some degree some server functions\n> that\n> can be inlined/optimized.\n\nI don't think there's other cases like this, except when we don't have a\nsymbol name. In the normal course that's \"just\" EEOP_PARAM_CALLBACK\nIIRC.\n\nFor EEOP_PARAM_CALLBACK one solution would be to not use a callback\nspecified by pointer, but instead use an SQL level function taking an\nINTERNAL parameter (to avoid it being called via SQL).\n\n\nThere's also a related edge-case where we are unable to figure out a symbol\nname in llvm_function_reference(), and then resort to creating a global\nvariable pointing to the function. This is a somewhat rare case (IIRC\nit's mostly if not solely around language PL handlers), so I don't think\nit matters *too* much.\n\nWe probably should change that to not initialize the global, and instead\nresolve the symbol during link time. As long as we generate a symbol\nname that llvm_resolve_symbol() can somehow resolve, we'd be good. I\nwas a bit wary of doing syscache lookups from within\nllvm_resolve_symbol(), otherwise we could just look up the function\naddress from within there. 
So if we went this route I'd probably go for\na hashtable of additional symbol resolutions, which\nllvm_resolve_symbol() would consult.\n\nIf indeed the only case this is being hit is language PL handlers, it\nmight be better to instead work out the symbol name for that handler -\nwe should be able to get that via pg_language.lanplcallfoid.\n\n\n> If we do go down this road, the most immediate solution that comes to mind\n> would\n> be to populate referenced_functions[] with these. Also, we can replace all\n> l_ptr_const() calls taking function addresses with calls to\n> llvm_function_reference() (this is safe as it falls back to a l_pt_const()\n> call). We could do the l_ptr_const() -> llvm_function_reference() even if we\n> don't go down this road.\n\nWhich cases are you talking about here? Because I don't think there's\nany others where we would know a symbol name to add to referenced_functions\nin the first place?\n\nI'm also not quite clear what adding to referenced_functions would buy\nus wrt constants. The benefit of adding a function there is that we get\nthe correct signature of the function, which makes it much harder to\naccidentally screw up and call with the wrong signature. 
I don't think\nthere's any benefits around symbol names?\n\nI do want to benefit from getting accurate signatures for patch\n[PATCH v2 26/32] WIP: expression eval: relative pointer suppport\nI had a number of cases where I passed the wrong parameters, and llvm\ncouldn't tell me...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 15:43:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "On 10/23/19 6:38 PM, Andres Freund wrote:\n> In the attached *prototype* patch series there's a lot of incremental\n> improvements (and some cleanups) (in time, not importance order):\n\nYou may already know this but your patch set seems to require clang 9.\n\nI get the below compilation error which is probably caused by \nhttps://github.com/llvm/llvm-project/commit/90868bb0584f first being \ncommitted for clang 9 (I run \"clang version 7.0.1-8 \n
(tags/RELEASE_701/final)\").\n> \n> In file included from gistutil.c:24:\n> ../../../../src/include/utils/float.h:103:7: error: invalid output\n> constraint '=@ccae' in asm\n> : \"=@ccae\"(ret), [clobber_reg]\"=&x\"(clobber_reg)\n> ^\n> 1 error generated.\n\nI'll probably just drop this patch for now, it's not directly related. I\nkind of wanted it on the list, so I have a place I can find it if I\nforget :).\n\nI think what really needs to happen instead is to improve the code\ngenerated for __builtin_isinf[_sign]() by gcc/clang. They should produce\nthe constants like I did, instead of loading from the constant pool\nevery single time. That adds a fair bit of latency...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Oct 2019 15:53:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi Andres,\n\nApologies, I realize my understanding of symbol resolution and the\nreferenced_functions mechanism wasn't correct. Thank you for your very\nhelpful\nexplanations.\n\n> There's also a related edge-case where are unable to figure out a symbol\n> name in llvm_function_reference(), and then resort to creating a global\n> variable pointing to the function.\n\nIndeed.\n\n> If indeed the only case this is being hit is language PL handlers, it\n> might be better to instead work out the symbol name for that handler -\n> we should be able to get that via pg_language.lanplcallfoid.\n\nI took a stab at this (on top of your patch set):\nv1-0001-Resolve-PL-handler-names-for-JITed-code-instead-o.patch\n\n> Which cases are you talking about here? 
Because I don't think there's\n> any others where would know a symbol name to add to referenced_functions\n> in the first place?\n\nI had misunderstood the intent of referenced_functions.\n\n> I do want to benefit from getting accurate signatures for patch\n> [PATCH v2 26/32] WIP: expression eval: relative pointer suppport\n> I had a number of cases where I passed the wrong parameters, and llvm\n> couldn't tell me...\n\nI took a stab:\nv1-0001-Rely-on-llvmjit_types-for-building-EvalFunc-calls.patch\n\n\nOn a separate note, I had submitted a patch earlier to optimize functions\nearlier\nin accordance with the code comment:\n/*\n * Do function level optimization. This could be moved to the point where\n * functions are emitted, to reduce memory usage a bit.\n */\n LLVMInitializeFunctionPassManager(llvm_fpm);\nRefer:\nhttps://www.postgresql.org/message-id/flat/CAE-ML+_OE4-sHvn0AA_qakc5qkZvQvainxwb1ztuuT67SPMegw@mail.gmail.com\nI have rebased that patch on top of your patch set. Here it is:\nv2-0001-Optimize-generated-functions-earlier-to-lower-mem.patch\n\n--\nSoumyadeep", "msg_date": "Sun, 27 Oct 2019 23:46:22 -0700", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2019-10-27 23:46:22 -0700, Soumyadeep Chakraborty wrote:\n> Apologies, I realize my understanding of symbol resolution and the\n> referenced_functions mechanism wasn't correct. Thank you for your very\n> helpful\n> explanations.\n\nNo worries! 
I was just wondering whether I was misunderstanding you.\n\n\n> > If indeed the only case this is being hit is language PL handlers, it\n> > might be better to instead work out the symbol name for that handler -\n> > we should be able to get that via pg_language.lanplcallfoid.\n> \n> I took a stab at this (on top of your patch set):\n> v1-0001-Resolve-PL-handler-names-for-JITed-code-instead-o.patch\n\nI think I'd probably try to apply this to master independent of the\nlarger patchset, to avoid a large dependency.\n\n\n> From 07c7ff996706c6f71e00d76894845c1f87956472 Mon Sep 17 00:00:00 2001\n> From: soumyadeep2007 <sochakraborty@pivotal.io>\n> Date: Sun, 27 Oct 2019 17:42:53 -0700\n> Subject: [PATCH v1] Resolve PL handler names for JITed code instead of using\n> const pointers\n> \n> Using const pointers to PL handler functions prevents optimization\n> opportunities in JITed code. Now fmgr_symbol() resolves PL function\n> references to the corresponding language's handler.\n> llvm_function_reference() now no longer needs to create the global to\n> such a function.\n\nDid you check whether there's any cases this fails in the tree with your\npatch applied? 
The way I usually do that is by running the regression\ntests like\nPGOPTIONS='-cjit_above_cost=0' make -s -Otarget check-world\n\n(which will take a bit longer if use an optimized LLVM build, and a\n*lot* longer if you use a debug llvm build)\n\n\n> Discussion: https://postgr.es/m/20191024224303.jvdx3hq3ak2vbit3%40alap3.anarazel.de:wq\n> ---\n> src/backend/jit/llvm/llvmjit.c | 29 +++--------------------------\n> src/backend/utils/fmgr/fmgr.c | 30 +++++++++++++++++++++++-------\n> 2 files changed, 26 insertions(+), 33 deletions(-)\n> \n> diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c\n> index 82c4afb701..69a4167ac9 100644\n> --- a/src/backend/jit/llvm/llvmjit.c\n> +++ b/src/backend/jit/llvm/llvmjit.c\n> @@ -369,38 +369,15 @@ llvm_function_reference(LLVMJitContext *context,\n> \n> \tfmgr_symbol(fcinfo->flinfo->fn_oid, &modname, &basename);\n> \n> -\tif (modname != NULL && basename != NULL)\n> +\tif (modname != NULL)\n> \t{\n> \t\t/* external function in loadable library */\n> \t\tfuncname = psprintf(\"pgextern.%s.%s\", modname, basename);\n> \t}\n> -\telse if (basename != NULL)\n> -\t{\n> -\t\t/* internal function */\n> -\t\tfuncname = psprintf(\"%s\", basename);\n> -\t}\n> \telse\n> \t{\n> -\t\t/*\n> -\t\t * Function we don't know to handle, return pointer. 
We do so by\n> -\t\t * creating a global constant containing a pointer to the function.\n> -\t\t * Makes IR more readable.\n> -\t\t */\n> -\t\tLLVMValueRef v_fn_addr;\n> -\n> -\t\tfuncname = psprintf(\"pgoidextern.%u\",\n> -\t\t\t\t\t\t\tfcinfo->flinfo->fn_oid);\n> -\t\tv_fn = LLVMGetNamedGlobal(mod, funcname);\n> -\t\tif (v_fn != 0)\n> -\t\t\treturn LLVMBuildLoad(builder, v_fn, \"\");\n> -\n> -\t\tv_fn_addr = l_ptr_const(fcinfo->flinfo->fn_addr, TypePGFunction);\n> -\n> -\t\tv_fn = LLVMAddGlobal(mod, TypePGFunction, funcname);\n> -\t\tLLVMSetInitializer(v_fn, v_fn_addr);\n> -\t\tLLVMSetGlobalConstant(v_fn, true);\n> -\n> -\t\treturn LLVMBuildLoad(builder, v_fn, \"\");\n> +\t\t/* internal function or a PL handler */\n> +\t\tfuncname = psprintf(\"%s\", basename);\n> \t}\n\nHm. Aren't you breaking things here? If fmgr_symbol returns a basename\nof NULL, as is the case for all internal functions, you're going to\nprint a NULL pointer, no?\n\n\n> \t/* check if function already has been added */\n> diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c\n> index 099ebd779b..71398bb3c1 100644\n> --- a/src/backend/utils/fmgr/fmgr.c\n> +++ b/src/backend/utils/fmgr/fmgr.c\n> @@ -265,11 +265,9 @@ fmgr_info_cxt_security(Oid functionId, FmgrInfo *finfo, MemoryContext mcxt,\n> /*\n> * Return module and C function name providing implementation of functionId.\n> *\n> - * If *mod == NULL and *fn == NULL, no C symbol is known to implement\n> - * function.\n> - *\n> * If *mod == NULL and *fn != NULL, the function is implemented by a symbol in\n> - * the main binary.\n> + * the main binary. 
If the function being looked up is not a C language\n> + * function, it's language handler name is returned.\n> *\n> * If *mod != NULL and *fn !=NULL the function is implemented in an extension\n> * shared object.\n> @@ -285,6 +283,11 @@ fmgr_symbol(Oid functionId, char **mod, char **fn)\n> \tbool\t\tisnull;\n> \tDatum\t\tprosrcattr;\n> \tDatum\t\tprobinattr;\n> +\tOid\t\t\tlanguage;\n> +\tHeapTuple\tlanguageTuple;\n> +\tForm_pg_language languageStruct;\n> +\tHeapTuple\tplHandlerProcedureTuple;\n> +\tForm_pg_proc plHandlerProcedureStruct;\n> \n> \t/* Otherwise we need the pg_proc entry */\n> \tprocedureTuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(functionId));\n> @@ -304,8 +307,9 @@ fmgr_symbol(Oid functionId, char **mod, char **fn)\n> \t\treturn;\n> \t}\n> \n> +\tlanguage = procedureStruct->prolang;\n> \t/* see fmgr_info_cxt_security for the individual cases */\n> -\tswitch (procedureStruct->prolang)\n> +\tswitch (language)\n> \t{\n> \t\tcase INTERNALlanguageId:\n> \t\t\tprosrcattr = SysCacheGetAttr(PROCOID, procedureTuple,\n> @@ -342,9 +346,21 @@ fmgr_symbol(Oid functionId, char **mod, char **fn)\n> \t\t\tbreak;\n> \n> \t\tdefault:\n> +\t\t\tlanguageTuple = SearchSysCache1(LANGOID,\n> +\t\t\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(language));\n> +\t\t\tif (!HeapTupleIsValid(languageTuple))\n> +\t\t\t\telog(ERROR, \"cache lookup failed for language %u\", language);\n> +\t\t\tlanguageStruct = (Form_pg_language) GETSTRUCT(languageTuple);\n> +\t\t\tplHandlerProcedureTuple = SearchSysCache1(PROCOID,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t ObjectIdGetDatum(\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t languageStruct->lanplcallfoid));\n> +\t\t\tif (!HeapTupleIsValid(plHandlerProcedureTuple))\n> +\t\t\t\telog(ERROR, \"cache lookup failed for function %u\", functionId);\n> +\t\t\tplHandlerProcedureStruct = (Form_pg_proc) GETSTRUCT(plHandlerProcedureTuple);\n> \t\t\t*mod = NULL;\n> -\t\t\t*fn = NULL;\t\t\t/* unknown, pass pointer */\n> -\t\t\tbreak;\n> +\t\t\t*fn = 
pstrdup(NameStr(plHandlerProcedureStruct->proname));\n> +\t\t\tReleaseSysCache(languageTuple);\n> +\t\t\tReleaseSysCache(plHandlerProcedureTuple);\n> \t}\n\n\n> > I do want to benefit from getting accurate signatures for patch\n> > [PATCH v2 26/32] WIP: expression eval: relative pointer suppport\n> > I had a number of cases where I passed the wrong parameters, and llvm\n> > couldn't tell me...\n> \n> I took a stab:\n> v1-0001-Rely-on-llvmjit_types-for-building-EvalFunc-calls.patch\n\nCool! I'll probably merge that into my patch (with attribution of\ncourse).\n\nI wonder if it'd be nicer to not have separate C variables for all of\nthese, and instead look them up on-demand from the module loaded in\nllvm_create_types(). Not sure.\n\n\n> On a separate note, I had submitted a patch earlier to optimize functions\n> earlier\n> in accordance to the code comment:\n> /*\n> * Do function level optimization. This could be moved to the point where\n> * functions are emitted, to reduce memory usage a bit.\n> */\n> LLVMInitializeFunctionPassManager(llvm_fpm)
As the module pass\nmanager does the same optimizations it's not that clear in which cases\nit'd be beneficial to run it, especially if it means we can't\ndeduplicate before optimizations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 28 Oct 2019 15:32:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi Andres,\n\n> I think I'd probably try to apply this to master independent of the\n> larger patchset, to avoid a large dependency.\n\nAwesome! +1. Attached 2nd version of patch rebased on master.\n(v2-0001-Resolve-PL-handler-names-for-JITed-code-instead-o.patch)\n\n> Did you check whether there's any cases this fails in the tree with your\n> patch applied? The way I usually do that is by running the regression\n> tests like\n> PGOPTIONS='-cjit_above_cost=0' make -s -Otarget check-world\n>\n> (which will take a bit longer if use an optimized LLVM build, and a\n> *lot* longer if you use a debug llvm build)\n\nGreat suggestion! I used:\nPGOPTIONS='-c jit_above_cost=0' gmake installcheck-world\nIt all passed except a couple of logical decoding tests that never pass\non my machine for any tree (t/006_logical_decoding.pl and\nt/010_logical_decoding_timelines.pl) and point (which seems to be failing\neven\non master as of: d80be6f2f) I have attached the regression.diffs which\ncaptures\nthe point failure.\n\n> Hm. Aren't you breaking things here? If fmgr_symbol returns a basename\n> of NULL, as is the case for all internal functions, you're going to\n> print a NULL pointer, no?\n\nFor internal functions, it is supposed to return modname = NULL but basename\nwill be non-NULL right? As things stand, fmgr_symbol can never return a\nnull\nbasename. I have added an Assert to make that even more explicit.\n\n> Cool! 
I'll probably merge that into my patch (with attribution of\n> course).\n>\n> I wonder if it'd nicer to not have separate C variables for all of\n> these, and instead look them up on-demand from the module loaded in\n> llvm_create_types(). Not sure.\n\nGreat! It is much nicer indeed. Attached version 2 with your suggested\nchanges.\n(v2-0001-Rely-on-llvmjit_types-for-building-EvalFunc-calls.patch)\nUsed the same testing method as above.\n\n> Sorry for not replying to that earlier. I'm not quite sure it's\n> actually worthwhile doing so - did you try to measure any memory / cpu\n> savings?\n\nNo problem, thanks for the reply! Unfortunately, I did not do anything\nsignificant in terms of mem/cpu measurements. However, I have noticed\nnon-trivial\ndifferences between optimized and unoptimized .bc files that were dumped\nfrom\ntime to time.\n\n> The magnitude of wins aside, I also have a local patch that I'm going to\n> try to publish this or next week, that deduplicates functions more\n> aggressively, mostly to avoid redundant optimizations. It's quite\n> possible that we should run that before the function passes - or even\n> give up entirely on the function pass optimizations. As the module pass\n> manager does the same optimizations it's not that clear in which cases\n> it'd be beneficial to run it, especially if it means we can't\n> deduplicate before optimizations.\n\nAgreed, excited to see the patch!\n\n--\nSoumyadeep", "msg_date": "Mon, 28 Oct 2019 23:58:11 -0700", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2019-10-28 23:58:11 -0700, Soumyadeep Chakraborty wrote:\n> > Cool! I'll probably merge that into my patch (with attribution of\n> > course).\n> >\n> > I wonder if it'd nicer to not have separate C variables for all of\n> > these, and instead look them up on-demand from the module loaded in\n> > llvm_create_types(). 
Not sure.\n> \n> Great! It is much nicer indeed. Attached version 2 with your suggested\n> changes.\n> (v2-0001-Rely-on-llvmjit_types-for-building-EvalFunc-calls.patch)\n> Used the same testing method as above.\n\nI've comitted a (somewhat evolved) version of this patch. I think it\nreally improves the code!\n\nMy changes largely were to get rid of the LLVMGetNamedFunction() added\nto each opcode implementation, to also convert the ExecEval* functions\nwe were calling directly, to remove the other functions in llvmjit.h,\nand finally to rebase it onto master, from the patch series in this\nthread.\n\nI do wonder about adding a variadic wrapper like the one introduced here\nmore widely, seems like it could simplify a number of places. If we then\nredirected all function calls through a common wrapper, for LLVMBuildCall,\nthat also validated parameter count (and perhaps types), I think it'd be\neasier to develop...\n\nThanks!\n\nAndres\n\n\n", "msg_date": "Thu, 6 Feb 2020 22:28:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2019-10-28 23:58:11 -0700, Soumyadeep Chakraborty wrote:\n> > Sorry for not replying to that earlier. I'm not quite sure it's\n> > actually worthwhile doing so - did you try to measure any memory / cpu\n> > savings?\n> \n> No problem, thanks for the reply! Unfortunately, I did not do anything\n> significant in terms of mem/cpu measurements. However, I have noticed\n> non-trivial differences between optimized and unoptimized .bc files\n> that were dumped from time to time.\n\nCould you expand on what you mean here? Are you saying that you got\nsignificantly better optimization results by doing function optimization\nearly on? 
That'd be surprising imo?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Feb 2020 22:35:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi Andres,\n> I've comitted a (somewhat evolved) version of this patch. I think it\n> really improves the code!\nAwesome! Thanks for taking it forward!\n\n> I do wonder about adding a variadic wrapper like the one introduced here\n> more widely, seems like it could simplify a number of places. If we then\n> redirected all function calls through a common wrapper, for LLVMBuildCall,\n> that also validated parameter count (and perhaps types), I think it'd be\n> easier to develop...\n+1. I was wondering whether such validations should be Asserts instead of\nERRORs.\n\nRegards,\n\nSoumyadeep Chakraborty\nSenior Software Engineer\nPivotal Greenplum\nPalo Alto\n\n\nOn Thu, Feb 6, 2020 at 10:35 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-10-28 23:58:11 -0700, Soumyadeep Chakraborty wrote:\n> > > Sorry for not replying to that earlier. I'm not quite sure it's\n> > > actually worthwhile doing so - did you try to measure any memory / cpu\n> > > savings?\n> >\n> > No problem, thanks for the reply! Unfortunately, I did not do anything\n> > significant in terms of mem/cpu measurements. However, I have noticed\n> > non-trivial differences between optimized and unoptimized .bc files\n> > that were dumped from time to time.\n>\n> Could you expand on what you mean here? Are you saying that you got\n> significantly better optimization results by doing function optimization\n> early on? That'd be surprising imo?\n>\n> Greetings,\n>\n> Andres Freund\n>\n
", "msg_date": "Sun, 9 Feb 2020 17:28:02 -0800", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi Andres,\n\n> Could you expand on what you mean here? Are you saying that you got\n> significantly better optimization results by doing function optimization\n> early on? 
That'd be surprising imo?\n\nSorry for the ambiguity, I meant that I had observed differences in the\nsizes\nof the bitcode files dumped.\n\nThese are the size differences that I observed (for TPCH Q1):\nWithout my patch:\n-rw------- 1 pivotal staff 278K Feb 9 11:59 1021.0.bc\n-rw------- 1 pivotal staff 249K Feb 9 11:59 1374.0.bc\n-rw------- 1 pivotal staff 249K Feb 9 11:59 1375.0.bc\nWith my patch:\n-rw------- 1 pivotal staff 245K Feb 9 11:43 88514.0.bc\n-rw------- 1 pivotal staff 245K Feb 9 11:43 88515.0.bc\n-rw------- 1 pivotal staff 270K Feb 9 11:43 79323.0.bc\n\nThis means that the sizes of the module when execution encountered:\n\nif (jit_dump_bitcode)\n{\nchar *filename;\n\nfilename = psprintf(\"%u.%zu.bc\",\nMyProcPid,\ncontext->module_generation);\nLLVMWriteBitcodeToFile(context->module, filename);\npfree(filename);\n}\n\nwere smaller with my patch applied. This means there is less memory\npressure between when the functions were built and when\nllvm_compile_module() is called. I don't know if the difference is\npractically\nsignificant.\n\nSoumyadeep
", "msg_date": "Sun, 9 Feb 2020 17:29:21 -0800", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hey Andres,\n\n> Awesome! +1. Attached 2nd version of patch rebased on master.\n> (v2-0001-Resolve-PL-handler-names-for-JITed-code-instead-o.patch)\n>\n>\n>\n> > Did you check whether there's any cases this fails in the tree with your\n> > patch applied? The way I usually do that is by running the regression\n> > tests like\n> > PGOPTIONS='-cjit_above_cost=0' make -s -Otarget check-world\n> >\n> > (which will take a bit longer if use an optimized LLVM build, and a\n> > *lot* longer if you use a debug llvm build)\n>\n>\n>\n> Great suggestion! 
I used:\n> PGOPTIONS='-c jit_above_cost=0' gmake installcheck-world\n> It all passed except a couple of logical decoding tests that never pass\n> on my machine for any tree (t/006_logical_decoding.pl and\n> t/010_logical_decoding_timelines.pl) and point (which seems to be failing\n> even\n> on master as of: d80be6f2f) I have attached the regression.diffs which\n> captures\n> the point failure.\n\nI have attached the 3rd version of the patch rebased on master. I made one\nslight modification to the previous patch. PL handlers, such as that of\nplsh,\ncan be in an external library. So I account for that in modname (earlier\nnaively I set it to NULL). There are also some minor changes to the comments\nand I have rehashed the commit message.\n\nApart from running the regress tests as you suggested above, I installed\nplsh\nand forced JIT on the following:\n\nCREATE FUNCTION query_plsh (x int) RETURNS text\nLANGUAGE plsh\nAS $$\n#!/bin/sh\npsql -At -c \"select 1\"\n$$;\n\nSELECT query_plsh(5);\n\nand I also ran plsh's make installcheck with jit_above_cost = 0. Everything\nlooks good. I think this is ready for another round of review. Thanks!!\n\nSoumyadeep", "msg_date": "Wed, 19 Feb 2020 17:17:57 -0800", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hello Andres,\n\nAttached is a patch on top of\nv2-0026-WIP-expression-eval-relative-pointer-suppport.patch that eliminates\nthe\nconst pointer references to fmgrInfo in the generated code.\n\nFmgrInfos are now allocated like the FunctionCallInfos are\n(ExprBuilderAllocFunctionMgrInfo()) and are initialized with\nexpr_init_fmgri().\n\nUnfortunately, inside expr_init_fmgri(), I had to emit const pointers to set\nfn_addr, fn_extra, fn_mcxt and fn_expr.\n\nfn_addr, fn_mcxt should always be the same const pointer value in between\ntwo identical\ncalls. 
So this isn't too bad?\n\nfn_extra is NULL most of the time. So not too bad?\n\nfn_expr is very difficult to eliminate because it is allocated way earlier.\nIs\nit something that will have a const pointer value in between two identical\ncalls? (don't know enough about plan caching..I ran the same query twice\nand it\nseemed to have different pointer values). Eliminating this pointer poses\na similar challenge to that of FunctionCallInfo->context. fn_expr is\nallocated\nquite early on. I had tried writing ExprBuilderAllocNode() to handle the\ncontext\nfield. The trouble with writing something like expr_init_node() or something\neven more specific like expr_init_percall() (for the percall context for\naggs)\nas these structs have lots of pointer references to further pointers and so\non\n-> so eventually we would have to emit some const pointers.\nOne naive way to handle this problem may be to emit a call to the _copy*()\nfunctions inside expr_init_node(). It wouldn't be as performant though.\n\nWe could decide to live with the const pointers even if our cache key would\nbe\nthe generated code. The caching layer could be made smart enough to ignore\nsuch\npointer references OR we could feed the caching layer with generated code\nthat\nhas been passed through a custom pass that normalizes all const pointer\nvalues\nto some predetermined / sentinel value. 
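As a toy illustration of that normalization idea (all names here are invented for illustration; the real pass would operate on LLVM IR constants rather than this struct):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Toy "instruction" stream: some operands are ordinary constants, some
 * are embedded pointer constants whose values differ between otherwise
 * identical compilations. A cache key must ignore the latter.
 */
typedef struct ToyInsn
{
	int			opcode;
	uint64_t	operand;
	bool		operand_is_pointer; /* the emitted metadata */
} ToyInsn;

#define PTR_SENTINEL UINT64_C(0xDEADBEEF)

/* Rewrite pointer-constant operands to a sentinel before hashing. */
static void
normalize_for_cache_key(ToyInsn *insns, size_t n)
{
	for (size_t i = 0; i < n; i++)
	{
		if (insns[i].operand_is_pointer)
			insns[i].operand = PTR_SENTINEL;
	}
}
```

With that, two compilations of the same expression that differ only in embedded pointer values normalize to identical streams and could share a cache entry.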
To help the custom pass we could\nemit\nsome metadata when we generate a const pointer (that we know won't have the\nsame\nconst pointer value) to tell the pass to ignore it.\n\nSoumyadeep", "msg_date": "Tue, 3 Mar 2020 12:21:44 -0800", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "> On 3 Mar 2020, at 21:21, Soumyadeep Chakraborty <sochakraborty@pivotal.io> wrote:\n\n> Attached is a patch on top of\n> v2-0026-WIP-expression-eval-relative-pointer-suppport.patch that eliminates the\n> const pointer references to fmgrInfo in the generated code.\n\nSince the CFBot patch tester isn't able to apply and test a patchset divided across\nmultiple emails, can you please submit the full patchset for consideration such\nthat we can get it to run in the CI?\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 14:50:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "On Wed, Jul 01, 2020 at 02:50:14PM +0200, Daniel Gustafsson wrote:\n> Since the CFBot patch tester isn't able to apply and test a patchset divided across\n> multiple emails, can you please submit the full patchset for consideration such\n> that we can get it to run in the CI?\n\nThis thread seems to have died a couple of weeks ago, so I have marked\nit as RwF.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 15:54:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Andres asked me off-list for comments on 0026, so here goes.\n\nAs a general comment, I think the patches could really benefit from\nmore meaningful commit messages and more comments on individual\nfunctions. It would definitely help me review, and it might help other\npeople review, or modify the code later. 
For example, I'm looking at\nExprEvalStep. If the intent here is that we don't want the union\nmembers to point to data that might differ from one execution of the\nplan to the next, it's surely important to mention that and explain to\npeople who are trying to add steps later what they should do instead.\nBut I'm also not entirely sure that's the intended rule. It kind of\nsurprises me that the only things that we'd be pointing to here that\nwould fall into that category would be a bool, a NullableDatum, a\nNullableDatum array, and a FunctionCallInfo ... but I've been\nsurprised by a lot of things that turned out to be true.\n\nI am not a huge fan of the various Rel[Whatever] typedefs. I am not\nsure that's really adding any clarity. On the other hand I would be a\nbig fan of renaming the structure members in some systematic way. This\nkind of thing doesn't sit well with me:\n\n- NullableDatum *value; /* value to return */\n+ RelNullableDatum value; /* value to return */\n\nWell, if NullableDatum was the value to return, then RelNullableDatum\nisn't. It's some kind of thing that lets you find the value to return.\nActually that's not really right either, because before 'value' was a\npointer to the value to return and the corresponding isnull flag, and\nnow it is a way of finding that stuff. I don't know exactly what to do\nhere to keep the comment comprehensible and not unreasonably long, but\nI don't think not changing at it all is the thing. Nor do I think just\nhaving it be called 'value' when it's clearly not the value, nor even\na pointer to the value, is as clear as I would like to be.\n\nI wonder if ExprBuilderAllocBool ought to be using sizeof(bool) rather\nthan sizeof(NullableDatum).\n\nIs it true that allocno is only used for, err, some kind of LLVM\nthing, and not in the regular interpreted path? 
As far as I can see,\noutside of the LLVM code, we only ever test whether it's 0, and don't\nactually care about the specific value.\n\nI hope that the fact that this patch reverses the order of the first\ntwo arguments to ExecInitExprRec is only something you did to make it\nso that the compiler would find places you needed to update. Because\notherwise it makes no sense to introduce a new thing called an\nExprStateBuilder in 0017, make it an argument to that function, and\nthen turn around and change the signature again in 0026. Anyway, a\nfinal patch shouldn't include this kind of churn.\n\n+ offsetof(ExprState, steps) * esb->steps_len * sizeof(ExprEvalStep) +\n+ state->mutable_off = offsetof(ExprState, steps) * esb->steps_len *\nsizeof(ExprEvalStep);\n\nWell, either I'm confused here, or the first * should be a + in each\ncase. I wonder how this works at all.\n\n+ /* copy in step data */\n+ {\n+ ListCell *lc;\n+ int off = 0;\n+\n+ foreach(lc, esb->steps)\n+ {\n+ memcpy(&state->steps[off], lfirst(lc), sizeof(ExprEvalStep));\n+ off++;\n+ }\n+ }\n\nThis seems incredibly pointless to me. Why use a List in the first\nplace if we're going to have to flatten it using this kind of code?\n\nI think stuff like RelFCIOff() and RelFCIIdx() and RelArrayIdx() is\njust pretty much incomprehensible. Now, the executor is full of\nbadly-named stuff already -- ExecInitExprRec being a fine example of a\nname nobody is going to understand on first reading, or maybe ever --\nbut we ought to try not to make things worse. I also do understand\nthat anything with relative pointers is bound to involve a bunch of\ncrappy notation that we're just going to have to accept as the price\nof doing business. But it would help to pick names that are not so\nheavily abbreviated. Like, if RelFCIIdx() were called\nfind_function_argument_in_relative_fcinfo() or even\nget_fnarg_from_relfcinfo() the casual reader might have a chance of\nguessing what it does. 
Sure, the code might be longer, but if you can\ntell what it does without cross-referencing, it's still better.\n\nI would welcome changes that make it clearer which things happen just\nonce and which things happen at execution time; that said, it seems\nlike RELPTR_RESOLVE() happens at execution time, and it surprises me a\nbit that this is OK from a performance perspective. The pointers can\nchange from execution to execution, but not within an individual\nexecution, or so I think. So it doesn't need to be resolved every\ntime, if somehow that can be avoided. But maybe CPUs are sufficiently\nwell-optimized for computing a pointer address as a+b*c that it does\nnot matter.\n\nI'm not sure how helpful any of these comments are, but those are my\ninitial thoughts.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Nov 2021 12:30:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nI pushed a rebased (ugh, that was painful) version of the patches to\nhttps://github.com/anarazel/postgres/tree/jit-relative-offsets\n\nBesides rebasing I dropped a few patches and did some *minor* cleanup. Besides\nthat there's one substantial improvement, namely that I got rid of one more\nabsolute pointer reference (in the aggregate steps).\n\nThe main sources for pointers that remain are FunctionCallInfo->{flinfo,\ncontext}. There's also WindowFuncExprState->wfuncno (which isn't yet known at\n\"expression compile time\"), but that's not too hard to solve differently.\n\n\nOn 2021-11-04 12:30:00 -0400, Robert Haas wrote:\n> As a general comment, I think the patches could really benefit from\n> more meaningful commit messages and more comments on individual\n> functions. It would definitely help me review, and it might help other\n> people review, or modify the code later.\n\nSure. 
I was mostly exploring what would be necessary to change expression\nevaluation so that there are no absolute pointers in it. I still haven't figured\nout all the necessary bits.\n\n\n> For example, I'm looking at ExprEvalStep. If the intent here is that we\n> don't want the union members to point to data that might differ from one\n> execution of the plan to the next, it's surely important to mention that and\n> explain to people who are trying to add steps later what they should do\n> instead. But I'm also not entirely sure that's the intended rule. It kind\n> of surprises me that the only things that we'd be pointing to here that\n> would fall into that category would be a bool, a NullableDatum, a\n> NullableDatum array, and a FunctionCallInfo ... but I've been surprised by a\n> lot of things that turned out to be true.\n\nThe immediate goal is to be able to generate JITed code/LLVM-IR that doesn't\ncontain any absolute pointer values. If the generated code doesn't change\nregardless of any of the other contents of ExprEvalStep, we can still cache\nthe JIT optimization / code emission steps - which are the expensive bits.\n\nWith the exception of what I listed at the top, the types that you listed\nreally are what's needed to avoid such pointer constants. There are more\ncontents in the steps, but either they are constants (and thus just can be\nembedded into the generated code), the expression step is just passed to\nExecEval*, or the data can just be loaded from the ExprStep at runtime\n(although that makes the generated code slower).\n\n\nThere's a \"more advanced\" version of this where we can avoid recreating\nExprStates for e.g. prepared statements. Then we'd need to make a bit more of\nthe data use relative pointers. But that's likely a bit further off. 
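A minimal standalone caricature of that split (all names invented): the immutable step data holds only offsets, so a single piece of generated code can run against a separate mutable area per (possibly recursive or concurrent) execution:

```c
#include <assert.h>
#include <stddef.h>

/* Immutable, shareable per-expression data: offsets only. */
typedef struct AddStep
{
	size_t		arg0_off;
	size_t		arg1_off;
	size_t		result_off;
} AddStep;

/* Stand-in for the generated code: touches memory only via base+offset. */
static void
eval_add(const AddStep *step, char *base)
{
	int			a = *(int *) (base + step->arg0_off);
	int			b = *(int *) (base + step->arg1_off);

	*(int *) (base + step->result_off) = a + b;
}
```

Each execution allocates its own mutable area and passes its base address; neither the step nor code compiled from it ever embeds an absolute pointer.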
A more\nmoderate version will be to just store the number of steps for expressions\ninside the expressions - for simple queries the allocation / growing / copying\nof ExprSteps is quite visible.\n\nFWIW interpreted execution does seem to win a bit from the higher density of\nmemory allocations for variable data this provides.\n\n\n> I am not a huge fan of the various Rel[Whatever] typedefs. I am not\n> sure that's really adding any clarity. On the other hand I would be a\n> big fan of renaming the structure members in some systematic way. This\n> kind of thing doesn't sit well with me:\n\nI initially had all the Rel* use the same type, and it was much more error\nprone because the compiler couldn't tell that the types are different.\n\n\n> - NullableDatum *value; /* value to return */\n> + RelNullableDatum value; /* value to return */\n> \n> Well, if NullableDatum was the value to return, then RelNullableDatum\n> isn't. It's some kind of thing that lets you find the value to return.\n\nI don't really know what you mean? It's essentially just a different type of\npointer?\n\n\n> Is it true that allocno is only used for, err, some kind of LLVM\n> thing, and not in the regular interpreted path? As far as I can see,\n> outside of the LLVM code, we only ever test whether it's 0, and don't\n> actually care about the specific value.\n\nI'd expect it to be useful for a few interpreted cases as well, but right now\nit's not.\n\n\n> I hope that the fact that this patch reverses the order of the first\n> two arguments to ExecInitExprRec is only something you did to make it\n> so that the compiler would find places you needed to update. Because\n> otherwise it makes no sense to introduce a new thing called an\n> ExprStateBuilder in 0017, make it an argument to that function, and\n> then turn around and change the signature again in 0026. 
Anyway, a\n> final patch shouldn't include this kind of churn.\n\nYes, that definitely needs to go.\n\n\n> + offsetof(ExprState, steps) * esb->steps_len * sizeof(ExprEvalStep) +\n> + state->mutable_off = offsetof(ExprState, steps) * esb->steps_len *\n> sizeof(ExprEvalStep);\n> \n> Well, either I'm confused here, or the first * should be a + in each\n> case. I wonder how this works at all.\n\nOh. yes, that doesn't look right. I assume it's just always too big, and\nthat's why it doesn't cause problems...\n\n\n> + /* copy in step data */\n> + {\n> + ListCell *lc;\n> + int off = 0;\n> +\n> + foreach(lc, esb->steps)\n> + {\n> + memcpy(&state->steps[off], lfirst(lc), sizeof(ExprEvalStep));\n> + off++;\n> + }\n> + }\n> \n> This seems incredibly pointless to me. Why use a List in the first\n> place if we're going to have to flatten it using this kind of code?\n\nWe don't know how many steps an expression is going to require. It turns out\nthat in the current code we spend a good amount of time just growing\n->steps. Using a list (even if it's an array of pointers as List now is)\nduring building makes appending fairly cheap. Building a dense array after all\nsteps have been computed keeps the execution time benefit.\n\n\n> I think stuff like RelFCIOff() and RelFCIIdx() and RelArrayIdx() is\n> just pretty much incomprehensible. Now, the executor is full of\n> badly-named stuff already -- ExecInitExprRec being a fine example of a\n> name nobody is going to understand on first reading, or maybe ever --\n> but we ought to try not to make things worse. I also do understand\n> that anything with relative pointers is bound to involve a bunch of\n> crappy notation that we're just going to have to accept as the price\n> of doing business. But it would help to pick names that are not so\n> heavily abbreviated. 
Like, if RelFCIIdx() were called\n> find_function_argument_in_relative_fcinfo() or even\n> get_fnarg_from_relfcinfo() the casual reader might have a chance of\n> guessing what it does.\n\nYea, they're crappily named. If this were C++ it'd be easy to wrap the\nrelative pointers in something that then makes them behave like normal\npointers, but ...\n\n\n> Sure, the code might be longer, but if you can tell what it does without\n> cross-referencing, it's still better.\n\nUnfortunately it's really hard to keep the code legible and keep pgindent\nhappy with long names :(. But I'm sure that we can do better than these.\n\n\n> I would welcome changes that make it clearer which things happen just\n> once and which things happen at execution time; that said, it seems\n> like RELPTR_RESOLVE() happens at execution time, and it surprises me a\n> bit that this is OK from a performance perspective.\n\nIt's actually fairly cheap, at least on x86, because every relative pointer\ndereference is just an offset from one base pointer. That base address can be\nkept in a register. In some initial benchmarking the gains from the higher\nallocation density of the variable data are bigger than potential losses.\n\n\n> The pointers can change from execution to execution, but not within an\n> individual execution, or so I think. So it doesn't need to be resolved every\n> time, if somehow that can be avoided. But maybe CPUs are sufficiently\n> well-optimized for computing a pointer address as a+b*c that it does not\n> matter.\n\nIt should just be a + b, right? Well, for arrays it's more complicated, but\nit's also more complicated for \"normal arrays\".\n\n\n> I'm not sure how helpful any of these comments are, but those are my\n> initial thoughts.\n\nIt's helpful.\n\n\nThe biggest issue I see with getting to the point of actually caching JITed\ncode is the ->flinfo, ->context thing mentioned above. 
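To make the "just a + b" point concrete, here is a sketch of the resolution step (illustrative only, not the patch's actual RELPTR_RESOLVE() machinery):

```c
#include <assert.h>
#include <stdint.h>

/* A relative pointer is a byte offset from a per-execution base. */
typedef struct RelPtr
{
	uint32_t	off;
} RelPtr;

/*
 * Resolving is base + off: one addition against a base address that the
 * compiler can keep in a register for the whole expression.
 */
static inline void *
relptr_resolve(char *base, RelPtr rp)
{
	return base + rp.off;
}
```

The same stored offset resolves to a different absolute address under a different base, which is what makes the data area relocatable.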
The best thing I\ncan come up with is moving the allocation of those into the ExprState as well,\nbut my gut says there must be a better approach that I'm not quite seeing.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Nov 2021 16:47:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "On Thu, Nov 4, 2021 at 7:47 PM Andres Freund <andres@anarazel.de> wrote:\n> The immediate goal is to be able to generate JITed code/LLVM-IR that doesn't\n> contain any absolute pointer values. If the generated code doesn't change\n> regardless of any of the other contents of ExprEvalStep, we can still cache\n> the JIT optimization / code emission steps - which are the expensive bits.\n\nI'm not sure why that requires all of this relative pointer stuff,\nhonestly. Under that problem statement, we don't need everything to be\none contiguous allocation. We just need it to have the same lifespan\nas the JITted code. If you introduced no relative pointers at all,\nyou could still solve this problem: create a new memory context that\ncontains all of the EvalExprSteps and all of the allocations upon\nwhich they depend, make sure everything you care about is allocated in\nthat context, and don't destroy any of it until you destroy it all. Or\nanother option would be: instead of having one giant allocation in\nwhich we have to place data of every different type, have one\nallocation per kind of thing. Figure out how many FunctionCallInfo\nobjects we need and make an array of them. Figure out how many\nNullableDatum objects we need and make a separate array of those. And\nso on. Then just use pointers.\n\nI think that part of your motivation here is unrelated to caching the JIT\nresults: you also want to improve performance by increasing memory\nlocality. 
That's a good goal as far as it goes, but maybe there's a\nway to be a little less ambitious and still get most of the benefit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 08:34:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2021-11-05 08:34:26 -0400, Robert Haas wrote:\n> I'm not sure why that requires all of this relative pointer stuff,\n> honestly. Under that problem statement, we don't need everything to be\n> one contiguous allocation. We just need it to have the same lifespan\n> as the JITted code. If you introduced no relative pointers at all,\n> you could still solve this problem: create a new memory context that\n> contains all of the EvalExprSteps and all of the allocations upon\n> which they depend, make sure everything you care about is allocated in\n> that context, and don't destroy any of it until you destroy it all.\n\nI don't see how that works - the same expression can be evaluated multiple\ntimes at once, recursively. So you can't have things like FunctionCallInfoData\nshared. One key point of separating out the mutable data into something that\ncan be relocated is precisely so that every execution can have its own\n\"mutable\" data area, without needing to change anything else.\n\n\n> Or another option would be: instead of having one giant allocation in which\n> we have to place data of every different type, have one allocation per kind\n> of thing. Figure out how many FunctionCallInfo objects we need and make an\n> array of them. Figure out how many NullableDatum objects we need and make a\n> separate array of those. And so on. Then just use pointers.\n\nWithout the relative pointer thing you'd still have pointers into those arrays\nof objects. 
Which then would make the thing non-shareable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Nov 2021 09:48:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "On Fri, Nov 5, 2021 at 12:48 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't see how that works - the same expression can be evaluated multiple\n> times at once, recursively. So you can't have things like FunctionCallInfoData\n> shared. One key point of separating out the mutable data into something that\n> can be relocated is precisely so that every execution can have its own\n> \"mutable\" data area, without needing to change anything else.\n\nOh. That makes it harder.\n\n> > Or another option would be: instead of having one giant allocation in which\n> > we have to place data of every different type, have one allocation per kind\n> > of thing. Figure out how many FunctionCallInfo objects we need and make an\n> > array of them. Figure out how many NullableDatum objects we need and make a\n> > separate array of those. And so on. Then just use pointers.\n>\n> Without the relative pointer thing you'd still have pointers into those arrays\n> of objects. Which then would make the thing non-shareable.\n\nWell, I guess you could store indexes into the individual arrays, but\nthen I guess you're not gaining much of anything.\n\nIt's a pretty annoying problem, really. 
Somehow it's hard to shake the\nfeeling that there ought to be a better approach than relative\npointers.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 13:09:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2021-11-05 13:09:10 -0400, Robert Haas wrote:\n> On Fri, Nov 5, 2021 at 12:48 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't see how that works - the same expression can be evaluated multiple\n> > times at once, recursively. So you can't have things like FunctionCallInfoData\n> > shared. One key point of separating out the mutable data into something that\n> > can be relocated is precisely so that every execution can have its own\n> > \"mutable\" data area, without needing to change anything else.\n> \n> Oh. That makes it harder.\n\nYes. Optimally we'd do JIT caching across connections as well. One of the\nbiggest issues with the costs of JITing is actually parallel query, where\nwe'll often recreate the same JIT code again and again. For that you really\ncan't have much in the way of pointers...\n\n\n> > > Or another option would be: instead of having one giant allocation in which\n> > > we have to place data of every different type, have one allocation per kind\n> > > of thing. Figure out how many FunctionCallInfo objects we need and make an\n> > > array of them. Figure out how many NullableDatum objects we need and make a\n> > > separate array of those. And so on. Then just use pointers.\n> >\n> > Without the relative pointer thing you'd still have pointers into those arrays\n> > of objects. 
Which then would make the thing non-shareable.\n> \n> Well, I guess you could store indexes into the individual arrays, but\n> then I guess you're not gaining much of anything.\n\nYou'd most likely just lose a bit of locality, because the different types of\ndata are now all on separate cachelines, even if referenced by the one\nexpression step.\n\n\n> It's a pretty annoying problem, really. Somehow it's hard to shake the\n> feeling that there ought to be a better approach than relative\n> pointers.\n\nYes. I don't like it much either :(. Basically native code has the same issue,\nand also largely ended up with making most things relative (see x86-64 which\ndoes most addressing relative to the instruction pointer, and binaries\npre-relocation, where the addresses aren't resolved yet).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Nov 2021 10:20:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2021-11-05 09:48:16 -0700, Andres Freund wrote:\n> On 2021-11-05 08:34:26 -0400, Robert Haas wrote:\n> > I'm not sure why that requires all of this relative pointer stuff,\n> > honestly. Under that problem statement, we don't need everything to be\n> > one contiguous allocation. We just need it to have the same lifespan\n> > as the JITted code. If you introduced no relative pointers at all,\n> > you could still solve this problem: create a new memory context that\n> > contains all of the EvalExprSteps and all of the allocations upon\n> > which they depend, make sure everything you care about is allocated in\n> > that context, and don't destroy any of it until you destroy it all.\n> \n> I don't see how that works - the same expression can be evaluated multiple\n> times at once, recursively. So you can't have things like FunctionCallInfoData\n> shared. 
One key point of separating out the mutable data into something that\n> can be relocated is precisely so that every execution can have its own\n> \"mutable\" data area, without needing to change anything else.\n\nOh, and the other bit is that the absolute addresses make it much harder to\ngenerate efficient code. If I remove the code setting\nFunctionCallInfo->{context,flinfo} to the constant pointers (obviously\nincorrect, but works for functions not using either), e.g. TPCH-Q1 gets about\n20% faster.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Nov 2021 10:27:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "On Fri, Nov 5, 2021 at 1:20 PM Andres Freund <andres@anarazel.de> wrote:\n> Yes. Optimally we'd do JIT caching across connections as well. One of the\n> biggest issues with the costs of JITing is actually parallel query, where\n> we'll often recreate the same JIT code again and again. For that you really\n> can't have much in the way of pointers...\n\nWell that much is clear, and parallel query also needs relative\npointers in some places for other reasons, which reminds me to ask you\nwhether these new relative pointers can't reuse \"utils/relptr.h\"\ninstead of inventing another way of doing it. And if not maybe we should\ntry to first change relptr.h and the one existing client\n(freepage.c/h) to something better and then use that in both places,\nbecause if we're going to be stuck with relative pointers all over\nthe place it would at least be nice not to have too many different\nkinds.\n\n> > It's a pretty annoying problem, really. Somehow it's hard to shake the\n> > feeling that there ought to be a better approach than relative\n> > pointers.\n>\n> Yes. I don't like it much either :(. 
Basically native code has the same issue,\n> and also largely ended up with making most things relative (see x86-64 which\n> does most addressing relative to the instruction pointer, and binaries\n> pre-relocation, where the addresses aren't resolved yet).\n\nYes, but the good thing about those cases is that they're handled by\nthe toolchain. What's irritating about this case is that we're using a\njust-in-time compiler, and yet somehow it feels like the job that\nought to be done by the compiler is having to be done by our code, and\nthe result is a lot of extra notation. I don't know what the\nalternative is -- if you don't tell the compiler which things it's\nsupposed to assume are constant and which things might vary from\nexecution to execution, it can't know. But it feels a little weird\nthat there isn't some better way to give it that information.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 14:13:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: expression evaluation improvements" }, { "msg_contents": "Hi,\n\nOn 2021-11-05 14:13:38 -0400, Robert Haas wrote:\n> On Fri, Nov 5, 2021 at 1:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yes. Optimally we'd do JIT caching across connections as well. One of the\n> > biggest issues with the costs of JITing is actually parallel query, where\n> > we'll often recreate the same JIT code again and again. For that you really\n> > can't have much in the way of pointers...\n> \n> Well that much is clear, and parallel query also needs relative\n> pointers in some places for other reasons, which reminds me to ask you\n> whether these new relative pointers can't reuse \"utils/relptr.h\"\n> instead of inventing another way of doing it. 
And if not maybe we should\n> try to first change relptr.h and the one existing client\n> (freepage.c/h) to something better and then use that in both places,\n> because if we're going to be stuck with relative pointers all over\n> the place it would at least be nice not to have too many different\n> kinds.\n\nHm. Yea, that's a fair point. Right now the \"allocno\" bit would be a\nproblem. Perhaps we can get around that somehow. We could search for\nallocations by the offset, I guess.\n\n\n> > > It's a pretty annoying problem, really. Somehow it's hard to shake the\n> > > feeling that there ought to be a better approach than relative\n> > > pointers.\n> >\n> > Yes. I don't like it much either :(. Basically native code has the same issue,\n> > and also largely ended up with making most things relative (see x86-64 which\n> > does most addressing relative to the instruction pointer, and binaries\n> > pre-relocation, where the addresses aren't resolved yet).\n> \n> Yes, but the good thing about those cases is that they're handled by\n> the toolchain. What's irritating about this case is that we're using a\n> just-in-time compiler, and yet somehow it feels like the job that\n> ought to be done by the compiler is having to be done by our code, and\n> the result is a lot of extra notation. I don't know what the\n> alternative is -- if you don't tell the compiler which things it's\n> supposed to assume are constant and which things might vary from\n> execution to execution, it can't know. But it feels a little weird\n> that there isn't some better way to give it that information.\n\nYes, I feel like there must be something better too. But in the end, I think\nwe want something like this for the non-JIT path too, so that we can avoid the\nexpensive re-creation of expressions for every query execution. 
Which does make\nreferencing at least the mutable data only by offset fairly attractive, imo.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Nov 2021 16:01:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: WIP: expression evaluation improvements" } ]
[ { "msg_contents": "Hi,\n\nI tried to install PostgreSQL 12 with the \"Norwegian Bokmål, Norway\" locale in hope that it would, among other things, provide proper support for Norwegian characters out-of-the-box.\n\nBut initcluster.vbs appear to fail during post-install because the locale name contains a Norwegian character that is being mishandled (full log in attached zip file):\n\ninitdb: error: invalid locale name \"NorwegianBokm†l,Norway\"\n\nCalled Die(Failed to initialise the database cluster with initdb)...\nFailed to initialise the database cluster with initdb\n\nScript stderr:\nProgram ended with an error exit code\n\nError running cscript //NoLogo \"C:\\Program Files\\PostgreSQL\\12/installer/server/initcluster.vbs\" \"NT AUTHORITY\\NetworkService\" \"postgres\" \"****\" \"C:\\temp/postgresql_installer_c24b846fc9\" \"C:\\Program Files\\PostgreSQL\\12\" \"C:\\Program Files\\PostgreSQL\\12\\data\" 5432 \"NorwegianBokmål,Norway\" 0: Program ended with an error exit code\nProblem running post-install step. Installation may not complete correctly\nThe database cluster initialisation failed.\nExecuting icacls \"C:\\temp/postgresql_installer_baa40bb6af\" /inheritance:r\nScript exit code: 0\n\nThe letter \"å\" has been turned into a \"†\" (cross).\n\nI tried to uninstall, and reinstall with the \"Norwegian Bokmål, Norway\" locale once more, but with the same error. In the end, I managed to reinstall PostgreSQL 12 without any error by selecting the \"Default\" locale.\n\nBest regards,\nSkjalg", "msg_date": "Thu, 24 Oct 2019 11:06:01 +0000", "msg_from": "\"Skjalg A. 
Skagen\" <skjalg.skagen@pm.me>", "msg_from_op": true, "msg_subject": "PostgreSQL 12 installation fails because locale name contained\n non-english characters" }, { "msg_contents": "\nThis has been fixed with the this patch:\n\n\thttps://www.postgresql.org/message-id/E1iMcHC-0007Ci-7G@gemulon.postgresql.org\n\nand will be in the next minor release is due on November 14:\n\n\thttps://www.postgresql.org/developer/roadmap/\n\n---------------------------------------------------------------------------\n\nOn Thu, Oct 24, 2019 at 11:06:01AM +0000, Skjalg A. Skagen wrote:\n> Hi,\n> \n> I tried to install PostgreSQL 12 with the \"Norwegian Bokmål, Norway\" locale in\n> hope that it would, among other things, provide proper support for Norwegian\n> characters out-of-the-box.\n> \n> But initcluster.vbs appear to fail during post-install because the locale name\n> contains a Norwegian character that is being mishandled (full log in attached\n> zip file):\n> \n> initdb: error: invalid locale name \"NorwegianBokm†l,Norway\"\n> \n> Called Die(Failed to initialise the database cluster with initdb)...\n> Failed to initialise the database cluster with initdb\n> \n> Script stderr:\n> Program ended with an error exit code\n> \n> Error running cscript //NoLogo \"C:\\Program Files\\PostgreSQL\\12/installer/server\n> /initcluster.vbs\" \"NT AUTHORITY\\NetworkService\" \"postgres\" \"****\" \"C:\\temp/\n> postgresql_installer_c24b846fc9\" \"C:\\Program Files\\PostgreSQL\\12\" \"C:\\Program\n> Files\\PostgreSQL\\12\\data\" 5432 \"NorwegianBokmål,Norway\" 0: Program ended with\n> an error exit code\n> Problem running post-install step. 
Installation may not complete correctly\n> The database cluster initialisation failed.\n> Executing icacls \"C:\\temp/postgresql_installer_baa40bb6af\" /inheritance:r\n> Script exit code: 0\n> \n> \n> \n> \n> The letter \"å\" has been turned into a \"†\" (cross).\n> \n> I tried to uninstall, and reinstall with the \"Norwegian Bokmål, Norway\" locale\n> once more, but with the same error. In the end, I managed to reinstall\n> PostgreSQL 12 without any error by selecting the \"Default\" locale.\n> \n> Best regards,\n> Skjalg\n\n> Log started 10/21/2019 at 10:18:01\n> Preferred installation mode : qt\n> Trying to init installer in mode qt\n> Mode qt successfully initialized\n> Executing icacls \"C:\\temp/postgresql_installer_cb63a513f2\" /inheritance:r\n> Script exit code: 0\n> \n> Script output:\n> processed file: C:\\temp/postgresql_installer_cb63a513f2\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Executing icacls \"C:\\temp/postgresql_installer_cb63a513f2\" /T /Q /grant \"CENSORED\\censored:(OI)(CI)F\"\n> Script exit code: 0\n> \n> Script output:\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Executing cscript //NoLogo \"C:\\temp\\postgresql_installer_cb63a513f2\\prerun_checks.vbs\"\n> Script exit code: 0\n> \n> Script output:\n> The scripting host appears to be functional.\n> \n> Script stderr:\n> \n> \n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Base Directory. Setting variable iBaseDirectory to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Branding. Setting variable iBranding to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Version. 
Setting variable brandingVer to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Shortcuts. Setting variable iShortcut to empty value\n> [10:18:06] Using branding: PostgreSQL 12\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 SB_Version. Setting variable sb_version to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 pgAdmin_Version. Setting variable pgadmin_version to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 CLT_Version. Setting variable clt_version to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Data Directory. Setting variable server_data_dir to empty value\n> Executing C:\\temp/postgresql_installer_cb63a513f2/temp_check_comspec.bat \n> Script exit code: 0\n> \n> Script output:\n> \"test ok\"\n> \n> Script stderr:\n> \n> \n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Data Directory. Setting variable iDataDirectory to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Base Directory. Setting variable iBaseDirectory to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Service ID. Setting variable iServiceName to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Service Account. Setting variable iServiceAccount to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Super User. 
Setting variable iSuperuser to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Branding. Setting variable iBranding to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Version. Setting variable brandingVer to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 Shortcuts. Setting variable iShortcut to empty value\n> Could not find registry key HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-x64-12 DisableStackBuilder. Setting variable iDisableStackBuilder to empty value\n> [10:18:07] Existing base directory: \n> [10:18:07] Existing data directory: \n> [10:18:07] Using branding: PostgreSQL 12\n> [10:18:07] Using Super User: postgres and Service Account: NT AUTHORITY\\NetworkService\n> [10:18:07] Using Service Name: postgresql-x64-12\n> Executing C:\\temp\\postgresql_installer_cb63a513f2\\getlocales.exe \n> Script exit code: 0\n> \n> Script output:\n> AfrikaansxxCOMMAxxxxSPxxSouthxxSPxxAfrica=Afrikaans, South Africa\n> AlbanianxxCOMMAxxxxSPxxAlbania=Albanian, Albania\n> AlsatianxxCOMMAxxxxSPxxFrance=Alsatian, France\n> AmharicxxCOMMAxxxxSPxxEthiopia=Amharic, Ethiopia\n> ArabicxxCOMMAxxxxSPxxAlgeria=Arabic, Algeria\n> ArabicxxCOMMAxxxxSPxxBahrain=Arabic, Bahrain\n> ArabicxxCOMMAxxxxSPxxEgypt=Arabic, Egypt\n> ArabicxxCOMMAxxxxSPxxIraq=Arabic, Iraq\n> ArabicxxCOMMAxxxxSPxxJordan=Arabic, Jordan\n> ArabicxxCOMMAxxxxSPxxKuwait=Arabic, Kuwait\n> ArabicxxCOMMAxxxxSPxxLebanon=Arabic, Lebanon\n> ArabicxxCOMMAxxxxSPxxLibya=Arabic, Libya\n> ArabicxxCOMMAxxxxSPxxMorocco=Arabic, Morocco\n> ArabicxxCOMMAxxxxSPxxOman=Arabic, Oman\n> ArabicxxCOMMAxxxxSPxxQatar=Arabic, Qatar\n> ArabicxxCOMMAxxxxSPxxSaudixxSPxxArabia=Arabic, Saudi Arabia\n> ArabicxxCOMMAxxxxSPxxSyria=Arabic, Syria\n> ArabicxxCOMMAxxxxSPxxTunisia=Arabic, Tunisia\n> 
ArabicxxCOMMAxxxxSPxxUnitedxxSPxxArabxxSPxxEmirates=Arabic, United Arab Emirates\n> ArabicxxCOMMAxxxxSPxxYemen=Arabic, Yemen\n> ArmenianxxCOMMAxxxxSPxxArmenia=Armenian, Armenia\n> AssamesexxCOMMAxxxxSPxxIndia=Assamese, India\n> AzerbaijanixxSPxxxxOBxxCyrillicxxCBxxxxCOMMAxxxxSPxxAzerbaijan=Azerbaijani (Cyrillic), Azerbaijan\n> AzerbaijanixxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxAzerbaijan=Azerbaijani (Latin), Azerbaijan\n> BanglaxxCOMMAxxxxSPxxBangladesh=Bangla, Bangladesh\n> BanglaxxCOMMAxxxxSPxxIndia=Bangla, India\n> BashkirxxCOMMAxxxxSPxxRussia=Bashkir, Russia\n> BasquexxCOMMAxxxxSPxxSpain=Basque, Spain\n> BelarusianxxCOMMAxxxxSPxxBelarus=Belarusian, Belarus\n> BosnianxxSPxxxxOBxxCyrillicxxCBxxxxCOMMAxxxxSPxxBosniaxxSPxxandxxSPxxHerzegovina=Bosnian (Cyrillic), Bosnia and Herzegovina\n> BosnianxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxBosniaxxSPxxandxxSPxxHerzegovina=Bosnian (Latin), Bosnia and Herzegovina\n> BretonxxCOMMAxxxxSPxxFrance=Breton, France\n> BulgarianxxCOMMAxxxxSPxxBulgaria=Bulgarian, Bulgaria\n> BurmesexxCOMMAxxxxSPxxMyanmar=Burmese, Myanmar\n> CatalanxxCOMMAxxxxSPxxSpain=Catalan, Spain\n> CentralxxSPxxAtlasxxSPxxTamazightxxSPxxxxOBxxArabicxxCBxxxxCOMMAxxxxSPxxMorocco=Central Atlas Tamazight (Arabic), Morocco\n> CentralxxSPxxAtlasxxSPxxTamazightxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxAlgeria=Central Atlas Tamazight (Latin), Algeria\n> CentralxxSPxxAtlasxxSPxxTamazightxxSPxxxxOBxxTifinaghxxCBxxxxCOMMAxxxxSPxxMorocco=Central Atlas Tamazight (Tifinagh), Morocco\n> CentralxxSPxxKurdishxxCOMMAxxxxSPxxIraq=Central Kurdish, Iraq\n> CherokeexxCOMMAxxxxSPxxUnitedxxSPxxStates=Cherokee, United States\n> ChinesexxSPxxxxOBxxSimplifiedxxCBxxxxCOMMAxxxxSPxxChina=Chinese (Simplified), China\n> ChinesexxSPxxxxOBxxSimplifiedxxCBxxxxCOMMAxxxxSPxxSingapore=Chinese (Simplified), Singapore\n> ChinesexxSPxxxxOBxxTraditionalxxCBxxxxCOMMAxxxxSPxxHongxxSPxxKongxxSPxxSAR=Chinese (Traditional), Hong Kong SAR\n> ChinesexxSPxxxxOBxxTraditionalxxCBxxxxCOMMAxxxxSPxxMacaoxxSPxxSAR=Chinese 
(Traditional), Macao SAR\n> ChinesexxSPxxxxOBxxTraditionalxxCBxxxxCOMMAxxxxSPxxTaiwan=Chinese (Traditional), Taiwan\n> CorsicanxxCOMMAxxxxSPxxFrance=Corsican, France\n> CroatianxxCOMMAxxxxSPxxBosniaxxSPxxandxxSPxxHerzegovina=Croatian, Bosnia and Herzegovina\n> CroatianxxCOMMAxxxxSPxxCroatia=Croatian, Croatia\n> CzechxxCOMMAxxxxSPxxCzechxxSPxxRepublic=Czech, Czech Republic\n> DanishxxCOMMAxxxxSPxxDenmark=Danish, Denmark\n> DarixxCOMMAxxxxSPxxAfghanistan=Dari, Afghanistan\n> DivehixxCOMMAxxxxSPxxMaldives=Divehi, Maldives\n> DutchxxCOMMAxxxxSPxxBelgium=Dutch, Belgium\n> DutchxxCOMMAxxxxSPxxNetherlands=Dutch, Netherlands\n> DzongkhaxxCOMMAxxxxSPxxBhutan=Dzongkha, Bhutan\n> EdoxxCOMMAxxxxSPxxNigeria=Edo, Nigeria\n> EnglishxxCOMMAxxxxSPxxAustralia=English, Australia\n> EnglishxxCOMMAxxxxSPxxBelize=English, Belize\n> EnglishxxCOMMAxxxxSPxxCanada=English, Canada\n> EnglishxxCOMMAxxxxSPxxCaribbean=English, Caribbean\n> EnglishxxCOMMAxxxxSPxxHongxxSPxxKongxxSPxxSAR=English, Hong Kong SAR\n> EnglishxxCOMMAxxxxSPxxIndia=English, India\n> EnglishxxCOMMAxxxxSPxxIndonesia=English, Indonesia\n> EnglishxxCOMMAxxxxSPxxIreland=English, Ireland\n> EnglishxxCOMMAxxxxSPxxJamaica=English, Jamaica\n> EnglishxxCOMMAxxxxSPxxMalaysia=English, Malaysia\n> EnglishxxCOMMAxxxxSPxxNewxxSPxxZealand=English, New Zealand\n> EnglishxxCOMMAxxxxSPxxPhilippines=English, Philippines\n> EnglishxxCOMMAxxxxSPxxSingapore=English, Singapore\n> EnglishxxCOMMAxxxxSPxxSouthxxSPxxAfrica=English, South Africa\n> EnglishxxCOMMAxxxxSPxxTrinidadxxSPxxandxxSPxxTobago=English, Trinidad and Tobago\n> EnglishxxCOMMAxxxxSPxxUnitedxxSPxxKingdom=English, United Kingdom\n> EnglishxxCOMMAxxxxSPxxUnitedxxSPxxStates=English, United States\n> EnglishxxCOMMAxxxxSPxxZimbabwe=English, Zimbabwe\n> EstonianxxCOMMAxxxxSPxxEstonia=Estonian, Estonia\n> FaroesexxCOMMAxxxxSPxxFaroexxSPxxIslands=Faroese, Faroe Islands\n> FilipinoxxCOMMAxxxxSPxxPhilippines=Filipino, Philippines\n> FinnishxxCOMMAxxxxSPxxFinland=Finnish, Finland\n> 
FrenchxxCOMMAxxxxSPxxBelgium=French, Belgium\n> FrenchxxCOMMAxxxxSPxxCameroon=French, Cameroon\n> FrenchxxCOMMAxxxxSPxxCanada=French, Canada\n> FrenchxxCOMMAxxxxSPxxCaribbean=French, Caribbean\n> FrenchxxCOMMAxxxxSPxxCongoxxSPxxxxOBxxDRCxxCBxx=French, Congo (DRC)\n> FrenchxxCOMMAxxxxSPxxCôtexxSPxxd’Ivoire=French, Côte d’Ivoire\n> FrenchxxCOMMAxxxxSPxxFrance=French, France\n> FrenchxxCOMMAxxxxSPxxHaiti=French, Haiti\n> FrenchxxCOMMAxxxxSPxxLuxembourg=French, Luxembourg\n> FrenchxxCOMMAxxxxSPxxMali=French, Mali\n> FrenchxxCOMMAxxxxSPxxMonaco=French, Monaco\n> FrenchxxCOMMAxxxxSPxxMorocco=French, Morocco\n> FrenchxxCOMMAxxxxSPxxRéunion=French, Réunion\n> FrenchxxCOMMAxxxxSPxxSenegal=French, Senegal\n> FrenchxxCOMMAxxxxSPxxSwitzerland=French, Switzerland\n> FulahxxCOMMAxxxxSPxxNigeria=Fulah, Nigeria\n> FulahxxCOMMAxxxxSPxxSenegal=Fulah, Senegal\n> GalicianxxCOMMAxxxxSPxxSpain=Galician, Spain\n> GeorgianxxCOMMAxxxxSPxxGeorgia=Georgian, Georgia\n> GermanxxCOMMAxxxxSPxxAustria=German, Austria\n> GermanxxCOMMAxxxxSPxxGermany=German, Germany\n> GermanxxCOMMAxxxxSPxxLiechtenstein=German, Liechtenstein\n> GermanxxCOMMAxxxxSPxxLuxembourg=German, Luxembourg\n> GermanxxCOMMAxxxxSPxxSwitzerland=German, Switzerland\n> GreekxxCOMMAxxxxSPxxGreece=Greek, Greece\n> GreenlandicxxCOMMAxxxxSPxxGreenland=Greenlandic, Greenland\n> GuaranixxCOMMAxxxxSPxxParaguay=Guarani, Paraguay\n> GujaratixxCOMMAxxxxSPxxIndia=Gujarati, India\n> HausaxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxNigeria=Hausa (Latin), Nigeria\n> HawaiianxxCOMMAxxxxSPxxUnitedxxSPxxStates=Hawaiian, United States\n> HebrewxxCOMMAxxxxSPxxIsrael=Hebrew, Israel\n> HindixxCOMMAxxxxSPxxIndia=Hindi, India\n> HungarianxxCOMMAxxxxSPxxHungary=Hungarian, Hungary\n> IbibioxxCOMMAxxxxSPxxNigeria=Ibibio, Nigeria\n> IcelandicxxCOMMAxxxxSPxxIceland=Icelandic, Iceland\n> IgboxxCOMMAxxxxSPxxNigeria=Igbo, Nigeria\n> IndonesianxxCOMMAxxxxSPxxIndonesia=Indonesian, Indonesia\n> InuktitutxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxCanada=Inuktitut (Latin), 
Canada\n> InuktitutxxSPxxxxOBxxSyllabicsxxCBxxxxCOMMAxxxxSPxxCanada=Inuktitut (Syllabics), Canada\n> IrishxxCOMMAxxxxSPxxIreland=Irish, Ireland\n> ItalianxxCOMMAxxxxSPxxItaly=Italian, Italy\n> ItalianxxCOMMAxxxxSPxxSwitzerland=Italian, Switzerland\n> JapanesexxCOMMAxxxxSPxxJapan=Japanese, Japan\n> KannadaxxCOMMAxxxxSPxxIndia=Kannada, India\n> KanurixxCOMMAxxxxSPxxNigeria=Kanuri, Nigeria\n> KashmirixxSPxxxxOBxxDevanagarixxCBxxxxCOMMAxxxxSPxxIndia=Kashmiri (Devanagari), India\n> KazakhxxCOMMAxxxxSPxxKazakhstan=Kazakh, Kazakhstan\n> KhmerxxCOMMAxxxxSPxxCambodia=Khmer, Cambodia\n> KinyarwandaxxCOMMAxxxxSPxxRwanda=Kinyarwanda, Rwanda\n> KiswahilixxCOMMAxxxxSPxxKenya=Kiswahili, Kenya\n> KonkanixxCOMMAxxxxSPxxIndia=Konkani, India\n> KoreanxxCOMMAxxxxSPxxKorea=Korean, Korea\n> KyrgyzxxCOMMAxxxxSPxxKyrgyzstan=Kyrgyz, Kyrgyzstan\n> LaoxxCOMMAxxxxSPxxLaos=Lao, Laos\n> LatinxxCOMMAxxxxSPxxWorld=Latin, World\n> LatvianxxCOMMAxxxxSPxxLatvia=Latvian, Latvia\n> LithuanianxxCOMMAxxxxSPxxLithuania=Lithuanian, Lithuania\n> LowerxxSPxxSorbianxxCOMMAxxxxSPxxGermany=Lower Sorbian, Germany\n> LuxembourgishxxCOMMAxxxxSPxxLuxembourg=Luxembourgish, Luxembourg\n> MacedonianxxCOMMAxxxxSPxxMacedonia,xxSPxxFYRO=Macedonian, Macedonia, FYRO\n> MalayalamxxCOMMAxxxxSPxxIndia=Malayalam, India\n> MalayxxCOMMAxxxxSPxxBrunei=Malay, Brunei\n> MalayxxCOMMAxxxxSPxxMalaysia=Malay, Malaysia\n> MaltesexxCOMMAxxxxSPxxMalta=Maltese, Malta\n> ManipurixxCOMMAxxxxSPxxIndia=Manipuri, India\n> MaorixxCOMMAxxxxSPxxNewxxSPxxZealand=Maori, New Zealand\n> MapudungunxxCOMMAxxxxSPxxChile=Mapudungun, Chile\n> MarathixxCOMMAxxxxSPxxIndia=Marathi, India\n> MohawkxxCOMMAxxxxSPxxCanada=Mohawk, Canada\n> MongolianxxCOMMAxxxxSPxxMongolia=Mongolian, Mongolia\n> MongolianxxSPxxxxOBxxTraditionalxxSPxxMongolianxxCBxxxxCOMMAxxxxSPxxChina=Mongolian (Traditional Mongolian), China\n> MongolianxxSPxxxxOBxxTraditionalxxSPxxMongolianxxCBxxxxCOMMAxxxxSPxxMongolia=Mongolian (Traditional Mongolian), Mongolia\n> 
NepalixxCOMMAxxxxSPxxIndia=Nepali, India\n> NepalixxCOMMAxxxxSPxxNepal=Nepali, Nepal\n> NorthernxxSPxxSamixxCOMMAxxxxSPxxNorway=Northern Sami, Norway\n> NorwegianxxSPxxBokmålxxCOMMAxxxxSPxxNorway=Norwegian Bokmål, Norway\n> NorwegianxxSPxxNynorskxxCOMMAxxxxSPxxNorway=Norwegian Nynorsk, Norway\n> OccitanxxCOMMAxxxxSPxxFrance=Occitan, France\n> OdiaxxCOMMAxxxxSPxxIndia=Odia, India\n> OromoxxCOMMAxxxxSPxxEthiopia=Oromo, Ethiopia\n> PapiamentoxxCOMMAxxxxSPxxCaribbean=Papiamento, Caribbean\n> PashtoxxCOMMAxxxxSPxxAfghanistan=Pashto, Afghanistan\n> PersianxxCOMMAxxxxSPxxIran=Persian, Iran\n> PolishxxCOMMAxxxxSPxxPoland=Polish, Poland\n> PortuguesexxCOMMAxxxxSPxxBrazil=Portuguese, Brazil\n> PortuguesexxCOMMAxxxxSPxxPortugal=Portuguese, Portugal\n> PunjabixxCOMMAxxxxSPxxIndia=Punjabi, India\n> PunjabixxCOMMAxxxxSPxxPakistan=Punjabi, Pakistan\n> QuechuaxxCOMMAxxxxSPxxBolivia=Quechua, Bolivia\n> QuechuaxxCOMMAxxxxSPxxPeru=Quechua, Peru\n> QuichuaxxCOMMAxxxxSPxxEcuador=Quichua, Ecuador\n> RomanianxxCOMMAxxxxSPxxMoldova=Romanian, Moldova\n> RomanianxxCOMMAxxxxSPxxRomania=Romanian, Romania\n> RomanshxxCOMMAxxxxSPxxSwitzerland=Romansh, Switzerland\n> RussianxxCOMMAxxxxSPxxMoldova=Russian, Moldova\n> RussianxxCOMMAxxxxSPxxRussia=Russian, Russia\n> SakhaxxCOMMAxxxxSPxxRussia=Sakha, Russia\n> SamixxSPxxxxOBxxInarixxCBxxxxCOMMAxxxxSPxxFinland=Sami (Inari), Finland\n> SamixxSPxxxxOBxxLulexxCBxxxxCOMMAxxxxSPxxNorway=Sami (Lule), Norway\n> SamixxSPxxxxOBxxLulexxCBxxxxCOMMAxxxxSPxxSweden=Sami (Lule), Sweden\n> SamixxSPxxxxOBxxNorthernxxCBxxxxCOMMAxxxxSPxxFinland=Sami (Northern), Finland\n> SamixxSPxxxxOBxxNorthernxxCBxxxxCOMMAxxxxSPxxSweden=Sami (Northern), Sweden\n> SamixxSPxxxxOBxxSkoltxxCBxxxxCOMMAxxxxSPxxFinland=Sami (Skolt), Finland\n> SamixxSPxxxxOBxxSouthernxxCBxxxxCOMMAxxxxSPxxNorway=Sami (Southern), Norway\n> SamixxSPxxxxOBxxSouthernxxCBxxxxCOMMAxxxxSPxxSweden=Sami (Southern), Sweden\n> SanskritxxCOMMAxxxxSPxxIndia=Sanskrit, India\n> 
ScottishxxSPxxGaelicxxCOMMAxxxxSPxxUnitedxxSPxxKingdom=Scottish Gaelic, United Kingdom\n> SerbianxxSPxxxxOBxxCyrillicxxCBxxxxCOMMAxxxxSPxxBosniaxxSPxxandxxSPxxHerzegovina=Serbian (Cyrillic), Bosnia and Herzegovina\n> SerbianxxSPxxxxOBxxCyrillicxxCBxxxxCOMMAxxxxSPxxMontenegro=Serbian (Cyrillic), Montenegro\n> SerbianxxSPxxxxOBxxCyrillicxxCBxxxxCOMMAxxxxSPxxSerbia=Serbian (Cyrillic), Serbia\n> SerbianxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxBosniaxxSPxxandxxSPxxHerzegovina=Serbian (Latin), Bosnia and Herzegovina\n> SerbianxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxMontenegro=Serbian (Latin), Montenegro\n> SerbianxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxSerbia=Serbian (Latin), Serbia\n> SesothoxxCOMMAxxxxSPxxSouthxxSPxxAfrica=Sesotho, South Africa\n> SesothoxxSPxxsaxxSPxxLeboaxxCOMMAxxxxSPxxSouthxxSPxxAfrica=Sesotho sa Leboa, South Africa\n> SetswanaxxCOMMAxxxxSPxxBotswana=Setswana, Botswana\n> SetswanaxxCOMMAxxxxSPxxSouthxxSPxxAfrica=Setswana, South Africa\n> SindhixxCOMMAxxxxSPxxPakistan=Sindhi, Pakistan\n> SindhixxSPxxxxOBxxDevanagarixxCBxxxxCOMMAxxxxSPxxIndia=Sindhi (Devanagari), India\n> SinhalaxxCOMMAxxxxSPxxSrixxSPxxLanka=Sinhala, Sri Lanka\n> SlovakxxCOMMAxxxxSPxxSlovakia=Slovak, Slovakia\n> SlovenianxxCOMMAxxxxSPxxSlovenia=Slovenian, Slovenia\n> SomalixxCOMMAxxxxSPxxSomalia=Somali, Somalia\n> SpanishxxCOMMAxxxxSPxxArgentina=Spanish, Argentina\n> SpanishxxCOMMAxxxxSPxxBolivia=Spanish, Bolivia\n> SpanishxxCOMMAxxxxSPxxChile=Spanish, Chile\n> SpanishxxCOMMAxxxxSPxxColombia=Spanish, Colombia\n> SpanishxxCOMMAxxxxSPxxCostaxxSPxxRica=Spanish, Costa Rica\n> SpanishxxCOMMAxxxxSPxxCuba=Spanish, Cuba\n> SpanishxxCOMMAxxxxSPxxDominicanxxSPxxRepublic=Spanish, Dominican Republic\n> SpanishxxCOMMAxxxxSPxxEcuador=Spanish, Ecuador\n> SpanishxxCOMMAxxxxSPxxElxxSPxxSalvador=Spanish, El Salvador\n> SpanishxxCOMMAxxxxSPxxGuatemala=Spanish, Guatemala\n> SpanishxxCOMMAxxxxSPxxHonduras=Spanish, Honduras\n> SpanishxxCOMMAxxxxSPxxLatinxxSPxxAmerica=Spanish, Latin America\n> 
SpanishxxCOMMAxxxxSPxxMexico=Spanish, Mexico\n> SpanishxxCOMMAxxxxSPxxNicaragua=Spanish, Nicaragua\n> SpanishxxCOMMAxxxxSPxxPanama=Spanish, Panama\n> SpanishxxCOMMAxxxxSPxxParaguay=Spanish, Paraguay\n> SpanishxxCOMMAxxxxSPxxPeru=Spanish, Peru\n> SpanishxxCOMMAxxxxSPxxPuertoxxSPxxRico=Spanish, Puerto Rico\n> SpanishxxCOMMAxxxxSPxxSpain=Spanish, Spain\n> SpanishxxCOMMAxxxxSPxxSpain=Spanish, Spain\n> SpanishxxCOMMAxxxxSPxxUnitedxxSPxxStates=Spanish, United States\n> SpanishxxCOMMAxxxxSPxxUruguay=Spanish, Uruguay\n> SpanishxxCOMMAxxxxSPxxVenezuela=Spanish, Venezuela\n> SwedishxxCOMMAxxxxSPxxFinland=Swedish, Finland\n> SwedishxxCOMMAxxxxSPxxSweden=Swedish, Sweden\n> SyriacxxCOMMAxxxxSPxxSyria=Syriac, Syria\n> TajikxxSPxxxxOBxxCyrillicxxCBxxxxCOMMAxxxxSPxxTajikistan=Tajik (Cyrillic), Tajikistan\n> TamilxxCOMMAxxxxSPxxIndia=Tamil, India\n> TamilxxCOMMAxxxxSPxxSrixxSPxxLanka=Tamil, Sri Lanka\n> TatarxxCOMMAxxxxSPxxRussia=Tatar, Russia\n> TeluguxxCOMMAxxxxSPxxIndia=Telugu, India\n> ThaixxCOMMAxxxxSPxxThailand=Thai, Thailand\n> TibetanxxCOMMAxxxxSPxxChina=Tibetan, China\n> TigrinyaxxCOMMAxxxxSPxxEritrea=Tigrinya, Eritrea\n> TigrinyaxxCOMMAxxxxSPxxEthiopia=Tigrinya, Ethiopia\n> TurkishxxCOMMAxxxxSPxxTurkey=Turkish, Turkey\n> TurkmenxxCOMMAxxxxSPxxTurkmenistan=Turkmen, Turkmenistan\n> UkrainianxxCOMMAxxxxSPxxUkraine=Ukrainian, Ukraine\n> UpperxxSPxxSorbianxxCOMMAxxxxSPxxGermany=Upper Sorbian, Germany\n> UrduxxCOMMAxxxxSPxxIndia=Urdu, India\n> UrduxxCOMMAxxxxSPxxPakistan=Urdu, Pakistan\n> UyghurxxCOMMAxxxxSPxxChina=Uyghur, China\n> UzbekxxSPxxxxOBxxCyrillicxxCBxxxxCOMMAxxxxSPxxUzbekistan=Uzbek (Cyrillic), Uzbekistan\n> UzbekxxSPxxxxOBxxLatinxxCBxxxxCOMMAxxxxSPxxUzbekistan=Uzbek (Latin), Uzbekistan\n> ValencianxxCOMMAxxxxSPxxSpain=Valencian, Spain\n> VendaxxCOMMAxxxxSPxxSouthxxSPxxAfrica=Venda, South Africa\n> VietnamesexxCOMMAxxxxSPxxVietnam=Vietnamese, Vietnam\n> WelshxxCOMMAxxxxSPxxUnitedxxSPxxKingdom=Welsh, United Kingdom\n> 
> Western Frisian, Netherlands=Western Frisian, Netherlands
> Wolof, Senegal=Wolof, Senegal
> Xitsonga, South Africa=Xitsonga, South Africa
> Yiddish, World=Yiddish, World
> Yi, China=Yi, China
> Yoruba, Nigeria=Yoruba, Nigeria
> isiXhosa, South Africa=isiXhosa, South Africa
> isiZulu, South Africa=isiZulu, South Africa
>
> Script stderr:
>
>
> Preparing to Install
> Preparing to Install
> Creating directory C:\Program Files\PostgreSQL
> Creating directory C:\Program Files\PostgreSQL\12
> Creating directory C:\Program Files\PostgreSQL\12\installer
> Unpacking files
> Unpacking C:\Program Files\PostgreSQL\12\installer\prerun_checks.vbs
> Unpacking C:\Program Files\PostgreSQL\12\installer\vcredist_x86.exe
> Unpacking C:\Program Files\PostgreSQL\12\installer\vcredist_x64.exe
> [several hundred further "Creating directory" and "Unpacking" lines for the bin, doc, include, share, timezone, and debug_symbols trees under C:\Program Files\PostgreSQL\12 trimmed; no errors appear in this portion of the log]
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pltcl-config.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\protocol-changes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pltcl-subtransactions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpgsql-porting.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\protocol-overview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpython.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\protocol-error-fields.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pltcl-dbaccess.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rangetypes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpgsql.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pltcl.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\queries-union.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpython-trigger.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\protocol-message-types.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\protocol-logicalrep-message-formats.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\reference-server.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpython-do.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpython-funcs.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pltcl-error-handling.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\regress-coverage.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpgsql-trigger.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\queries.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\querytree.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpgsql-expressions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\role-attributes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\seg.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\role-membership.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rules-views.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sasl-authentication.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\server-shutdown.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spgist-builtin-opclasses.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\runtime-config.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\server-start.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\release.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\regress.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\runtime-config-error-handling.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-memory.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\regress-run.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rule-system.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\release-prior.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spgist-implementation.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\server-programming.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-commit.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rules.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rules-materializedviews.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\role-removal.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\source-format.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\runtime-config-autovacuum.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rules-privileges.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rules-triggers.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\runtime-config-compatible.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-cursor-move.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alteroperator.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-scroll-cursor-move.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterpublication.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterusermapping.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterforeigndatawrapper.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altertrigger.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-close.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altersequence.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-returntuple.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-abort.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterindex.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altersubscription.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterlargeobject.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterfunction.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterdefaultprivileges.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altertsdictionary.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-execute.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-begin.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altertstemplate.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterserver.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterstatistics.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-fnumber.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-result-code-string.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-keepplan.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-rollback.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterdomain.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-palloc.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-getargtypeid.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-gettype.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterforeigntable.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-connect.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altersystem.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-comment.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createeventtrigger.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createopclass.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createdatabase.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-creatematerializedview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createextension.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-commit.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-commands.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createopfamily.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createprocedure.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createserver.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createconversion.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createpublication.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createrule.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createpolicy.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createtablespace.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createcast.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createforeigndatawrapper.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropaggregate.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropopfamily.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-deallocate.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-end.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droptablespace.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droptable.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-drop-access-method.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-delete.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createtsdictionary.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droprule.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createusermapping.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-execute.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropcast.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropschema.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropstatistics.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droprole.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droptransform.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropserver.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-load.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droptsparser.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropindex.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createtsconfig.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droproutine.html\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropoperator.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createuser.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-fetch.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droplanguage.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropopclass.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-importforeignschema.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropcollation.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-declare.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropconversion.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-discard.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropsubscription.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropdatabase.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createtrigger.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droptrigger.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droptype.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-droptsconfig.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropextension.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\textsearch-indexes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tableam.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-release-savepoint.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-values.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\textsearch-tables.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-rollback-to.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-revoke.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-vacuum.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-syntax.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-advanced-intro.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-savepoint.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-createdb.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-arch.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\textsearch-limitations.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-prepare.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-reset.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\storage-toast.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\storage-vm.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-set-session-authorization.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-advanced.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-move.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\storage-init.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\textsearch-features.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-truncate.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-notify.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-populate.html\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\ssh-tunnels.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-security-label.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\system-catalog-declarations.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\textsearch-parsers.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-unlisten.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc-volatility.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\uuid-ossp.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-user-mappings.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\vacuumlo.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\views-overview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-cursors.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-available-extension-versions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-stats-ext.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-prepared-xacts.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-user.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-timezone-abbrevs.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-views.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-settings.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-locks.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-hba-file-rules.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\wal-configuration.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\typeconv-select.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-config.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\warm-standby-failover.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-shadow.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-policies.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-update.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\when-can-parallel-query-be-used.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-sql.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\unaccent.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-available-extensions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-group.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc-optimization.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-replication-origin-status.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\wal.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc-overload.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\typeconv.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\wal-reliability.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\typeconv-func.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\server_license.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libcurl.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\edblogo.png\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\README.pldebugger\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xml2.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xoper.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xplang-install.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xplang.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xproc.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pagelayout.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\gin.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xoper-optimization.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-postgres.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-pgbasebackup.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\runtime-config-replication.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\monitoring-stats.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xindex.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc-c.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\textsearch-controls.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sepgsql.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\hot-standby.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\continuous-archiving.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-expressions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\functions-xml.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\auth-pg-hba-conf.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\protocol-message-formats.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-syntax-lexical.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpgsql-control-structures.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\install-procedure.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createtype.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\queries-table-expressions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-keywords-appendix.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\ecpg-descriptors.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\fdw-callbacks.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\extend-extensions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\using-explain.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\runtime-config-resource.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\datatype-datetime.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\ecpg-informix-compat.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc-sql.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\functions-string.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\postgres-fdw.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xaggr.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\functions-datetime.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\rules-update.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\datatype-json.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pgupgrade.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\functions-textsearch.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\release-12.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createaggregate.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-copy.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\features-sql-standard.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\ddl-partitioning.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\stylesheet.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-clusterdb.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\appendixes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-reindexdb.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-pgreceivewal.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-postmaster.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\auth-methods.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-pg-ctl.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-createuser.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\adminpack.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-pg-dumpall.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\arrays.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-dropdb.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-pgresetwal.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\auth-bsd.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\acronyms.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\admin.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-pgrecvlogical.html\n> [... several hundred similar "Unpacking" lines for the remaining PostgreSQL 12 HTML documentation files omitted ...]\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\typeconv-union-case.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-join.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tsm-system-rows.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-accessdb.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\unsupported-features-sql-standard.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-select.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\user-manag.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-sql-intro.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-transactions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-matviews.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-start.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\typeconv-oper.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-table.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-replication-slots.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\trigger-definition.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\triggers.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\upgrading.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-inheritance.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\typeconv-overview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-delete.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-window.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\typeconv-query.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-sequences.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-rules.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\wal-internals.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc-internal.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-roles.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-seclabels.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-timezone-names.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-tables.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\wal-intro.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-views.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xml-limits-conformance.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xtypes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\wal-async-commit.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\xfunc-pl.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tablefunc.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\app-psql.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\zlib.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\geodesic.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\proj.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ogr_api.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\vrtdataset.h\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ogr_feature.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_port.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ogr_geometry.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ogr_core.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_priv.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ogr_srs_api.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ogr_spatialref.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\geos_c.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\proj_experimental.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlerror.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\tree.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\parser.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\c.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\internal\\c.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\pgstat.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\plpgsql.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\fmgr.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\fmgroids.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\fmgrprotos.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\tableam.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\nbtree.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\plannodes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\primnodes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\execnodes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\pathnodes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\parsenodes.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\libxslt\\xsltInternals.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\extension\\autoinc.example\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\extension\\moddatetime.example\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\extension\\refint.example\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\extension\\insert_username.example\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_atomic_ops.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\autosprintf.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_auto_close.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\catalog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\c14n.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\chvalid.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\builtins.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\cash.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\attoptcache.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\arrayaccess.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\aclchk_internal.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\catcache.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\ascii.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\array.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\acl.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\bytea.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\combocid.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\replication\\basebackup.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\backendid.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\bufmgr.h\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\condition_variable.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\buf_internals.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\block.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\copydir.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\checksum.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\bufpage.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\checksum_impl.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\buf.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\barrier.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\buffile.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\fe_utils\\connect.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\fe_utils\\conditional.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\catversion.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\catalog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\binary_upgrade.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\brin_pageops.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\clog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\brin_xlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\bufmask.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\brin_page.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\amapi.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\commit_ts.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\brin.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\access\\attnum.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\amvalidate.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\brin_tuple.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\brin_revmap.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\brin_internal.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\lib\\bipartite_match.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\lib\\bloomfilter.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\lib\\binaryheap.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\aix.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\arch-ia64.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\arch-ppc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\arch-arm.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\arch-hppa.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\arch-x86.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\optimizer\\clauses.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\optimizer\\appendinfo.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\optimizer\\cost.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\bootstrap\\bootstrap.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\api.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\parser\\analyze.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\libpq\\be-fsstubs.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\libpq\\be-gssapi-common.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\libpq\\auth.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\controldata_utils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\base64.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\config_info.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\bgworker_internals.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\bgworker.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\bgwriter.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\autovacuum.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\conversioncmds.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\async.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\collationcmds.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\cluster.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\comment.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\copy.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\alter.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\bitmapset.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\attributes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_config.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_vsi_error.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_error.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_minizip_ioapi.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_progress.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_csv.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_multiproc.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\cpl_hash_set.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_odbc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_list.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_quad_tree.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_minixml.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_conv.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_time.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_config_extras.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_http.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cplkeywordparser.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_string.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_json.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_spawn.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_minizip_unzip.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_vsi.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_virtualmem.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_minizip_zip.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\cpl_vsi_virtual.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\debugXML.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\date.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\datetime.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\datum.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\replication\\decode.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\dependency.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\cygwin.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\darwin.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\tcop\\deparse_utility.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\libpq\\crypt.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\extension\\cube\\cubedata.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\dbcommands.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\defrem.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\createas.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\dbcommands_xlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\informix\\esql\\decimal.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\informix\\esql\\datetime.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ecpgtype.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ecpg_informix.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ecpg_config.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ecpglib.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\ecpgerrno.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\dict.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\encoding.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\DOCBparser.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\entities.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\expandedrecord.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\expandeddatum.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\dynahash.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\fmgrtab.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\float.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\evtcache.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\errcodes.h\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\dsa.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\freepage.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\elog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\formatting.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\fsm_internals.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\dsm.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\freespace.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\fd.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\dsm_impl.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\execPartition.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\execdesc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\execParallel.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\execExpr.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\execdebug.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\executor.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\foreign\\fdwapi.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\foreign\\foreign.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\lib\\dshash.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\freebsd.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32\\dlfcn.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32_msvc\\dirent.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32_msvc\\sys\\file.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\fallback.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\tcop\\fastpath.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tcop\\dest.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\statistics\\extended_stats_internal.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\file_perm.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\file_utils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\fe_memutils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\fork_process.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\event_trigger.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\extension.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\explain.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\discard.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\extensible.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\documents.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\extra.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\extensions.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\e_os2.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_mdreader.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdalpansharpen.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_pam.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdaljp2metadata.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdalgrid_priv.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_alg.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_alg_priv.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdalgeorefpamdataset.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdalgrid.h\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\include\\gdal_csv.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdalwarper.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdaljp2abstractdataset.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_frmts.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_utils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_simplesurf.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_vrt.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_proxy.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_version.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gdal_rat.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\funcapi.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\functions.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\genbki.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\genam.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\generic_xlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\generic-gcc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\generic-acc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\generic.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\generic-msvc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\generic-sunpro.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\atomics\\generic-xlc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\functions.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gnm.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\geos.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\gnmgraph.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\gnm_api.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\hash.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\globals.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\getopt_long.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\getaddrinfo.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\hashutils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\geo_decls.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\guc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\guc_tables.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\hashjoin.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\heap.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\hash_xlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\hash.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\gistxlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\gist.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\ginxlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\ginblock.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\gist_private.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\gistscan.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\gin_private.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\gin.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32\\grp.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\optimizer\\geqo_mutation.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\optimizer\\geqo.h\n> Unpacking C:\\Program 
> [quoted PostgreSQL 12 installer output trimmed: unpacking of include\ header files]
Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\scram-common.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\sha2.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\restricted_token.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\rewrite\\rewriteDefine.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\rewrite\\rewriteManip.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\rewrite\\rewriteSupport.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\rewrite\\rewriteHandler.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\rewrite\\rewriteRemove.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\rewrite\\rowsecurity.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\extension\\seg\\segdata.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\sequence.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\schemacmds.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\seclabel.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\replnodes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\security.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\sql3types.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\sqlda-compat.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\sqlca.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\sqlda-native.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\sqlda.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\threads.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\sortsupport.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\syscache.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\timestamp.h\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\timeout.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\spccache.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\replication\\syncrep.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\spin.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\standby.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\sync.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\storage\\standbydefs.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\fe_utils\\string_utils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\tablefunc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\tqueue.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\spi_priv.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\tstoreReceiver.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\spi.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\datatype\\timestamp.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\toasting.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\storage_xlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\catalog\\storage.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\tsmapi.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\spgxlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\subtrans.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\tupdesc_details.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\transam.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\spgist.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\access\\spgist_private.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\stratnum.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\sysattr.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\table.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\tupconvert.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\tupdesc.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\timeline.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tsearch\\ts_locale.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tsearch\\ts_public.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tsearch\\ts_utils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tsearch\\ts_cache.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tsearch\\ts_type.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tsearch\\dicts\\spell.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\lib\\stringinfo.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32_msvc\\sys\\time.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\optimizer\\tlist.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\optimizer\\subselect.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tcop\\tcopprot.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\statistics\\statistics.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_irish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_hungarian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_nepali.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_french.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_lithuanian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_porter.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_KOI8_R_russian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_tamil.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_2_romanian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_dutch.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_norwegian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_french.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_danish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_indonesian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_italian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_spanish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_russian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_english.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_danish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_2_hungarian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_spanish.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_romanian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_swedish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_portuguese.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_german.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_turkish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_english.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_dutch.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_norwegian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_portuguese.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_swedish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_finnish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_italian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_indonesian.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_arabic.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_finnish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_UTF_8_irish.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_german.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\snowball\\libstemmer\\stem_ISO_8859_1_porter.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\common\\string.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\startup.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\syslogger.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\tablecmds.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\tablespace.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\trigger.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\subscriptioncmds.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\supportnodes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\tidbitmap.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\informix\\esql\\sqltypes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\templates.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\trio.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\transform.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\triodef.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\uuid.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlmodule.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlIO.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlmemory.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\uri.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xinclude.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlautomata.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\valid.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xlink.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlexports.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\windowapi.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\utils\\tzparser.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\uuid.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\typcache.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\xml.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\tuplestore.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\tuplesort.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\varlena.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\utils\\varbit.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\replication\\walsender_private.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\replication\\walreceiver.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\replication\\walsender.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\replication\\worker_internal.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\executor\\tuptable.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xact.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\tuptoaster.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xlog.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xloginsert.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xlog_internal.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\twophase.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\valid.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xlogdefs.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xlogutils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\twophase_rmgr.h\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\include\\server\\access\\tupmacs.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xlogrecord.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\visibilitymap.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\access\\xlogreader.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32_port.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32\\sys\\wait.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32_msvc\\utime.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\port\\win32_msvc\\unistd.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\tcop\\utility.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\unicode_norm.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\username.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\postmaster\\walwriter.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\vacuum.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\view.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\variable.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\typecmds.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\commands\\user.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\nodes\\value.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\variables.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\win32config.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\explicit-joins.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\explicit-locking.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\doc\\postgresql\\html\\ecpg-sql-execute-immediate.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\ecpg-sql-type.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\contrib-dblink-is-busy.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\file-fdw.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\external-projects.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\dml-update.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\functions-bitstring.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\catalog-pg-database.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\catalog-pg-rewrite.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\fdw-helpers.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\custom-scan-path.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\zconf.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlregexp.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlstring.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlversion.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlschemastypes.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlwriter.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlunicode.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xpointer.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xpath.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xpathInternals.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlsave.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlschemas.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxml\\xmlreader.h\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\xsltexports.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\xsltlocale.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\xsltutils.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\xslt.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\libxslt\\xsltconfig.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-indexes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-createsequence.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-rollback.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\gin-builtin-opclasses.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\logical-replication-config.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-exec.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altergroup.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-explain.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropuser.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-cursor-open.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-dropmaterializedview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\infoschema-check-constraints.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpgsql-cursors.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\functions-subquery.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\managing-databases.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-pfree.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\libpq-pgservice.html\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\spi-spi-saveplan.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\functions-srf.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\intarray.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\planner-stats-details.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\textsearch-configuration.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altereventtrigger.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-altercollation.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tsm-system-time.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-drop-owned.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\gist-implementation.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-prepared-statements.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\plpgsql-transactions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\sql-alterroutine.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-install.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\resources.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\lo.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\pgwaldump.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\runtime-config-short.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\tutorial-fk.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\view-pg-stats.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\sslerr.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\x509v3.h\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\x509.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\engine.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\tls1.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\bio.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\ssl.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\obj_mac.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\asn1.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\asn1t.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\evp.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\openssl\\ec.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\information_schema.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\errcodes.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\system_views.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\postgres.description\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\sql_features.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\snowball_create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ru\\LC_MESSAGES\\pg_dump-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ru\\LC_MESSAGES\\pg_basebackup-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ru\\LC_MESSAGES\\libpq-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ru\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ru\\LC_MESSAGES\\pg_upgrade-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\de\\LC_MESSAGES\\pg_dump-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\de\\LC_MESSAGES\\pg_basebackup-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\de\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\locale\\de\\LC_MESSAGES\\pg_upgrade-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\tr\\LC_MESSAGES\\pg_dump-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\tr\\LC_MESSAGES\\pg_basebackup-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\tr\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\tr\\LC_MESSAGES\\pg_upgrade-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ja\\LC_MESSAGES\\pg_dump-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ja\\LC_MESSAGES\\pg_basebackup-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ja\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ja\\LC_MESSAGES\\pg_upgrade-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\es\\LC_MESSAGES\\pg_dump-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\es\\LC_MESSAGES\\pg_basebackup-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\es\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\es\\LC_MESSAGES\\pg_upgrade-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\fr\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\fr\\LC_MESSAGES\\pg_upgrade-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\it\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\sv\\LC_MESSAGES\\pg_dump-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\sv\\LC_MESSAGES\\pg_basebackup-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\sv\\LC_MESSAGES\\psql-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\sv\\LC_MESSAGES\\pg_upgrade-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\zh_CN\\LC_MESSAGES\\pg_dump-12.mo\n> Unpacking 
> [PostgreSQL 12 Windows installer unpacking log, condensed: several hundred lines of the form "Unpacking C:\Program Files\PostgreSQL\12\..." covering locale message catalogs (share\locale\<lang>\LC_MESSAGES\*-12.mo for psql, pg_dump, pg_basebackup, pg_upgrade, ecpg, libpq, initdb, pg_ctl, pg_rewind, plpgsql, and related tools), OpenSSL headers (include\openssl\*.h), libpq headers (include\libpq\*.h), installer scripts and executables (installer\server\*.vbs, *.exe, scripts\*.bat), sample configuration files (share\*.sample), and timezone data (share\timezonesets\*, share\timezone\*).]
Files\\PostgreSQL\\12\\share\\timezone\\America\\Antigua\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\El_Salvador\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Menominee\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\St_Thomas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\St_Kitts\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Cambridge_Bay\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Jujuy\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Halifax\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Inuvik\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Campo_Grande\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Grenada\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Montevideo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Merida\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Fortaleza\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Los_Angeles\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Lima\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Grand_Turk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Fort_Wayne\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Tortola\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Martinique\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Cayenne\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Cuiaba\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\St_Lucia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Dawson\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\America\\Cayman\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Chicago\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Juneau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Metlakatla\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Panama\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Guayaquil\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Dawson_Creek\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Havana\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\St_Barthelemy\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Bogota\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Port_of_Spain\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indianapolis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Mendoza\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Blanc-Sablon\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Chihuahua\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\North_Dakota\\Beulah\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\North_Dakota\\Center\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Kentucky\\Louisville\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Marengo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Knox\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Indianapolis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Buenos_Aires\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\ComodRivadavia\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\La_Rioja\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Catamarca\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Jujuy\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Mendoza\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Arctic\\Longyearbyen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Bratislava\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Kaliningrad\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Mariehamn\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Dublin\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Belfast\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Kiev\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Madrid\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Guernsey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Kirov\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Monaco\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Minsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Malta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Chisinau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Budapest\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\London\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Luxembourg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Jersey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Helsinki\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Prague\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Oslo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Tiraspol\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Gibraltar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Copenhagen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Berlin\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Bucharest\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Brussels\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Isle_of_Man\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Chile\\EasterIsland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Galapagos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Guadalcanal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Kosrae\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Efate\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Kiritimati\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Honolulu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Kwajalein\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Guam\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Majuro\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Johnston\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Easter\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Gambier\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Fiji\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Funafuti\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Enderbury\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Fakaofo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Saipan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Marquesas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Pacific\\Bougainville\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT0\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-8\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-2\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-5\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-7\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+5\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-0\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+12\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\Greenwich\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+11\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+10\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-6\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+4\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-13\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-1\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+2\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+3\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-9\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-10\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-14\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+8\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-11\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+1\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+0\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+6\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+9\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-12\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-4\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT+7\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Etc\\GMT-3\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Canada\\Mountain\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Canada\\Atlantic\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Brisbane\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Currie\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Eucla\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Queensland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Lord_Howe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\LHI\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Lindeman\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Hobart\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Australia\\Tasmania\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Ceuta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Lome\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\El_Aaiun\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Lusaka\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Monrovia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Maputo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Johannesburg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Banjul\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Gaborone\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Khartoum\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Maseru\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Harare\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Abidjan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Ouagadougou\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Bujumbura\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Timbuktu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Nouakchott\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Lubumbashi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Cairo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Mbabane\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Bissau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Tripoli\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Conakry\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Dakar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Juba\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Blantyre\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Freetown\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Bamako\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Kigali\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Africa\\Casablanca\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\St_Helena\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Cape_Verde\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Faroe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Reykjavik\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Faeroe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Madeira\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Bermuda\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Jan_Mayen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Atlantic\\Canary\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Antarctica\\Davis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Antarctica\\Macquarie\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Antarctica\\DumontDUrville\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Antarctica\\Mawson\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Antarctica\\Casey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Brazil\\West\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Indian\\Cocos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Indian\\Kerguelen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Indian\\Maldives\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Indian\\Chagos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Indian\\Mahe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Indian\\Christmas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Indian\\Mauritius\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Katmandu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Kuching\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Dili\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Kuala_Lumpur\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Choibalsan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Dacca\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Bishkek\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Irkutsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Macau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Karachi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Damascus\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Dhaka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Dushanbe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Colombo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Kashgar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Tehran\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Barnaul\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Kabul\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Kolkata\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Jayapura\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Khandyga\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Krasnoyarsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Jakarta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Macao\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Beirut\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Gaza\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Calcutta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Magadan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Tokyo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Famagusta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Chita\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Tel_Aviv\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Urumqi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Hebron\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Manila\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Kathmandu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Baku\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Jerusalem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Kamchatka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Hovd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Hong_Kong\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Asia\\Brunei\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\US\\Central\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\US\\Michigan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\NZ-CHAT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Universal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\WET\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\MST7MDT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\posixrules\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Portugal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\ROC\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Poland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\NZ\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\MST\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Zulu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Singapore\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\UTC\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Navajo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\ROK\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\PST8PDT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\W-SU\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Turkey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\PRC\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\UCT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Toronto\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Whitehorse\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Santo_Domingo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Shiprock\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Punta_Arenas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Winnipeg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Regina\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Recife\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Scoresbysund\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Swift_Current\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Yellowknife\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Rainy_River\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\America\\Rankin_Inlet\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Santarem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Porto_Acre\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Nipigon\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Rosario\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Thule\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Pangnirtung\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Montreal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Cordoba\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Sao_Paulo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Thunder_Bay\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Sitka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Porto_Velho\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Resolute\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Phoenix\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Nome\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Vancouver\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Yakutat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Rio_Branco\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Ojinaga\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Santiago\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Paramaribo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\St_Johns\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\New_York\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\timezone\\America\\Puerto_Rico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Port-au-Prince\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Tegucigalpa\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Noronha\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Nassau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Denver\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\North_Dakota\\New_Salem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Kentucky\\Monticello\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Petersburg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Tell_City\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Winamac\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Vevay\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Indiana\\Vincennes\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Cordoba\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Tucuman\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Salta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Ushuaia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\San_Juan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\San_Luis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\America\\Argentina\\Rio_Gallegos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Sofia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\timezone\\Europe\\Zurich\n> Unpacking C:\\Program 
> [installer log continues: unpacking timezone data files (Europe, Chile, Pacific, Etc, Canada, Australia, Africa, Atlantic, Antarctica, Brazil, Indian, Asia, US zones), tsearch_data dictionaries and stop-word lists, and extension .control and .sql scripts under C:\Program Files\PostgreSQL\12\share\ ...]
Files\\PostgreSQL\\12\\bin\\libxml2.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\postgres.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\ecpg.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libssl-1_1-x64.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libcrypto-1_1-x64.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libiconv-2.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\icudt53.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\icuuc53.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\icule53.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libintl-8.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libxslt.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\icuin53.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\doc\\postgresql\\html\\bookindex.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\include\\server\\common\\unicode_norm_table.h\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\vcredist_x64.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\vcredist_x86.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\postgres.bki\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ru\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\de\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\tr\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ja\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\es\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\fr\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\it\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\sv\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\locale\\zh_CN\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\ko\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\locale\\pl\\LC_MESSAGES\\postgres-12.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\oid2name.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_checksums.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_test_fsync.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_archivecleanup.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\hstore_plperl.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\clusterdb.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_uhc.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\postgres.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_config.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\tablefunc.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_big5.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\reindexdb.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_euc_tw.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\autoinc.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\initdb.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_iso8859.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\createdb.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_cyrillic.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pgcrypto.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pageinspect.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\adminpack.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\bloom.pdb\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\moddatetime.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_standby.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\hstore.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_win.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\cyrillic_and_mic.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_ctl.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pgoutput.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_isolation_regress.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_isready.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_waldump.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\earthdistance.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\dblink.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pltcl.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_visibility.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pgbench.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_ascii.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_euc_jp.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\euc_cn_and_mic.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_buffercache.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\ltree_plpython3.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\psql.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\_int.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\tsm_system_rows.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\dict_int.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_controldata.pdb\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\debug_symbols\\pgxml.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\sslinfo.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\test_integerset.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\insert_username.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\unaccent.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_resetwal.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\libecpg.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_receivewal.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\dropdb.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\regress.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\dict_snowball.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_prewarm.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\refint.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\ascii_and_mic.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\hstore_plpython3.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_regress.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\lo.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pgrowlocks.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\test_rbtree.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pgstattuple.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\plpython3.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\libpgtypes.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\btree_gist.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_freespacemap.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\plperl.pdb\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\debug_symbols\\auto_explain.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_euc_kr.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\seg.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_test_timing.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_stat_statements.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\euc_jp_and_sjis.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_sjis.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_johab.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_rewind.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\dropuser.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_regress_ecpg.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\test_decoding.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\libpq.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\vacuumdb.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\latin2_and_win1250.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\file_fdw.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\test_bloomfilter.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\euc2004_sjis2004.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\postgres_fdw.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\passwordcheck.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_sjis2004.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\libecpg_compat.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\ltree.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\ecpg.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\auth_delay.pdb\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\debug_symbols\\pg_trgm.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\jsonb_plpython3.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_euc2004.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_dumpall.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_euc_cn.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pgevent.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_upgrade.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\fuzzystrmatch.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\createuser.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\uuid-ossp.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\btree_gin.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\isolationtester.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\jsonb_plperl.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\tsm_system_time.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_iso8859_1.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\cube.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_dump.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\citext.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\plpgsql.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_gb18030.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\dict_xsyn.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\utf8_and_gbk.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\latin_and_mic.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\tcn.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\zic.pdb\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\debug_symbols\\vacuumlo.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_restore.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_recvlogical.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\libpqwalreceiver.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\euc_tw_and_big5.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\pg_basebackup.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\test_predtest.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\amcheck.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\euc_kr_and_mic.pdb\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\debug_symbols\\isn.pdb\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\platforms\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\platforms\\qtwebengine\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\migrations\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\migrations\\versions\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\driver\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\driver\\psycopg2\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\javascript\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\parseutils\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\redirects\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\scss\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\templates\\casts\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\templates\\casts\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\templates\\casts\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\actions\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\templates\n> Creating 
directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\templates\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\templates\\sql\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\templates\\collations\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\templates\\collations\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\templates\\collations\\sql\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\templates\\collations\\sql\\default\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\\domain_constraints\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\\domain_constraints\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\\domain_constraints\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\\domain_constraints\\sql\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\\domain_constraints\\css\n> Creating directory 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\templates\\domains\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\templates\\domains\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\templates\\domains\\sql\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\templates\\domains\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\templates\\domains\\sql\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\gpdb\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\gpdb\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\gpdb\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\12_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\9.6_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\columns\\templates\\catalog_object_column\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\templates\\catalog_object\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\templates\\catalog_object\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\templates\\catalog_object\\sql\\pg\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\templates\\catalog_object\\sql\\pg\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\templates\\catalog_object\\sql\\ppas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\templates\\catalog_object\\sql\\ppas\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\static\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\children\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\pg\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\pg\\9.3_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\pg\\9.3_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\pg\\9.4_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\pg\\9.4_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\ppas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\ppas\\9.3_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\ppas\\9.3_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.3_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.3_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.4_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.4_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.1_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.1_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.2_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.3_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.3_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.4_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.4_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.1_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.1_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.2_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\gpdb_5.0_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\templates\\fts_dictionaries\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\templates\\fts_dictionaries\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\templates\\fts_dictionaries\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\templates\\sequences\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\templates\\sequences\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\templates\\sequences\\sql\\10_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\templates\\sequences\\sql\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\templates\\sequences\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\templates\\synonyms\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\templates\\synonyms\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\templates\\synonyms\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\templates\\synonyms\\sql\\9.5_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\templates\\synonyms\\sql\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\templates\\fts_parsers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\templates\\fts_parsers\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\templates\\fts_parsers\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\templates\\fts_configurations\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\templates\\fts_configurations\\sql\n> Creating directory 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\templates\\fts_configurations\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\macros\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\macros\\functions\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\macros\\schemas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\datatype\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\datatype\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\datatype\\sql\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\datatype\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\macros\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.1_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.1_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.2_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\macros\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.1_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.1_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.2_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\gpdb_5.0_plus\\macros\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\gpdb_5.0_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\vacuum_settings\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\vacuum_settings\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\macros\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\12_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\gpdb_5.0_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.2_plus\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\static\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\static\\js\n> Creating directory C:\\Program 
[pgAdmin 4 installer log omitted: several hundred "Creating directory" lines under C:\Program Files\PostgreSQL\12\pgAdmin 4\web\pgadmin\... (browser, dashboard, translations, settings, preferences, static assets, and tools modules), truncated]
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\dependents\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\dependents\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\\static\\scss\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\\static\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\sql\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\sql\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\scss\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\static\\scss\n> 
Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\static\\css\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\templates\\file_manager\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\templates\\file_manager\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\templates\\file_manager\\js\\languages\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\statistics\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\statistics\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\statistics\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\dependencies\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\dependencies\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\dependencies\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\11_plus\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\setup\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\templates\\security\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\templates\\security\\email\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\about\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\about\\static\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\about\\static\\js\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\about\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\about\\templates\\about\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Mexico\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\North_Dakota\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Kentucky\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Indiana\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Argentina\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Arctic\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\SystemV\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Chile\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Etc\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Canada\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Atlantic\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Brazil\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Indian\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\US\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\http1.0\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\msgs\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\opt0.4\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\encoding\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\images\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\msgs\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\images\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\ttk\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\sqlite3\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\sqlite3\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\parsers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\parsers\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\xml\\sax\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\dummy\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\dummy\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\\_bundled\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\collections\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\collections\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\__pycache__\n> 
Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xmlrpc\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xmlrpc\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\curses\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\curses\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Mail-0.9.1.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse-0.2.4.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests-2.22.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson-3.16.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_SQLAlchemy-2.3.2.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2-2.8.3.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Werkzeug-0.16.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Mako-1.1.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet-3.0.4.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Paranoid-0.2.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Mexico\n> 
Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\North_Dakota\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Kentucky\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Indiana\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Argentina\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Arctic\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Chile\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Canada\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Antarctica\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Brazil\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Indian\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna-2.8.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin\\python3html\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin\\python3html\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sshtunnel-0.1.5.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\certifi-2019.9.11.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib_jsmath-1.0.1.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\recaptcha\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\recaptcha\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_BabelEx-0.9.3.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\imagesize-1.1.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pyparsing-2.4.2.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\localtime\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\localtime\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\messages\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\messages\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils-0.15.2.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto-0.24.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_login\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_login\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\middleware\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\werkzeug\\middleware\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask-multidb\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask-multidb\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\python_editor-1.0.4.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools-41.2.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_paranoid\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
[pgAdmin 4 installer log excerpt, truncated: several hundred "Creating directory" entries under C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\, covering the bundled Python packages (flask, passlib, wtforms, nacl, jinja2, bcrypt, paramiko, pip, pkg_resources, psycopg2, pygments, urllib3, sphinxcontrib, dateutil, etc.) and their locale/__pycache__ subdirectories.]
4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\de\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\de\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\si\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\si\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\da\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\da\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sr@latin\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sr@latin\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt_PT\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt_PT\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\tr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\tr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ja\n> Creating 
directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ja\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\he\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\he\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\mk\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\mk\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ro\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ro\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\es\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\es\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\fa\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\fa\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\eo\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\eo\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sk\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sk\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\id\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\id\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\eu\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\eu\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\fr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\fr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\uk_UA\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\uk_UA\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\lt\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\lt\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ne\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ne\\LC_MESSAGES\n> Creating 
directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\el\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\el\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\it\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\it\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sv\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sv\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\cy\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\cy\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\zh_CN\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\zh_CN\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\bn\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\bn\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sl\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hi_IN\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hi_IN\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\et\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\et\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ar\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ar\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ta\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ta\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\vi\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\vi\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\cs\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\cs\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\zh_TW\n> Creating directory 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\zh_TW\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hi\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hi\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ko\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ko\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pl\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\lv\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\lv\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\.tx\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ca\n> Creating directory 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ca\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt_BR\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt_BR\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ru\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ru\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hu\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hu\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fi\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fi\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\nl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\nl\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\nb_NO\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\nb_NO\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\de\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\de\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\si\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\si\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\da\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\da\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sr@latin\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sr@latin\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt_PT\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt_PT\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\tr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\tr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ja\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ja\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\he\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\he\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\mk\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\mk\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ro\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ro\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\es\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\es\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fa\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fa\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\eo\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\eo\\LC_MESSAGES\n> 
Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sk\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sk\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\id\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\id\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\eu\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\eu\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\uk_UA\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\uk_UA\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\lt\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\lt\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ne\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ne\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\el\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\el\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\it\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\it\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sv\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sv\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\cy\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\cy\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\zh_CN\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\zh_CN\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sr\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sr\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\bn\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\bn\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sl\n> Creating 
directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sl\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hi_IN\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hi_IN\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\et\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\et\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ar\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ar\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ta\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ta\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\vi\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\vi\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\cs\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\cs\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\zh_TW\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\zh_TW\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hi\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hi\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ko\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ko\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pl\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\lv\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\lv\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\.tx\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ca\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ca\\LC_MESSAGES\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\pt_BR
[... hundreds of similar "Creating directory" log lines omitted: the installer creates a <locale>\\LC_MESSAGES subdirectory for each locale under sphinxcontrib\\applehelp, sphinxcontrib\\jsmath, sphinxcontrib\\serializinghtml, and sphinxcontrib\\htmlhelp ...]
> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hi_IN\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\et\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\et\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ar\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ar\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ta\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ta\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\vi\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\vi\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\cs\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\cs\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\zh_TW\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\zh_TW\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hi\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hi\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ko\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ko\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pl\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\lv\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\lv\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\.tx\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\script\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\script\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\autogenerate\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\autogenerate\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\operations\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\operations\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\testing\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\testing\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\runtime\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\runtime\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\generic\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\generic\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\pylons\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\pylons\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\multidb\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\multidb\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\itsdangerous-1.1.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Principal-0.4.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_gravatar\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_gravatar\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask-1.0.2.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil-5.5.1.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography-2.7.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib_applehelp-1.0.1.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\extern\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\extern\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\\packaging\n> Creating directory 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\\packaging\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib_htmlhelp-1.0.2.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\colorama-0.4.1.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip-19.0.3.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Babel-2.7.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\util\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\util\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\pool\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\pool\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\databases\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\databases\\__pycache__\n> Creating directory 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sqlite\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sqlite\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sybase\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sybase\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\firebird\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\firebird\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\oracle\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\oracle\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\engine\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\engine\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\connectors\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\connectors\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\declarative\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\declarative\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\plugin\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\plugin\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\python_dateutil-2.8.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\cli\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\cli\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\da_DK\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\da_DK\\LC_MESSAGES\n> Creating 
directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\fr_FR\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\fr_FR\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\nl_NL\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\nl_NL\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\ru_RU\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\ru_RU\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\de_DE\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\translations\\de_DE\\LC_MESSAGES\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\templates\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\templates\\security\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\templates\\security\\email\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Security-3.0.0.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sqlparse\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\filters\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\filters\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\engine\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\engine\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\__pycache__\n> Creating directory C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\ciphers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\ciphers\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\twofactor\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\twofactor\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\serialization\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\serialization\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\bindings\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\bindings\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\bindings\\openssl\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\bindings\\openssl\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\cryptography\\x509\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\x509\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\math\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\readers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\readers\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\xetex\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\xetex\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\latex2e\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\latex2e\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\html4css1\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\html4css1\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\html5_polyglot\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\html5_polyglot\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\\__pycache__\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\\themes\n> Creating directory 
> [pgAdmin 4 installer log excerpt, truncated: the PostgreSQL 12 installer creates the bundled Python venv directory tree (docutils, tkinter, idlelib, werkzeug, etc.) under C:\Program Files\PostgreSQL\12\pgAdmin 4\venv, then unpacks the pgAdmin 4 binaries (Qt5 DLLs) and the en_US HTML documentation with its _static and _images screenshots under C:\Program Files\PostgreSQL\12\pgAdmin 4.]
4\\docs\\en_US\\html\\_static\\connect_to_tunneled_server.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\connect_to_tunneled_server.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\fts_parser_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\fts_parser_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\add_user.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\add_user.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\package_body.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\package_body.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\grant_wizard_step2.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\grant_wizard_step2.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\procedure_parameters.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\procedure_parameters.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\view_code.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\view_code.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\fts_configuration_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\fts_configuration_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\favicon.ico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\favicon.ico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\debug_variables.png\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\debug_variables.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\preferences_sql_auto_completion.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\preferences_sql_auto_completion.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\query_output_error.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\query_output_error.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\tool_menu.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\tool_menu.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\backup_globals_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\backup_globals_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\trigger_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\trigger_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\role_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\role_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\foreign_data_wrapper_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\foreign_data_wrapper_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\column_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\column_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\restore_sections.png\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\restore_sections.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\preferences_misc_user_language.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\preferences_misc_user_language.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\package_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\package_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\procedure_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\procedure_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\trigger_function_code.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\trigger_function_code.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\view_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\view_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\rule_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\rule_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\package_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\package_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\exclusion_constraint_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\exclusion_constraint_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\user.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\docs\\en_US\\html\\_images\\user.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\import_export_pw.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\import_export_pw.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\foreign_table_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\foreign_table_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\foreign_key_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\foreign_key_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\view_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\view_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\rule_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\rule_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\pgagent_properties.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\pgagent_properties.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\primary_key_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\primary_key_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\trigger_function_options.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\trigger_function_options.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\foreign_server_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\docs\\en_US\\html\\_images\\foreign_server_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\pgagent_step_definition_code.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\pgagent_step_definition_code.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\package_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\package_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\function_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\function_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\preferences_paths_binary.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\preferences_paths_binary.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\procedure_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\procedure_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\backup_messages.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\backup_messages.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\view_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\view_security.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\pgagent_schedule_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\pgagent_schedule_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\backup_server_do_not_save.png\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\backup_server_do_not_save.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\server_connection.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\server_connection.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\primary_key_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\primary_key_definition.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\table_columns.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\table_columns.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\table_exclude.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\table_exclude.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\collation_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\collation_general.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\query_output_explain_details.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\query_output_explain_details.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\rule_commands.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\rule_commands.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\foreign_table_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\foreign_table_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\libEGL.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\docs\\en_US\\html\\collation_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\backup_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\coding_standards.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\backup_globals_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\code_review.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\add_restore_point_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\change_password_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\column_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\backup_and_restore.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\clear_saved_passwords.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\compound_trigger_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\cast_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\backup_server_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\code_overview.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\check_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\change_user_password.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\classic.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\style.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\basic.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\pygments.css\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\foreign_data_wrapper_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\foreign_key_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\domain_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\connect_to_server.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\deployment.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\foreign_server_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\connect_error.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\foreign_table_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\connecting.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\debugger.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\extension_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\domain_constraint_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\developer_tools.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\editgrid.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\event_trigger_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\database_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\container_deployment.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\exclusion_constraint_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\contributions.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\desktop_deployment.html\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\keyboard_shortcuts.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\fts_parser_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\language_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\login.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\import_export_data.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\getting_started.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\fts_template_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\maintenance_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\management_basics.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\index.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\function_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\fts_configuration_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\index_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\grant_wizard.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\managing_database_objects.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\managing_cluster_objects.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\master_password.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\genindex.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\import_export_servers.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\licence.html\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\fts_dictionary_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\query_tool.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_1_0.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\preferences.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\modifying_tables.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\menu_bar.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\move_objects.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\primary_key_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\pgagent_jobs.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_1_1.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\package_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_1_2.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\procedure_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\materialized_view_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\query_tool_toolbar.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\pgagent_install.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\pgagent.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_3_4.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_3_6.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_3_0.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_3_1.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_3_5.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_1.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_3_2.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_1_3.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_1_5.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_2.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_1_4.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_2_0.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_0.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_3_3.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_1_6.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_2_1.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\rule_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\resource_group_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_4.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_7.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\sequence_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_6.html\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\restore_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\schema_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_8.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_11.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_12.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\role_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_3.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_5.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\search.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_13.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_10.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\release_notes_4_9.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\tree_control.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\user_mapping_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\using_pgagent.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\view_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\synonym_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\server_deployment.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\objects.inv\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\tabbed_browser.html\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\submitting_patches.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\tablespace_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\trigger_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\user_management.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\server_group_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\translations.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\trigger_function_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\server_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\type_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\unique_constraint_dialog.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\user_interface.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\toolbar.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\doctools.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\documentation_options.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\backup_server_disable.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\cast_sql.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\backup_sections.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\backup_disable.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\underscore.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
> [... several hundred similar "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\docs\en_US\html\..." lines for pgAdmin 4 documentation files (images, doctrees, and .rst.txt sources) omitted ...]
4\\docs\\en_US\\html\\_sources\\release_notes_4_10.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\master_password.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_3_4.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\management_basics.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\pgagent_jobs.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\query_tool.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\language_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\maintenance_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\query_tool_toolbar.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\procedure_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_3.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_0.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\menu_bar.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\materialized_view_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_8.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\pgagent_install.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_9.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_2_1.rst.txt\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\package_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_1_4.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_1_3.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\index.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\primary_key_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\keyboard_shortcuts.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\pgagent.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\managing_cluster_objects.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\move_objects.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\import_export_servers.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_7.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_2.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\managing_database_objects.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_2_0.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\login.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_1_0.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_3_2.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\docs\\en_US\\html\\_sources\\release_notes_3_3.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_1_1.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\index_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_3_5.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_1_2.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_5.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\modifying_tables.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_1.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_1_5.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_3_0.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\licence.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\user_interface.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\tree_control.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\translations.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_11.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\trigger_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\synonym_dialog.rst.txt\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\server_group_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\trigger_function_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\user_management.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_12.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\schema_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\toolbar.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\user_mapping_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\tablespace_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\role_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\unique_constraint_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\view_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\server_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\rule_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\type_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\submitting_patches.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\sequence_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\restore_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\table_dialog.rst.txt\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\tabbed_browser.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\server_deployment.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\resource_group_dialog.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\release_notes_4_13.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_sources\\using_pgagent.rst.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\release_notes_2_1.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\release_notes_1_1.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\function_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\menu_bar.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\driver\\psycopg2\\connection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\autocomplete.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\js\\browser.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\js\\node.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\js\\datamodel.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\indexes\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\index_constraint\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\foreign_key\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\foreign_key\\static\\js\\foreign_key.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\exclusion_constraint\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\exclusion_constraint\\static\\js\\exclusion_constraint.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\triggers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\compound_triggers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\static\\js\\table.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\partitions\\static\\js\\partition.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\columns\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\columns\\static\\js\\column.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\types\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\types\\static\\js\\type.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\static\\js\\server.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\dashboard\\static\\js\\dashboard.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\ru\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\de\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\ja\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\es\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\fr\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\zh\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\it\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\ko\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\pl\\LC_MESSAGES\\messages.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\scss\\_pgadmin.style.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\backform.pgadmin.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\backgrid.pgadmin.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\file_utils.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\pgadmin_commons.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\debugger_direct.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\fontawesome-webfont.eot\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\Roboto-Regular.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\Roboto-Regular.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\OpenSans-Bold.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\OpenSans-Bold.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\fontawesome-webfont.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\Roboto-Bold.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\static\\fonts\\Roboto-Bold.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\fontawesome-webfont.woff2\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\fontawesome-webfont.woff\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\SourceCodePro-Regular.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\SourceCodePro-Regular.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\OpenSans-Regular.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\OpenSans-Regular.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\Roboto-Medium.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\Roboto-Medium.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\OpenSans-SemiBold.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\OpenSans-SemiBold.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\img\\login.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\img\\logo-256.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\img\\logo-right-256.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\SourceCodePro-Bold.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\fonts\\OpenSans-Italic.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\vendor\\require\\require.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\vendor\\backgrid\\backgrid.js\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\user_management\\static\\js\\user_management.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\static\\js\\debugger_ui.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\static\\js\\direct.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\package_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\index.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\release_notes_1_5.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\contributions.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\fts_parser_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\submitting_patches.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\add_restore_point_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\release_notes_4_11.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\managing_cluster_objects.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\primary_key_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\sequence_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\management_basics.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\docs\\en_US\\html\\.doctrees\\server_group_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\rule_dialog.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\release_notes_3_1.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\developer_tools.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\babel.cfg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\css\\browser.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\css\\wizard.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\\domain_constraints\\css\\domain_constraints.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\\css\\function.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\foreign_key\\templates\\foreign_key\\css\\foreign_key.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\check_constraint\\templates\\check_constraint\\css\\check_constraint.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\triggers\\templates\\triggers\\css\\trigger.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\compound_triggers\\templates\\compound_triggers\\css\\compound_trigger.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\rules\\static\\css\\rule.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\rules\\templates\\rules\\css\\rule.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\edbfuncs\\static\\css\\edbfunc.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\static\\css\\view.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\css\\mview.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\css\\view.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\css\\database.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\static\\css\\servers.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\static\\css\\pga_job.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\schedules\\templates\\pga_schedule\\css\\pga_schedule.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_job\\css\\pga_job.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\steps\\templates\\pga_jobstep\\css\\pga_step.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\css\\role.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\css\\server_type.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\css\\servers.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\templates\\browser\\css\\browser.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\templates\\browser\\css\\collection.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\templates\\browser\\css\\node.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\dashboard\\static\\css\\dashboard.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\preferences\\static\\css\\preferences.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\css\\alertify.noanimation.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\css\\style.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\css\\pgadmin.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\vendor\\backgrid\\backgrid.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\static\\css\\debugger.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\translations.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\.doctrees\\toolbar.doctree\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\migrations\\alembic.ini\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\js\\collection.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
[pgAdmin 4 installer log truncated: several hundred repeated "> Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\web\..." lines for web assets, JS bundles, templates, migrations, and Python modules omitted]
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\parseutils\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\parseutils\\ctes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\collection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\gpdb.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\mapping_utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\actions\\get_all_nodes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\base_partition_table.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\user_mappings\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\utils\\debugger_instance.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\command.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\get_column_types.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\is_begin_required.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\apply_explain_plan_wrapper.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\constant_definition.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\filter_dialog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\is_query_resultset_updatable.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgAdmin4.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\setup.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\route.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\session.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\preferences.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\versioned_template_loader.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\server_utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\paths.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\driver\\registry.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\driver\\psycopg2\\typecast.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\driver\\psycopg2\\server_manager.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\sqlcompletion.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\parseutils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\prioritization.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\parseutils\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\utils\\sqlautocomplete\\parseutils\\tables.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\register_browser_preferences.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\types.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\ppas.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\properties.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\external_tables\\reverse_engineer_ddl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\type.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\scss\\_aci_tree.overrides.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\scss\\_backform.overrides.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\scss\\_alert.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\scss\\_alertify.overrides.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\query_history.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\start_running_query.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\tools\\sqleditor\\utils\\save_changed_data.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\update_session_grid_transaction.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\scss\\_browser.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\static\\scss\\_wizard.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\casts\\templates\\casts\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\templates\\collations\\sql\\default\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\collations\\templates\\collations\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\domain_constraints\\templates\\domain_constraints\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\domains\\templates\\domains\\sql\\9.2_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\gpdb\\sql\\default\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\gpdb\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\gpdb\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\12_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\9.6_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\default\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\11_plus\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\11_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\9.5_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\pg\\sql\\9.2_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\ppas\\sql\\12_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\ppas\\sql\\9.6_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\ppas\\sql\\default\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\ppas\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\ppas\\sql\\11_plus\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\functions\\ppas\\sql\\9.2_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\pg\\sql\\11_plus\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\pg\\sql\\11_plus\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\pg\\sql\\11_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\9.6_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\default\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\11_plus\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\11_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\9.5_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\9.2_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\11_plus\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.2_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\11_plus\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\9.5_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\9.2_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\begin.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\11_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\gpdb_5.0_plus\\column_details.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\alter.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\coll_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\column_details.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\11_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\11_plus\\column_details.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\10_plus\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\10_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\12_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\gpdb_5.0_plus\\acl.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\gpdb_5.0_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\coll_table_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\9.1_plus\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\alter.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\10_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\alter.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\10_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\alter.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\rules\\sql\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\rules\\sql\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\foreign_key\\sql\\default\\begin.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\foreign_key\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\gpdb\\5_plus\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\gpdb\\5_plus\\attach.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\pg\\10_plus\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\pg\\10_plus\\attach.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\pg\\10_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\ppas\\10_plus\\backend_support.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\ppas\\10_plus\\attach.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\partitions\\sql\\ppas\\10_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\exclusion_constraint\\sql\\default\\begin.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\exclusion_constraint\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\exclusion_constraint\\sql\\11_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\10_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\12_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\gpdb_5.0_plus\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\default\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\9.1_plus\\acl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\check_constraint\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\check_constraint\\sql\\9.2_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\types\\templates\\types\\sql\\gpdb_5.0_plus\\additional_properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\types\\templates\\types\\sql\\gpdb_5.0_plus\\acl.sql\n> Unpacking C:\\Program 
Files\PostgreSQL\12\pgAdmin 4\web\pgadmin\browser\server_groups\servers\databases\schemas\types\templates\types\sql\default\additional_properties.sql
[... hundreds of similar "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\web\pgadmin\..." installer log lines omitted ...]
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\11_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\procedures\\ppas\\sql\\11_plus\\get_definition.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\get_types.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\get_definition.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\get_languages.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\get_out_types.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\get_schema.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\11_plus\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\11_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\11_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\11_plus\\get_definition.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\11_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.5_plus\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.5_plus\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.2_plus\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.2_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.2_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\get_types.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\get_definition.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\get_languages.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\get_out_types.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\get_schema.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\11_plus\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\11_plus\\delete.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\11_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\11_plus\\get_definition.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\9.5_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\9.2_plus\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\9.2_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\9.2_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_templates\\templates\\fts_templates\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_templates\\templates\\fts_templates\\sql\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_templates\\templates\\fts_templates\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_templates\\templates\\fts_templates\\sql\\default\\functions.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\get_parent.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\get_oid_with_transaction.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\get_constraint_cols.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\end.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\get_indices.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\11_plus\\get_constraint_include.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\gpdb_5.0_plus\\get_collations.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\get_parent.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\get_op_class.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\get_am.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\get_collations.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\11_plus\\include_details.sql\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\10_plus\\get_inherits.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\10_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\10_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\10_plus\\get_columns_for_table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\10_plus\\get_table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\12_plus\\get_tables_for_constraints.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\12_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\gpdb_5.0_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_inherits.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_relations.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\depend.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_oftype.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_tables_for_constraints.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_schema_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_columns_for_table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_types_where_condition.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\enable_disable_trigger.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_table.sql\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_table_row_count.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\get_schema.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\9.1_plus\\get_inherits.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\9.1_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\get_parent.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\enable_disable_trigger.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\get_triggerfunctions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\9.1_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\9.1_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\9.1_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\get_parent.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\enable_disable_trigger.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\get_triggerfunctions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\9.1_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\9.1_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\9.1_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\12_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\12_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\get_parent.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\enable_disable_trigger.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\get_triggerfunctions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\9.1_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\9.1_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\9.1_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\get_parent.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\enable_disable_trigger.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\rules\\sql\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\rules\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.2_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.2_plus\\sql\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.1_plus\\sql\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.1_plus\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.1_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.1_plus\\sql\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.2_plus\\sql\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.2_plus\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.2_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.2_plus\\sql\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\gpdb_5.0_plus\\sql\\get_name.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\gpdb_5.0_plus\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\gpdb_5.0_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\\oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\create.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\\sql\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\\sql\\oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\\sql\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.2_plus\\sql\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\\oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\\get_name.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\\is_catalog.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\12_plus\\get_constraints.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\gpdb_5.0_plus\\get_collations.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\\node.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\\get_collations.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\\get_constraints.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\default\\get_foreign_servers.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\\get_constraints.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\\get_foreign_servers.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\\get_tables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\\get_table_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.2_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.2_plus\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.2_plus\\get_collations.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.2_plus\\get_constraints.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.2_plus\\get_foreign_servers.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\templates\\languages\\sql\\gpdb_5.0_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\templates\\languages\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\templates\\languages\\sql\\default\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\templates\\languages\\sql\\default\\functions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\extensions\\templates\\extensions\\sql\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\extensions\\templates\\extensions\\sql\\extensions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\templates\\event_triggers\\sql\\9.3_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\templates\\event_triggers\\sql\\9.3_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\templates\\event_triggers\\sql\\9.3_plus\\get_db.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\templates\\event_triggers\\sql\\9.3_plus\\get_oid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\templates\\event_triggers\\sql\\9.3_plus\\grant.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\templates\\event_triggers\\sql\\9.3_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\templates\\event_triggers\\sql\\9.3_plus\\eventfunctions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.3_plus\\get_variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.3_plus\\grant.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.3_plus\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\delete_multiple.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\get_variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\grant.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\get_ctypes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\get_encodings.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.1_plus\\get_variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.1_plus\\get_ctypes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.1_plus\\defacl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.2_plus\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.2_plus\\get_variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.2_plus\\grant.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.2_plus\\properties.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.2_plus\\get_encodings.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\user_mappings\\templates\\user_mappings\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\user_mappings\\templates\\user_mappings\\sql\\default\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\templates\\foreign_servers\\sql\\9.3_plus\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\templates\\foreign_servers\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\templates\\foreign_servers\\sql\\default\\dependents.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\templates\\foreign_data_wrappers\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\templates\\foreign_data_wrappers\\sql\\default\\handlers.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\default\\nodes.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\default\\dependents.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\default\\move_objects.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\default\\properties.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\9.2_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\9.2_plus\\move_objects.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\resource_groups\\templates\\resource_groups\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\resource_groups\\templates\\resource_groups\\sql\\default\\getoid.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\resource_groups\\templates\\resource_groups\\sql\\default\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_job\\sql\\pre3.4\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_job\\sql\\pre3.4\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_job\\sql\\pre3.4\\job_classes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_schedule\\sql\\pre3.4\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_schedule\\sql\\pre3.4\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_jobstep\\sql\\pre3.4\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_jobstep\\sql\\pre3.4\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.4_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.4_plus\\permission.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.4_plus\\dependents.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.1_plus\\nodes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.1_plus\\permission.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.1_plus\\dependents.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\depends\\sql\\12_plus\\dependents.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\depends\\sql\\12_plus\\dependencies.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\depends\\sql\\default\\dependents.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\depends\\sql\\default\\dependencies.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\depends\\sql\\9.1_plus\\dependents.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\templates\\depends\\sql\\9.1_plus\\dependencies.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\dashboard\\templates\\dashboard\\sql\\gpdb_5.0_plus\\dashboard_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\dashboard\\templates\\dashboard\\sql\\gpdb_5.0_plus\\locks.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\dashboard\\templates\\dashboard\\sql\\default\\prepared.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\dashboard\\templates\\dashboard\\sql\\default\\dashboard_stats.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\dashboard\\templates\\dashboard\\sql\\default\\locks.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\get_trigger_function_info.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\get_function_debug_info.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\execute_edbspl.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\execute_plpgsql.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\v1\\get_stack_info.sql\n> Unpacking C:\\Program 
[pgAdmin 4 installer log, truncated: hundreds of repeated "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\web\pgadmin\..." lines listing template SQL and image files being extracted]
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\default\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\11_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.5_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\pg\\sql\\9.2_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\default\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\templates\\trigger_functions\\ppas\\sql\\9.5_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_templates\\static\\img\\coll-fts_template.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\indexes\\static\\img\\coll-index.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\static\\img\\coll-constraints.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\foreign_key\\static\\img\\foreign_key.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\foreign_key\\static\\img\\foreign_key_no_validate.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\exclusion_constraint\\static\\img\\exclusion_constraint.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\check_constraint\\static\\img\\check-constraint.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\check_constraint\\static\\img\\check-constraint-bad.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\triggers\\static\\img\\coll-trigger.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\compound_triggers\\static\\img\\compound_trigger-bad.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\compound_triggers\\static\\img\\coll-compound_trigger.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\compound_triggers\\static\\img\\compound_trigger.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\rules\\static\\img\\coll-rule.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\static\\img\\coll-table.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\partitions\\static\\img\\coll-partition.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\columns\\static\\img\\coll-column.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\columns\\static\\img\\column.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\index_constraint\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\indexes\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\12_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\tables\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\gpdb\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\pg\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\triggers\\sql\\ppas\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\compound_triggers\\sql\\ppas\\12_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\rules\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\foreign_key\\sql\\default\\validate.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\foreign_key\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\10_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\columns\\sql\\9.2_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\check_constraint\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\check_constraint\\sql\\9.2_plus\\validate.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\templates\\check_constraint\\sql\\9.2_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\types\\static\\img\\coll-type.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\edbvars\\static\\img\\coll-edbvar.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\edbvars\\static\\img\\edbvar.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\static\\img\\coll-package.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\edbfuncs\\static\\img\\coll-edbfunc.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\edbfuncs\\static\\img\\edbfunc.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\edbfuncs\\static\\img\\edbproc.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\packages\\edbfuncs\\static\\img\\coll-edbproc.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\static\\img\\coll-catalog_object.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\static\\img\\catalog_object.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\columns\\static\\img\\coll-catalog_object_column.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\catalog_objects\\columns\\static\\img\\catalog_object_column.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\static\\img\\coll-mview.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\static\\img\\coll-view.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\pg\\9.3_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\pg\\9.4_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\mviews\\ppas\\9.3_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.3_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.3_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.4_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.1_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.1_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.2_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\pg\\9.2_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.3_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.3_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.4_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.1_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.1_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.2_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\ppas\\9.2_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\gpdb_5.0_plus\\sql\\view_id.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\views\\templates\\views\\gpdb_5.0_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\static\\img\\catalog.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\static\\img\\coll-schema.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\static\\img\\coll-catalog.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\static\\img\\coll-fts_dictionary.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_dictionaries\\templates\\fts_dictionaries\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\static\\img\\coll-sequence.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\sequences\\templates\\sequences\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\synonyms\\static\\img\\coll-synonym.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\static\\img\\coll-fts_parser.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_parsers\\templates\\fts_parsers\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_configurations\\static\\img\\coll-fts_configuration.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.1_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\pg\\9.2_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.1_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\ppas\\9.2_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\catalog\\gpdb_5.0_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\default\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\pg\\9.2_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\ppas\\9.1_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\schemas\\gpdb_5.0_plus\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\templates\\vacuum_settings\\sql\\vacuum_defaults.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\static\\img\\foreign_table.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\static\\img\\coll-foreign_table.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\foreign_tables\\templates\\foreign_tables\\sql\\9.5_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\static\\img\\database.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\static\\img\\coll-database.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\static\\img\\databasebad.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\static\\img\\coll-language.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\languages\\templates\\languages\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\extensions\\static\\img\\coll-extension.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\extensions\\static\\img\\extension.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\extensions\\templates\\extensions\\sql\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\static\\img\\coll-event_trigger.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\event_triggers\\static\\img\\event_trigger.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\default\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\templates\\databases\\sql\\9.2_plus\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\static\\img\\foreign_server.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\static\\img\\coll-foreign_server.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\user_mappings\\static\\img\\coll-user_mapping.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\foreign_servers\\templates\\foreign_servers\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\static\\img\\foreign_data_wrapper.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\static\\img\\coll-foreign_data_wrapper.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\foreign_data_wrappers\\templates\\foreign_data_wrappers\\sql\\default\\validators.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\static\\img\\coll-tablespace.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\9.6_plus\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\default\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\tablespaces\\templates\\tablespaces\\sql\\9.2_plus\\update.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\resource_groups\\static\\img\\coll-resource_group.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\resource_groups\\templates\\resource_groups\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\static\\img\\coll-pga_job.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\schedules\\static\\img\\coll-pga_schedule.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\templates\\pga_schedule\\sql\\pre3.4\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\pgagent\\steps\\static\\img\\coll-pga_jobstep.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\static\\img\\coll-role.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.4_plus\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.1_plus\\variables.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\roles\\templates\\roles\\sql\\9.1_plus\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\img\\collapse_expand.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\img\\drop_cascade.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\v1\\wait_for_breakpoint.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\v1\\wait_for_target.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\v2\\wait_for_breakpoint.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\debugger\\templates\\debugger\\sql\\v2\\wait_for_target.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgAdmin4.wsgi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\static\\img\\server_group.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\\img\\function.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\\img\\trigger_function.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\functions\\static\\img\\procedure.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\fts_templates\\static\\img\\fts_template.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\indexes\\static\\img\\index.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\index_constraint\\static\\img\\primary_key.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\constraints\\index_constraint\\static\\img\\unique_constraint.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\browser\\server_groups\\servers\\databases\\schemas\\tables\\triggers\\static\\img\\trigger-bad.svg\n> Unpacking C:\\Program 
> Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\...
> [several hundred further "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\..." lines from the pgAdmin 4 installer log omitted]
4\\venv\\Lib\\_collections_abc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__future__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\setup\\data_directory.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\setup\\db_version.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\setup\\db_upgrade.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\activate_this.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\colorsys.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\getpass.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\copy.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\formatter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\csv.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\_strptime.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\filecmp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncore.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ast.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\bz2.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dis.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\fractions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\_sitebuiltins.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\fileinput.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\_osx_support.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\decimal.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\calendar.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\tools\\sqleditor\\utils\\query_tool_preferences.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\utils\\query_tool_fs_utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\\process_executor.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\\processes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\poplib.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\opcode.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\quopri.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\gzip.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pyclbr.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\numbers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\operator.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\profile.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\mailcap.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pkgutil.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lzma.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\keyword.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pipes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\linecache.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2odt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2html4.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2latex.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2s5.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2xml.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2pseudoxml.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2html5.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2odt_prepstyles.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2html.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2xetex.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rst2man.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\rstpep2html.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\selectors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\symbol.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\string.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\timeit.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\sunau.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\random.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\symtable.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\sre_constants.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\runpy.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tabnanny.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\textwrap.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\signal.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\telnetlib.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tempfile.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\statistics.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\tools\\sqleditor\\static\\scss\\_sqleditor.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\static\\scss\\_history.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\static\\img\\disconnect.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\static\\img\\commit.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\static\\img\\connect.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\10_plus\\explain_plan.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\12_plus\\explain_plan.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\gpdb_5.0_plus\\explain_plan.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\insert.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\delete.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\get_columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\objectname.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\primary_keys.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\explain_plan.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\validate.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\objectquery.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\has_oids.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\select.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\default\\update.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\11_plus\\primary_keys.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\templates\\sqleditor\\sql\\9.2_plus\\explain_plan.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\import_export\\templates\\import_export\\sql\\cmd.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\maintenance\\templates\\maintenance\\sql\\command.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\static\\scss\\_grant_wizard.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\10_plus\\sql\\table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\11_plus\\sql\\function.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\grant_table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\view.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\grant_function.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\function.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\sequence.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\grant_sequence.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\pg\\9.1_plus\\sql\\get_schemas.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\10_plus\\sql\\table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\11_plus\\sql\\function.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\grant_table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\view.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\grant_function.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\function.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\sequence.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\grant_sequence.sql\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\table.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\grant_wizard\\templates\\grant_wizard\\ppas\\9.1_plus\\sql\\get_schemas.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\bgprocess\\static\\scss\\_bgprocess.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\scss\\_explain.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash_semi_join.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_append.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_delete.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash_setop_intersect_all.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_bmp_and.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_bmp_heap.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_broadcast_motion.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_group.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_bmp_or.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_gather_merge.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash_setop_except.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash_anti_join.svg\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_bmp_index.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_aggregate.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash_setop_intersect.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_gather_motion.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_foreign_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash_setop_except_all.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_cte_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\file_manager\\static\\scss\\_file_manager.scss\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\databases.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\foreign_keys.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\keywords.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\schema.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\tableview.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\columns.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\functions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\default\\datatypes.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\templates\\sqlautocomplete\\sql\\11_plus\\functions.sql\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\setup\\user_info.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2s5.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rstpep2html.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2pseudoxml.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2xml.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2html5.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2html.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2html4.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2odt.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2man.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2latex.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2odt_prepstyles.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Scripts\\__pycache__\\rst2xetex.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\traceback.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\trace.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\weakref.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\zipapp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tracemalloc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\types.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wave.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\static\\img\\rollback.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\static\\img\\view_data.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\tools\\sqleditor\\static\\img\\save_data_changes.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_insert.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_named_tuplestore_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_nested_loop_semi_join.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_values_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_nested.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_subplan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_sort.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_seek.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_setop.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_merge_append.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_table_func_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_recursive_union.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_index_only_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_unknown.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_window_aggregate.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_result.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_update.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_materialize.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_limit.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_join.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_hash_setop_unknown.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_merge_semi_join.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_merge.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_unique.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_projectset.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_lock_rows.svg\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_merge_anti_join.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_worktable_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_tid_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_nested_loop_anti_join.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_redistribute_motion.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\misc\\static\\explain\\img\\ex_index_scan.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\auto.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\bgerror.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\clrpick.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\choosedir.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\button.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\check.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\anilabel.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\clrpick.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\bind.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\bitmap.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\aniwave.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\colors.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\arrow.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tk8.6\\demos\\button.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\ttk\\classicTheme.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\ttk\\altTheme.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\ttk\\clamTheme.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\ttk\\aquaTheme.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\ttk\\button.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\init.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\history.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\http1.0\\http.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\comdlg.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\entry.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\dialog.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\listbox.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\megawidget.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\icons.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\iconlist.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\fontchooser.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\focus.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\entry1.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\hscale.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\labelframe.tcl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\form.tcl\n> Unpacking C:\\Program 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Luxembourg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Jersey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Ljubljana\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Kosrae\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Kiritimati\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Kwajalein\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Majuro\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Johnston\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Midway\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Marquesas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Melbourne\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Lord_Howe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\LHI\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Lindeman\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Lome\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Lusaka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Monrovia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Maputo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Johannesburg\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Mogadishu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Khartoum\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Maseru\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Luanda\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Kampala\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Lubumbashi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Mbabane\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Kinshasa\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Juba\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Malabo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Libreville\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Kigali\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Lagos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Atlantic\\Madeira\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Atlantic\\Jan_Mayen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\\McMurdo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\\Macquarie\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\\Mawson\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Indian\\Kerguelen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tcl8.6\\tzdata\\Indian\\Maldives\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Indian\\Mayotte\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Indian\\Mahe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Indian\\Mauritius\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Katmandu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kuching\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kuala_Lumpur\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Macau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kuwait\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Karachi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kashgar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kabul\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kolkata\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Jayapura\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Khandyga\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Krasnoyarsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Jakarta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Macao\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Magadan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Manila\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kathmandu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Jerusalem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Kamchatka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Makassar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\US\\Michigan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\ixset\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\NZ-CHAT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\MST7MDT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Portugal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\ROC\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Poland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\NZ\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\MST\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Navajo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\ROK\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\PST8PDT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\PRC\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Santo_Domingo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Punta_Arenas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Regina\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Recife\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Rainy_River\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Rankin_Inlet\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Santarem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Porto_Acre\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Nipigon\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Rosario\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Pangnirtung\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Montreal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Montserrat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Porto_Velho\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Resolute\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Monterrey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Phoenix\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Nome\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Rio_Branco\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Montevideo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Santa_Isabel\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Ojinaga\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Santiago\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Paramaribo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\New_York\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Puerto_Rico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Port-au-Prince\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Noronha\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Panama\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Port_of_Spain\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Nassau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\North_Dakota\\New_Salem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Kentucky\\Monticello\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Indiana\\Petersburg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Argentina\\Salta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Argentina\\San_Juan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Argentina\\San_Luis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Argentina\\Rio_Gallegos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\San_Marino\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Rome\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Paris\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Moscow\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Riga\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Nicosia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Prague\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Oslo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Podgorica\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Samara\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\SystemV\\MST7MDT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\SystemV\\PST8\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\SystemV\\MST7\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\SystemV\\PST8PDT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Rarotonga\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Nauru\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Pago_Pago\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Norfolk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Samoa\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Pitcairn\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Port_Moresby\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Palau\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Pohnpei\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Ponape\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Niue\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Saipan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Noumea\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Canada\\Pacific\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Canada\\Mountain\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Canada\\Newfoundland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\North\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\NSW\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Queensland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Perth\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Nairobi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Niamey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Porto-Novo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Ouagadougou\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Nouakchott\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Ndjamena\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Atlantic\\Reykjavik\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\\Rothera\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\\Palmer\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Indian\\Reunion\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Pyongyang\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Riyadh\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Novosibirsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Saigon\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Qyzylorda\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Samarkand\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Sakhalin\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Muscat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Novokuznetsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Nicosia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Oral\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Rangoon\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Qatar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Phnom_Penh\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Pontianak\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Asia\\Omsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\US\\Samoa\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\US\\Pacific\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\US\\Pacific-New\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\US\\Mountain\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\images\\README\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\README\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\rmt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tk8.6\\demos\\rolodex\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tclIndex\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Universal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Singapore\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\UTC\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\W-SU\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Turkey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\UCT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Toronto\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Shiprock\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\St_Vincent\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Scoresbysund\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Swift_Current\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Thule\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Virgin\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Sao_Paulo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Thunder_Bay\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Sitka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\St_Thomas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\St_Kitts\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Vancouver\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Tortola\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\St_Johns\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\St_Lucia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Tegucigalpa\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Tijuana\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\St_Barthelemy\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Indiana\\Tell_City\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Indiana\\Vevay\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Indiana\\Vincennes\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Argentina\\Tucuman\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\America\\Argentina\\Ushuaia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Sofia\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Simferopol\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Vatican\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Volgograd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Ulyanovsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Vienna\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Saratov\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Tirane\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Vilnius\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Tiraspol\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Tallinn\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Uzhgorod\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Vaduz\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Skopje\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Stockholm\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Europe\\Sarajevo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Tarawa\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Tahiti\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Tongatapu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Wake\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Wallis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Pacific\\Truk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Etc\\Universal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Etc\\UTC\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Etc\\UCT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Canada\\Saskatchewan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\South\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Victoria\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Sydney\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Australia\\Tasmania\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Tunis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Timbuktu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Tripoli\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Africa\\Sao_Tome\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Atlantic\\St_Helena\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Atlantic\\Stanley\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Atlantic\\South_Georgia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\\South_Pole\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\tcl\\tcl8.6\\tzdata\\Antarctica\\Vostok\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
> [installer log truncated: several hundred repeated lines of the form
> "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\..." omitted,
> covering the bundled Tcl/Tk tzdata files, the Python standard library
> (Lib\*.py), and the corresponding Lib\__pycache__\*.pyc files]
4\\venv\\Lib\\__pycache__\\profile.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sched.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\queue.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\runpy.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pyclbr.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\random.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\quopri.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\selectors.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\runpy.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\selectors.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\shelve.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pyclbr.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\re.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pty.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\quopri.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pyclbr.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\re.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\runpy.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\shelve.cpython-37.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\shlex.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\queue.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sched.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\secrets.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pstats.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\rlcompleter.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\queue.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sre_compile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\smtplib.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\socket.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sre_compile.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sre_compile.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\socket.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\socketserver.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\site.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\signal.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\signal.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\smtpd.cpython-37.opt-2.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\signal.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\socket.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sre_constants.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\smtpd.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sndhdr.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sndhdr.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\socketserver.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\shutil.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\shutil.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\stat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\subprocess.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\stat.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\symbol.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\statistics.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\string.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sre_parse.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tabnanny.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\telnetlib.cpython-37.opt-2.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\symtable.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sunau.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sunau.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\string.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sre_constants.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sre_constants.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sysconfig.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\symbol.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\string.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tabnanny.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\stringprep.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\symtable.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tabnanny.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\statistics.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sunau.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\struct.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\telnetlib.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\struct.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sysconfig.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\sysconfig.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\struct.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\token.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\trace.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tty.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\uu.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tempfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tracemalloc.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\traceback.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\uu.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\textwrap.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\trace.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\traceback.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\threading.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tokenize.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tokenize.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tokenize.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\__pycache__\\types.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\this.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tracemalloc.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\traceback.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\tty.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\types.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\textwrap.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\token.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\timeit.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\orig-prefix.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\xdrlib.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\uuid.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\weakref.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\wave.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\zipapp.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\wave.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\uuid.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\webbrowser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\xdrlib.cpython-37.opt-1.pyc\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\uuid.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\webbrowser.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pickletools.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pdb.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\smtplib.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\argparse.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\datetime.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\inspect.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\mailbox.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\inspect.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\subprocess.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\threading.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\pickletools.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\optparse.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\doctest.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\message.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\_header_value_parser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\_header_value_parser.cpython-37.opt-2.pyc\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\message.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\message.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\_header_value_parser.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\_header_value_parser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\minidom.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\expatbuilder.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__pycache__\\minidom.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__pycache__\\minidom.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__pycache__\\minidom.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\\ElementTree.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\\__pycache__\\ElementTree.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\\__pycache__\\ElementTree.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\unix_events.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\base_events.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\selector_events.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\base_events.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\base_events.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\asyncio\\__pycache__\\base_events.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\parse.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\request.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\__pycache__\\request.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\__pycache__\\request.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\__pycache__\\request.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\schema.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__pycache__\\schema.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__pycache__\\schema.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__pycache__\\schema.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\_bootstrap_external.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\_bootstrap.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\sqlite3\\dump.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\sqlite3\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\sqlite3\\dbapi2.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\_encoded_words.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\contentmanager.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\encoders.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\base64mime.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\_parseaddr.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\charset.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\_policybase.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\errors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\feedparser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\audio.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\application.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\parsers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\parsers\\expat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\_exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\expatreader.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\domreg.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\\cElementTree.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\\ElementPath.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\etree\\ElementInclude.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\xml\\etree\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\base_subprocess.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\base_tasks.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\base_futures.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\constants.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\events.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\coroutines.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\error.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\abc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\headers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\handlers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\header.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\generator.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\quoprimime.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\policy.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\parser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\headerregistry.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\iterators.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\image.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\multipart.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\nonmultipart.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\message.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\handler.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\NodeFilter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\pulldom.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\minicompat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\queues.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\format_helpers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\protocols.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\proactor_events.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\runners.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\log.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\futures.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\locks.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\robotparser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\urllib\\response.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\resources.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\machinery.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\__future__.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\wsgiref\\validate.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\simple_server.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\mime\\text.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\xmlreader.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\saxutils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\xmlbuilder.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\subprocess.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\windows_utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\windows_events.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\transports.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\sslproto.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\tasks.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\streams.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\text.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\sequence.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\_dummy_thread.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\_osx_support.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\_osx_support.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
> [pgAdmin installer log: several hundred lines of the form
> "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\...\*.pyc"
> (extraction of Python 3.7 bytecode files into the bundled venv) omitted]
4\\venv\\Lib\\asyncio\\__pycache__\\transports.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\tasks.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\transports.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\unix_events.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\tasks.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\tasks.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__pycache__\\text.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__pycache__\\text.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\msilib\\__pycache__\\text.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\weakref.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\warnings.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\wave.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\weakref.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\uu.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\warnings.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\webbrowser.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\warnings.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\wsgiref\\__pycache__\\util.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\__pycache__\\validate.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\__pycache__\\validate.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\__pycache__\\util.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\__pycache__\\util.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\wsgiref\\__pycache__\\validate.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\utils.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\__pycache__\\utils.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\unix_events.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\windows_events.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\unix_events.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\util.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\xdrlib.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\zipapp.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\__pycache__\\zipapp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\email\\architecture.rst\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\xml\\sax\\__pycache__\\xmlreader.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\__pycache__\\xmlreader.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\sax\\__pycache__\\xmlreader.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__pycache__\\xmlbuilder.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__pycache__\\xmlbuilder.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\xml\\dom\\__pycache__\\xmlbuilder.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\windows_utils.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\windows_utils.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\windows_events.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\windows_utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\asyncio\\__pycache__\\windows_events.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\_bootstrap_external.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\_bootstrap_external.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\managers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\managers.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\managers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\html\\entities.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__pycache__\\entities.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__pycache__\\entities.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__pycache__\\entities.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\server.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\client.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\cookiejar.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\client.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\cookiejar.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\server.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\server.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\client.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\cookiejar.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\cookiejar.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\collections\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\collections\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\collections\\__pycache__\\__init__.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\collections\\__pycache__\\__init__.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\unittest\\case.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\mock.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\mock.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\mock.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\case.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\mock.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\case.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\case.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\config.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\handlers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__pycache__\\handlers.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__pycache__\\handlers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__pycache__\\__init__.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__pycache__\\__init__.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\dist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\ccompiler.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\dist.cpython-37.opt-1.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\dist.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\ccompiler.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\ccompiler.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\bdist_msi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-9.0.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-8.0.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-10.0.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-6.0.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-10.0-amd64.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-9.0-amd64.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-7.1.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\dummy\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\btm_utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\btm_matcher.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\Grammar3.7.4.final.0.pickle\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\PatternGrammar3.7.4.final.0.pickle\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\\_uninstall.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\\_pydoc.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\_endian.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\_aix.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\fetch_macholib.bat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\README.ctypes\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\collections\\abc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\_base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__main__.py\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\_msvccompiler.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\bcppcompiler.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\distutils.cfg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\archive_util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\bdist_dumb.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\build_clib.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\bdist_wininst.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\build.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\bdist_rpm.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\bdist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\dumb.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\connection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\context.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\dummy\\connection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\driver.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\conv.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\cookies.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\dyld.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\dylib.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\config.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\debug.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\extension.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\cygwinccompiler.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\cmd.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\dir_util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\dep_util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\errors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\check.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\config.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\clean.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\build_py.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\build_ext.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\build_scripts.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\gnu.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\forkserver.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\heap.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixer_util.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixer_base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_ws_comma.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_asserts.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_apply.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_urllib.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_paren.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_throw.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_raise.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_zip.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_unicode.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_idioms.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_reduce.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_except.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_raw_input.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_isinstance.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_imports2.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_exitfunc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_xreadlines.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_itertools_imports.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\lib2to3\\fixes\\fix_long.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_import.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_numliterals.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_reload.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_itertools.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_imports.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_operator.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_set_literal.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_basestring.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_dict.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_tuple_params.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_methodattrs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_buffer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_renames.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_filter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_has_key.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_sys_exc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_funcattrs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_metaclass.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_xrange.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_map.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_print.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_ne.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_exec.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_future.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_next.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_execfile.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_nonzero.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_input.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_intern.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_repr.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_standarderror.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_getcwdu.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\fix_types.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\literals.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\grammar.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\framework.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\filelist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\fancy_getopt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\file_util.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\install_scripts.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\install_headers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\install_lib.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\install.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\install_egg_info.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\install_data.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\ndbm.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\popen_spawn_win32.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\popen_spawn_posix.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\pool.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\process.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\popen_fork.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\popen_forkserver.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pytree.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\patcomp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pygram.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\main.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\parse.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\pgen.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\parser.py\n> Unpacking C:\\Program 
> [... many similar "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\..." installer log lines trimmed ...]
4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_ne.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_imports.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_imports2.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_except.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_execfile.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_sys_exc.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_next.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_paren.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_urllib.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_reload.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_buffer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_getcwdu.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_print.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_itertools.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_urllib.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_numliterals.cpython-37.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_exitfunc.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_zip.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_funcattrs.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_execfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_zip.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_imports.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_reload.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_itertools_imports.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_tuple_params.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_tuple_params.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_xrange.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_funcattrs.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_buffer.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_methodattrs.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_paren.cpython-37.opt-1.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_set_literal.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_ws_comma.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_imports2.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_raise.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_exitfunc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_ws_comma.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_except.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_apply.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_map.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_raw_input.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_numliterals.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_future.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_standarderror.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_next.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\fixes\\__pycache__\\fix_raise.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\distutils\\__pycache__\\filelist.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\filelist.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\machinery.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\machinery.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\machinery.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__pycache__\\gnu.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__pycache__\\gnu.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__pycache__\\gnu.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\managers.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\heap.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\forkserver.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\heap.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\heap.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\forkserver.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\forkserver.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\main.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\lib2to3\\__pycache__\\main.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\main.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\literals.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\grammar.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\grammar.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\grammar.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\literals.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\literals.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\__pycache__\\framework.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\__pycache__\\framework.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ctypes\\macholib\\__pycache__\\framework.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\main.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\main.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\loader.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\loader.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\main.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\unittest\\__pycache__\\loader.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\logging\\__pycache__\\handlers.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\msvc9compiler.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\msvc9compiler.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\log.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\msvc9compiler.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\log.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\log.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_egg_info.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_data.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_data.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_scripts.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_data.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_scripts.cpython-37.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__pycache__\\ndbm.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__pycache__\\ndbm.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\dbm\\__pycache__\\ndbm.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_spawn_win32.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\pool.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_fork.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_fork.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\pool.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_forkserver.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_spawn_posix.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_spawn_posix.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_spawn_win32.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\pool.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\process.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_forkserver.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\multiprocessing\\__pycache__\\process.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_spawn_win32.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_forkserver.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_fork.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\popen_spawn_posix.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\process.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\patcomp.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\patcomp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\patcomp.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\pgen.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\parse.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\pgen.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\parse.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\parse.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\pgen.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__pycache__\\parser.cpython-37.opt-1.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__pycache__\\parser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\html\\__pycache__\\parser.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__pycache__\\process.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__pycache__\\process.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\msvccompiler.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\msvccompiler.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\msvccompiler.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\resources.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\resources.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\importlib\\__pycache__\\resources.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\reduction.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\resource_sharer.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\reduction.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\resource_sharer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\resource_sharer.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\multiprocessing\\__pycache__\\queues.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\queues.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\queues.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\reduction.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\refactor.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\refactor.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\pytree.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\pytree.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\pytree.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\pygram.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\refactor.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\pygram.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\__pycache__\\pygram.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__pycache__\\process.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\result.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\result.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\unittest\\__pycache__\\runner.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\result.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\register.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\register.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\spawn.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\semaphore_tracker.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\synchronize.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\synchronize.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\spawn.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\semaphore_tracker.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\sharedctypes.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\spawn.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\semaphore_tracker.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\sharedctypes.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\util.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\multiprocessing\\__pycache__\\sharedctypes.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\multiprocessing\\__pycache__\\synchronize.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\token.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\tokenize.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\tokenize.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\tokenize.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\token.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\lib2to3\\pgen2\\__pycache__\\token.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\http\\__pycache__\\server.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__pycache__\\thread.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__pycache__\\thread.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\concurrent\\futures\\__pycache__\\thread.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\signals.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\signals.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\signals.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\runner.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\unittest\\__pycache__\\suite.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\suite.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\runner.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\unittest\\__pycache__\\suite.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\spawn.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\text_file.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\text_file.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\text_file.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\spawn.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\sysconfig.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\unixccompiler.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\spawn.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\unixccompiler.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\sysconfig.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\sysconfig.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\__pycache__\\unixccompiler.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
> [pgAdmin 4 installer log trimmed: several hundred lines of the form
>     Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\...
> listing the files of the bundled Python 3.7 virtualenv as they are
> unpacked (stdlib modules such as encodings, distutils, json, curses,
> xmlrpc, turtledemo, and site-packages including requests, pytz,
> htmlmin, sshtunnel, and the sphinxcontrib packages).]
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\\__pycache__\\clock.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\\__pycache__\\colormixer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\\__pycache__\\clock.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\\__pycache__\\clock.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\\__pycache__\\colormixer.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\turtledemo\\__pycache__\\colormixer.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests\\__pycache__\\cookies.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests\\__pycache__\\compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_kr.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp932.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp874.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp866.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1250.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp866.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1251.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1250.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\cp1251.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jis_2004.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1026.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1140.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1252.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp863.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp65001.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp865.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1125.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1255.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1255.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_kr.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1257.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1253.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp875.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1255.cpython-37.opt-1.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1257.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jp.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1258.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1026.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp65001.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jp.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp864.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1256.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1026.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp863.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp950.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_kr.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1253.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jis_2004.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1254.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp949.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\cp1140.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1256.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1125.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1006.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1251.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jisx0213.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp949.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp949.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp863.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jisx0213.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1252.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp862.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1006.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp865.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp869.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1258.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\cp864.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1006.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp869.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp950.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp874.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1253.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp950.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp875.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jisx0213.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1252.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1250.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1257.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1254.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp932.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1125.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1140.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp65001.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\euc_jis_2004.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1256.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp865.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp866.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1258.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp864.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp932.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp874.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp869.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp1254.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\cp875.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\\__pycache__\\decoder.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\\__pycache__\\encoder.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\\__pycache__\\decoder.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\\__pycache__\\encoder.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\\__pycache__\\encoder.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\json\\__pycache__\\decoder.cpython-37.opt-1.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\__pycache__\\editor.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\__pycache__\\flask_mail.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\__pycache__\\easy_install.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests\\__pycache__\\exceptions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\__pycache__\\exceptions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_lib.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_egg_info.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_headers.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_egg_info.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_scripts.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_lib.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_headers.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_headers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\__pycache__\\install_lib.cpython-37.opt-2.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_16.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_centeuro.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_cyrillic.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_4.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\hz.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_1.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_10.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_t.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_latin2.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_greek.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_u.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_latin2.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gb2312.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_kr.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_11.cpython-37.opt-1.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\hz.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_13.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_7.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_roman.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_3.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_8.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_u.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_10.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_iceland.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_1.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_arabic.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\johab.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_2.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_6.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_3.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\iso8859_11.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_16.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_farsi.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_10.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_centeuro.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_9.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_t.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gb18030.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_3.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gb2312.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_u.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_2.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_8.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_2004.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_1.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_farsi.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\iso8859_9.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_14.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\idna.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_2004.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_centeuro.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_3.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\latin_1.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gb2312.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\latin_1.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\kz1048.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\hp_roman8.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_13.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\latin_1.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_2.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\kz1048.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_roman.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\iso8859_15.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_r.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_ext.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gbk.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_iceland.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gbk.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_greek.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\hex_codec.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_5.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_9.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_arabic.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_iceland.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_6.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_latin2.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_croatian.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\hp_roman8.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\hz.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\hex_codec.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\hp_roman8.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_r.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_7.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_5.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_roman.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\koi8_t.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gb18030.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_3.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\gb18030.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_greek.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_15.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_1.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_14.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\encodings\\__pycache__\\koi8_r.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_13.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_2004.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_11.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_6.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\idna.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_14.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\johab.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_4.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_2.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_3.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_4.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso2022_jp_ext.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\idna.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\mac_arabic.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\encodings\\__pycache__\\iso8859_8.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\North_Dakota\\Center\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Indiana\\Knox\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Indiana\\Indianapolis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Argentina\\Buenos_Aires\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Argentina\\ComodRivadavia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Argentina\\La_Rioja\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Argentina\\Catamarca\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\America\\Argentina\\Jujuy\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Arctic\\Longyearbyen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Bratislava\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Kaliningrad\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Dublin\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Belfast\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Kiev\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Guernsey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Kirov\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Astrakhan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Chisinau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Budapest\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\London\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Jersey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Prague\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Oslo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Tiraspol\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Gibraltar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Copenhagen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Berlin\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Bucharest\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Andorra\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Brussels\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Amsterdam\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Isle_of_Man\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Europe\\Athens\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Chile\\EasterIsland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Galapagos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Guadalcanal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Kosrae\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Efate\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Kiritimati\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Honolulu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Kwajalein\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Guam\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Johnston\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Apia\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Easter\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Gambier\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Fiji\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Funafuti\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Enderbury\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Fakaofo\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Saipan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Pacific\\Bougainville\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT0\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-8\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-2\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-5\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-7\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+5\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-0\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+12\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\Greenwich\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+11\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+10\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-6\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+4\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-13\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-1\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+2\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+3\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-9\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-10\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-14\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+8\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-11\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+1\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+0\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+6\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+9\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-12\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-4\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT+7\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Etc\\GMT-3\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Canada\\Mountain\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Canada\\Atlantic\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Brisbane\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Currie\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Eucla\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Queensland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Lord_Howe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\LHI\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Lindeman\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Hobart\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Australia\\Tasmania\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Ceuta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Lome\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\El_Aaiun\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Johannesburg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Accra\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Banjul\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Khartoum\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Maseru\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Algiers\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Abidjan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Ouagadougou\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Timbuktu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Nouakchott\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Cairo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Mbabane\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Bissau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Tripoli\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Conakry\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Dakar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Juba\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Freetown\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Bamako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Africa\\Casablanca\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\St_Helena\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Cape_Verde\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Faroe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Reykjavik\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Faeroe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Azores\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Bermuda\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Jan_Mayen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Atlantic\\Canary\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Antarctica\\Davis\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Antarctica\\DumontDUrville\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Antarctica\\Casey\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Indian\\Cocos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Indian\\Kerguelen\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Indian\\Chagos\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Indian\\Christmas\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Katmandu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Kuching\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Baghdad\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Dili\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Kuala_Lumpur\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Choibalsan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Dacca\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Bishkek\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Irkutsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Karachi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Atyrau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Damascus\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Dhaka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Dushanbe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Colombo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Kashgar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Tehran\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Barnaul\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Kabul\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Kolkata\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Aqtobe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Jayapura\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Khandyga\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Krasnoyarsk\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Jakarta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Beirut\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Gaza\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Calcutta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Tokyo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Aqtau\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Anadyr\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Ashgabat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Bahrain\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Famagusta\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Chita\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Tel_Aviv\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Urumqi\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Hebron\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Kathmandu\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Baku\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Amman\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Ashkhabad\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Jerusalem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Kamchatka\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Almaty\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Hovd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Qatar\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Hong_Kong\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Asia\\Brunei\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\\Central\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\\Aleutian\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\\Michigan\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\\Alaska\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\\East-Indiana\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\\Indiana-Starke\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\US\\Hawaii\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna-2.8.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna-2.8.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Mail-0.9.1.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Mail-0.9.1.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse-0.2.4.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse-0.2.4.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests-2.22.0.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\requests-2.22.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson-3.16.0.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson-3.16.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_SQLAlchemy-2.3.2.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_SQLAlchemy-2.3.2.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2-2.8.3.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2-2.8.3.dist-info\\METADATA\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Werkzeug-0.16.0.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Werkzeug-0.16.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Mako-1.1.0.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Mako-1.1.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet-3.0.4.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet-3.0.4.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Paranoid-0.2.0.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Paranoid-0.2.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\NZ-CHAT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\MET\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\MST7MDT\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\posixrules\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Portugal\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\ROC\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Poland\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\NZ\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\MST\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pytz\\zoneinfo\\Singapore
[… several hundred similar "> Unpacking" lines omitted: the pgAdmin 4 installer unpacking files under C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\ — pytz zoneinfo data, babel locale-data files, and .dist-info metadata for bundled packages (idna, requests, psycopg2, Flask extensions, etc.); no error lines appear in this excerpt …]
> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\br.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ce.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\eu.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\id.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\he.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yue_Hans.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\te.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ko.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sw.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ka.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\fi.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\fr_CA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\fr.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pl.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ks.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\mr.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sah.dat\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hant.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\el.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\chr.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\de.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ksh.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zu.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\fy.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\nds.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tk.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\gsw.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\es.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ja.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\kea.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\se_FI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\th.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\af.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ia.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ha_NE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\az.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\to.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\fo.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uz_Cyrl.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\nn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ca.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ta.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pa.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\so.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ky.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vi.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ug.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\as.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sk.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\jv.dat\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\lo.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sq.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\be.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\se.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\mt.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sd.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\rm.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Latn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ur.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ms.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\is.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\qu.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\kab.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\gu.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tt.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\eo.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yo_BJ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\fil.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\mn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\messages\\frontend.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\test.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\urls.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\serving.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wsgi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\datastructures.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\routing.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\http.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\__pycache__\\routing.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\__pycache__\\urls.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\__pycache__\\http.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\__pycache__\\datastructures.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\__pycache__\\test.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\\ubuntu.ttf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\totp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\passlib\\context.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\apache.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\__pycache__\\context.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\__pycache__\\totp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\__pycache__\\apache.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\utils\\handlers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\utils\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bo.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\chr_US.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\as_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_QA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ca_AD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ca_ES_VALENCIA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_SS.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_KM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_JO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_TD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_SD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_SA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_SY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\az_Latn_AZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bg_BG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bas.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ceb.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bem_ZM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\af_ZA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bn_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_LB.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ca_ES.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_DJ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_EG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_IQ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_MR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_001.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\az_Cyrl_AZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_OM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_LY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_IL.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bo_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_SO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\cgg.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\az_Latn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_ER.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\asa_TZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bez.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bs_Latn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\cs_CZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ak.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_MA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ckb_IQ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_EH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_TN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bas_CM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ak_GH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_YE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bo_CN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bn_BD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ce_RU.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\brx_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\asa.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_AE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bm.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\cgg_UG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bem.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ast_ES.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_PS.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\agq.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ccp_BD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\bs_Cyrl_BA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bez_TZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ca_FR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bs_Latn_BA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\be_BY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ceb_PH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\agq_CM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_BH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_KW.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ca_IT.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar_DZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\br_FR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\am_ET.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ckb_IR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ccp_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\af_NA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\bm_ML.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\\style.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\theme.conf\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\static\\alabaster.css_t\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\static\\custom.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_IO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_JE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_MH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_ER.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\dua.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_HK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_MO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_FI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_DE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_AG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_CK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\dz_BT.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_NZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_KE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ebu_KE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ee_TG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_CA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_PG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\dje.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_BM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_BI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_AS.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_NG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ebu.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_JM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_MT.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_GH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_NU.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_CC.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_BZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_NF.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_KI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_GM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\de_AT.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_CX.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_MW.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\de_BE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_CY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_150.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\de_LU.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\da_DK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_CH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_GY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_KN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_LS.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_GI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\en_AI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
[pgAdmin 4 installer log excerpt: several hundred repetitive lines of the form "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\babel\locale-data\<locale>.dat" elided.]
4\\venv\\Lib\\site-packages\\babel\\locale-data\\prg_001.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\prg.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\shi_Latn_MA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sg.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\shi_Tfng.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\rn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\rw.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_ST.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\smn_FI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sah_RU.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\qu_BO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sbp.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sd_PK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sg_CF.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ru_MD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ps_AF.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sk_SK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_BR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\si_LK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\rwk_TZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sbp_TZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\se_SE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_MO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_GQ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\saq.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\rm_CH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\rwk.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ro_MD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\qu_EC.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ru_KG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\seh.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\shi_Latn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\qu_PE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ses.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\ru_UA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ru_BY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sl_SI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ps_PK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\saq_KE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ro_RO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_MZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_CV.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ru_KZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_CH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\shi.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ses_ML.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\rw_RW.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ru_RU.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\pt_AO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tk_TM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vi_VN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\vun.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vai.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tt_RU.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tzm_MA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ta_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sw_KE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vai_Latn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uz_Latn_UZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sn_ZW.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ur_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tr_CY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sw_CD.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uz_Arab_AF.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Cyrl_RS.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ug_CN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\to_TO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tg.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Latn_RS.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uz_Arab.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uk_UA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Latn_XK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ti_ET.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\teo_UG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ta_MY.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vai_Vaii_LR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tr_TR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sv_SE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vo.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sw_TZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sq_AL.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uz_Latn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sn.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Cyrl_XK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\teo_KE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\so_ET.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vun_TZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Cyrl.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Cyrl_ME.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vai_Vaii.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Latn_BA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sq_MK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\twq_NE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ta_LK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\teo.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sv_AX.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\te_IN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\th_TH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sq_XK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ta_SG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\so_SO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vai_Latn_LR.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\ur_PK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\so_KE.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\so_DJ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ti_ER.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sv_FI.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\twq.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uz_Cyrl_UZ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sw_UG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Latn_ME.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr_Cyrl_BA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\vo_001.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tg_TJ.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\tzm.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\certifi-2019.9.11.dist-info\\metadata.json\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pyparsing-2.4.2.dist-info\\metadata.json\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\wo.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hant_TW.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\xog_UG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yue_Hant_HK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zgh_MA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\xog.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yue_Hans_CN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yue_Hant.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hans_HK.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yi.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zu_ZA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yav_CM.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yo_NG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hant_MO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\wae_CH.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yi_001.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\xh_ZA.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\wae.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hans_MO.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\locale-data\\wo_SN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\yav.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\xh.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hans_CN.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hans.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zgh.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\zh_Hans_SG.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\messages\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto-0.24.0.dist-info\\metadata.json\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_login\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_login\\__about__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\\more.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\\source.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\\console.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\\less.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\shared\\debugger.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\flask_migrate\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask-multidb\\script.py.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask-multidb\\alembic.ini.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask\\script.py.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\templates\\flask\\alembic.ini.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\python_editor-1.0.4.dist-info\\metadata.json\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\layout.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\donate.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\relations.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\about.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\navigation.html\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_babelex\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin\\python3html\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\recaptcha\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\__init__.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\localtime\\_win32.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\localtime\\_unix.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\localtime\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\messages\\checkers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\messages\\catalog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_login\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\_internal.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\_reloader.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\\atom.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\\cache.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\middleware\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\auth.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\base_request.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\base_response.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\accept.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_paranoid\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\_version.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_babelex\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\apps.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\utils\\binary.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\form.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\csrf.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\file.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\i18n.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\html5.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_wtf\\recaptcha\\fields.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\lists.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\languages.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\babel\\messages\\jslexer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\messages\\extract.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_login\\config.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\formparser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\local.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\filesystem.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\\fixers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\\lint.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\contrib\\iterio.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\middleware\\http_proxy.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\middleware\\lint.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\middleware\\dispatcher.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\etag.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\common_descriptors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\wrappers\\json.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\werkzeug\\debug\\console.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_migrate\\cli.py\n> Unpacking 
> [pgAdmin 4 installer log omitted: several hundred repetitive "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\..." lines listing bundled Python packages (flask, werkzeug, babel, passlib, wtforms, jinja2, etc.)]
4\\venv\\Lib\\site-packages\\passlib\\utils\\compat\\_ordered_dict.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\ext\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\ext\\django\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\_md4.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\scrypt\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\scrypt\\_salsa.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\scrypt\\_builtin.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\scrypt\\_gen_files.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\_blowfish\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\_blowfish\\_gen_files.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\pwhash\\_argon2.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\pwhash\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\jinja2\\_identifier.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\fields\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\wtforms.pot\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\ru\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\nl\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\nb\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\pt\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\es\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\sk\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\en\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\el\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\cy\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\ar\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\locale\\pl\\LC_MESSAGES\\wtforms.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\wtforms\\widgets\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\i18n\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\appengine\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\dateutil\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\django\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\django\\templatetags\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\sqlalchemy\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\csrf\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\csrf\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pytz-2018.9.dist-info\\zip-safe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\_termui_impl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\_bashcomplete.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\_textwrap.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\_unicodefun.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\_compat.py\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools-40.8.0.dist-info\\zip-safe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\_version.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\_structures.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\__about__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\extern\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\cisco.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\argon2.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\_blowfish\\base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\pwhash\\argon2id.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\pwhash\\argon2i.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\asyncfilters.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\asyncsupport.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\jinja2\\bccache.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\_winconsole.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\cffi_opcode.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\commontypes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\agent.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\compress.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\auth_handler.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\common.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\buffered_pipe.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\client.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\ber.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\_winapi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\appdirs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\utils\\des.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\django.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\digests.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\passlib\\handlers\\des_crypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\digest.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_shorthash.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_secretstream.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_scalarmult.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_box.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_pwhash.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_aead.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_generichash.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_kx.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_secretbox.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_hash.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\crypto_sign.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\debug.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\defaults.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\constants.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\widgets\\core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\wtforms\\ext\\appengine\\db.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\csrf\\core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\decoder.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\decorators.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\config.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\fshp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\hash.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\encoding.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\hashlib.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\ext.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\idtracking.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\form.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\i18n.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\fields\\html5.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\widgets\\html5.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\i18n\\form.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\appengine\\fields.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\dateutil\\fields.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\django\\i18n.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\django\\fields.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\sqlalchemy\\fields.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\csrf\\form.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\csrf\\fields.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\encoder.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\errors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\formatting.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\globals.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\error.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\ffiplatform.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\kex_group16.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\kex_gex.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\ed25519key.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\ecdsakey.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\hostkeys.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\dsskey.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\kex_ecdh_nist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\kex_group1.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\file.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\kex_curve25519.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\kex_group14.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\utils\\md4.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\mssql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\mysql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\oracle.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\ldap_digests.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\misc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\md5_crypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\ext\\django\\models.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\meta.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\optimizer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\nativetypes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\loaders.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\lexer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\nodes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\meta.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\appengine\\ndb.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\django\\orm.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\sqlalchemy\\orm.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\ordered_dict.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\lock.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\model.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\kex_gss.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\message.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\markers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\postgres.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\pbkdf2.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\scrypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\phpass.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\scram.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\roundup.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\public.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\secret.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\randombytes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\pwhash\\scrypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\runtime.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\sandbox.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\raw_json.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\simplejson\\scanner.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\click\\parser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\pkgconfig.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\packet.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\server.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\pkey.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\rsakey.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\proxy.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\pipe.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\py3compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\primes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\requirements.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\py31compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\sha2_crypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\sha1_crypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\sun_md5_crypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\signing.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\sodium_core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\fields\\simple.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\ext\\csrf\\session.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wtforms\\csrf\\session.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cffi\\setuptools_ext.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\sftp_handle.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\sftp_file.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\sftp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\ssh_exception.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\ssh_gss.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\sftp_attr.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\sftp_server.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko\\sftp_si.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\tags.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging\\specifiers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\six.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\utils\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\utils\\compat\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\windows.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\handlers\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\ext\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib\\crypto\\scrypt\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\bindings\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\pwhash\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\visitor.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\jinja2\\tests.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\venv\Lib\site-packages\wtforms\utils.py
> Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\wtforms\validators.py
> [... several hundred further "Unpacking" lines elided for brevity; all cover files under C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\ for the bundled packages (wtforms, simplejson, click, cffi, paramiko, packaging, pkg_resources, passlib, nacl, jinja2, bcrypt, pytz, pip, setuptools, SQLAlchemy, sphinxcontrib_devhelp, sphinxcontrib_qthelp, idna) ...]
> Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\pip\_vendor\idna\__pycache__\uts46data.cpython-37.pyc
> Unpacking C:\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_tokenizer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\html5parser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\constants.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\__pycache__\\_tokenizer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\__pycache__\\html5parser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\__pycache__\\constants.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\connectionpool.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\msgpack\\fallback.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\database.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\w64.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\t64.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\locators.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\w32.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\metadata.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\t32.exe\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\wheel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\util.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\database.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\locators.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\_backport\\tarfile.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\_backport\\__pycache__\\tarfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\_structures.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\__about__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\metadata.json\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\cli\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\idna\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\virtualenv_support\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\appdirs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\vcs\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\operations\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\req\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cli\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\distributions\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\webencodings\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\_internal_utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__version__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\api.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\adapters.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\progress\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\lockfile\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\_cmd.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\adapter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\caches\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\_structures.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\__about__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\_compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\_in_process.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_inputstream.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_ihatexml.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\alphabeticalattributes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treewalkers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treeadapters\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_trie\\_base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_trie\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treebuilders\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\certifi\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\certifi\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\colorama\\ansitowin32.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\colorama\\__init__.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\colorama\\ansi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\_collections.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\_appengine_environ.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\_securetransport\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\abnf_regexp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\_mixin.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\api.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\backports\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\ssl_match_hostname\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\ssl_match_hostname\\_implementation.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\msgpack\\_version.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\msgpack\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\_backport\\sysconfig.cfg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\_backport\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\bdist_wheel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna\\codec.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\build_env.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cache.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\vcs\\bazaar.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\operations\\check.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\candidate.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cli\\base_command.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cli\\cmdoptions.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cli\\autocompletion.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\distributions\\base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\check.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\appdirs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\certs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\auth.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\progress\\bar.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\cache.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\codec.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\check.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\colorlog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\build.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treewalkers\\base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treebuilders\\base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\appengine.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\_securetransport\\bindings.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\builder.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\charsetgroupprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\chardistribution.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\codingstatemachine.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\big5prober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\charsetprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\big5freq.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\cli\\convert.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna\\core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\configuration.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\deprecation.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\compat.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\encoding.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\req\\constructors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\debug.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\configuration.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\completion.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\download.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\cookies.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\progress\\counter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\controller.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\envbuild.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treewalkers\\etree.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treewalkers\\etree_lxml.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treewalkers\\dom.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_trie\\datrie.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treebuilders\\etree.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treebuilders\\etree_lxml.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treebuilders\\dom.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\certifi\\core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\connection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\connection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\euckrfreq.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\eucjpprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\euctwprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\euckrprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\euctwfreq.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\cp949prober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\enums.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\escsm.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\escprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna\\intranges.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\glibc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\hashes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\filesystem.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\vcs\\git.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\operations\\freeze.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\index.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\format_control.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\distributions\\installed.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\help.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\install.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\hash.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\freeze.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\webencodings\\labels.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\help.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\hooks.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\progress\\helpers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\filewrapper.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\heuristics.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\caches\\file_cache.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\intranges.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\inject_meta_charset.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treewalkers\\genshi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treeadapters\\genshi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\colorama\\initialise.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\filepost.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\fields.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
[... pgAdmin 4 installer "Unpacking ..." log output trimmed ...]
4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\__pycache__\\filewrapper.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\__pycache__\\heuristics.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\caches\\__pycache__\\file_cache.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\__pycache__\\idnadata.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\__pycache__\\inject_meta_charset.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treewalkers\\__pycache__\\genshi.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treeadapters\\__pycache__\\genshi.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\colorama\\__pycache__\\initialise.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\__pycache__\\exceptions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\__pycache__\\filepost.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\__pycache__\\fields.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__pycache__\\exceptions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\msgpack\\__pycache__\\fallback.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\msgpack\\__pycache__\\exceptions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\index.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\hebrewprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\gb2312freq.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\euctwprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\euctwfreq.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\__pycache__\\markers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\__pycache__\\metadata.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna\\__pycache__\\intranges.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\__pycache__\\legacy_resolve.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\__pycache__\\locations.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\logging.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\marker_files.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\vcs\\__pycache__\\mercurial.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\__pycache__\\link.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cli\\__pycache__\\main_parser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\distributions\\__pycache__\\installed.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\__pycache__\\install.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\__pycache__\\list.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\webencodings\\__pycache__\\labels.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\lockfile\\__pycache__\\linklockfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\__pycache__\\markers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\__pycache__\\intranges.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\__pycache__\\lint.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\_securetransport\\__pycache__\\low_level.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__pycache__\\iri.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\backports\\__pycache__\\makefile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\markers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\manifest.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\langhungarianmodel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\langgreekmodel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\langthaimodel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\mbcssm.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\langcyrillicmodel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\jisfreq.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\__pycache__\\pkginfo.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\__pycache__\\pep425tags.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\cli\\__pycache__\\pack.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\idna\\__pycache__\\package_data.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\__pycache__\\pep425tags.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\__pycache__\\pyproject.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\misc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\packaging.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\models.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\outdated.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\operations\\__pycache__\\prepare.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\req\\__pycache__\\req_set.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\req\\__pycache__\\req_install.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\req\\__pycache__\\req_file.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\req\\__pycache__\\req_uninstall.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\req\\__pycache__\\req_tracker.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cli\\__pycache__\\parser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\webencodings\\__pycache__\\mklabels.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__pycache__\\packages.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__pycache__\\models.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\lockfile\\__pycache__\\mkdirlockfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\lockfile\\__pycache__\\pidlockfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\caches\\__pycache__\\redis_cache.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pkg_resources\\__pycache__\\py31compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\idna\\__pycache__\\package_data.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\__pycache__\\optionaltags.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\_trie\\__pycache__\\py.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\__pycache__\\poolmanager.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\__pycache__\\request.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\__pycache__\\ntlmpool.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\__pycache__\\pyopenssl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\queue.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\request.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__pycache__\\parseresult.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__pycache__\\misc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__pycache__\\normalizers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\metadata.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\_backport\\__pycache__\\misc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\__pycache__\\requirements.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\__pycache__\\resolve.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\setuptools_build.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\__pycache__\\selection_prefs.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\__pycache__\\search_scope.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\distributions\\__pycache__\\source.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\__pycache__\\search.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\__pycache__\\show.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\__pycache__\\six.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\__pycache__\\retrying.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__pycache__\\sessions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\__pycache__\\serialize.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\__pycache__\\requirements.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\__pycache__\\serializer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\__pycache__\\sanitizer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\treeadapters\\__pycache__\\sax.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\__pycache__\\response.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\__pycache__\\socks.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\contrib\\__pycache__\\securetransport.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\response.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\retry.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\__pycache__\\six.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\scripts.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\resources.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\_backport\\__pycache__\\shutil.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\sbcharsetprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\sbcsgroupprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\__pycache__\\version.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\__pycache__\\specifiers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pkg_resources\\_vendor\\packaging\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\__pycache__\\util.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\cli\\__pycache__\\unpack.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\temp_dir.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\typing.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\virtualenv.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\utils\\__pycache__\\ui.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\vcs\\__pycache__\\subversion.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\vcs\\__pycache__\\versioncontrol.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\models\\__pycache__\\target_python.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\cli\\__pycache__\\status_codes.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\__pycache__\\uninstall.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\webencodings\\__pycache__\\tests.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__pycache__\\status_codes.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\requests\\__pycache__\\structures.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\progress\\__pycache__\\spinner.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\lockfile\\__pycache__\\symlinklockfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\lockfile\\__pycache__\\sqlitelockfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\__pycache__\\version.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\__pycache__\\specifiers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\packaging\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\url.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\timeout.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\ssl_.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\util\\__pycache__\\wait.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__pycache__\\validators.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\rfc3986\\__pycache__\\uri.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\version.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\_backport\\__pycache__\\sysconfig.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\version.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\PyNaCl-1.3.0.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\PyNaCl-1.3.0.dist-info\\WHEEL\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\PyNaCl-1.3.0.dist-info\\LICENSE.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\PyNaCl-1.3.0.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\PyNaCl-1.3.0.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\PyNaCl-1.3.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\DESCRIPTION.rst\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\WHEEL\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\LICENSE.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\entry_points.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alabaster-0.7.12.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel\\__pycache__\\wheelfile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\virtualenv_support\\wheel-0.33.6-py2.py3-none-any.whl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_internal\\__pycache__\\wheel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\distributions\\__pycache__\\wheel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_internal\\commands\\__pycache__\\wheel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\webencodings\\__pycache__\\x_user_defined.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\cachecontrol\\__pycache__\\wrapper.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pep517\\__pycache__\\wrappers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\html5lib\\filters\\__pycache__\\whitespace.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\colorama\\__pycache__\\win32.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\colorama\\__pycache__\\winterm.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\distlib\\__pycache__\\wheel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\jpcntx.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\extras.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\extras.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\dateutil\\rrule.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\verification.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\web.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\whiley.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\x10.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\varnish.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\typoscript.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\_asy_builtins.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\_cl_builtins.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\big5freq.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\big5prober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\chardistribution.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\charsetprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\cli\\__pycache__\\chardetect.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\_json.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\_range.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\_lru_cache.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\markupsafe\\__pycache__\\_native.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\itsdangerous\\__pycache__\\_json.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\colorama\\__pycache__\\ansi.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\colorama\\__pycache__\\ansitowin32.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\dateutil\\__pycache__\\_version.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\contrib\\__pycache__\\appengine.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\contrib\\_securetransport\\__pycache__\\bindings.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\bbcode.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\_mapping.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\algol_nu.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\borland.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\algol.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\arduino.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\autumn.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\bw.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\abap.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\agile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\chapel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\asm.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\apl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\_postgres_builtins.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\ambient.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\business.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\algebra.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\automation.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\_vim_builtins.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\boa.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\_tsql_builtins.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\c_like.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\clean.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\escsm.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\euckrprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\euckrfreq.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\pytoml\\__pycache__\\core.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\errorcodes.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\extensions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\errors.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\itsdangerous\\__pycache__\\exc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\itsdangerous\\__pycache__\\encoding.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\certifi\\__pycache__\\core.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\dateutil\\__pycache__\\easter.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\connectionpool.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\exceptions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\connection.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\filepost.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\fields.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\util\\__pycache__\\connection.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\formatter.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\cmdline.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\filter.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\console.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\default.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\colorful.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\emacs.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\floscript.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\dotnet.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\d.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\configs.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\ezhil.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\dylan.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\data.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\crystal.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\dalvik.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\css.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\compiled.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\felix.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\erlang.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\eiffel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\console.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\langturkishmodel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\gb2312prober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\langbulgarianmodel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\langhebrewmodel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\itsdangerous\\__pycache__\\jws.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\colorama\\__pycache__\\initialise.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\dateutil\\parser\\__pycache__\\isoparser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\irc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\img.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\html.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\fruity.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\friendly.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\igor.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\hdl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\functional.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\idl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\j.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\haxe.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\hexdump.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\foxpro.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\installers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\inferno.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\html.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\graphics.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\mbcsgroupprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\latin1prober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\mbcharsetprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pip\\_vendor\\pytoml\\__pycache__\\parser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\pool.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\poolmanager.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\contrib\\__pycache__\\ntlmpool.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\contrib\\__pycache__\\pyopenssl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\contrib\\_securetransport\\__pycache__\\low_level.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\util\\__pycache__\\queue.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\packages\\backports\\__pycache__\\makefile.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\lexer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\modeline.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\plugin.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\latex.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\other.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\perldoc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\paraiso_dark.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\pastie.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\monokai.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\murphy.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\native.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\paraiso_light.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\lovelace.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\manni.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\rainbow_dash.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\make.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\oberon.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\markup.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\nimrod.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\parasail.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\prolog.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\pawn.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\r.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\parsers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\nit.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\php.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\ml.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\rdf.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\other.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\perl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\matlab.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\chardet\\__pycache__\\sjisprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\__pycache__\\sql.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\itsdangerous\\__pycache__\\serializer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\itsdangerous\\__pycache__\\signer.cpython-37.pyc\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\dateutil\\__pycache__\\relativedelta.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\dateutil\\zoneinfo\\__pycache__\\rebuild.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\response.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\__pycache__\\request.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\contrib\\__pycache__\\socks.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\contrib\\__pycache__\\securetransport.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\util\\__pycache__\\response.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\util\\__pycache__\\retry.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\util\\__pycache__\\request.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3\\packages\\__pycache__\\six.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\sphinxext.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\regexopt.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\__pycache__\\scanner.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\formatters\\__pycache__\\rtf.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\sas.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\rrt.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\styles\\__pycache__\\solarized.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\slash.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\sgf.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\shell.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\smalltalk.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\rust.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\snobol.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\sas.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\rebol.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\rnc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\resource.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\special.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
[pgAdmin 4 installer log trimmed: repetitive "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\..." lines omitted]
4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ta\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\zh_TW\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\hi\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ko\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\pl\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\lv\\LC_MESSAGES\\sphinxcontrib.applehelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\templates\\_access.html_t\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ca\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pt_BR\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ru\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hu\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\fi\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\sr_RS\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\nl\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\nb_NO\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\de\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\si\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\da\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\sr@latin\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pt\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\tr\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ja\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hr\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\he\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\mk\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\es\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\fa\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\eo\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\sk\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\id\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\eu\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\fr\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\uk_UA\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\lt\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ne\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\cak\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\el\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\it\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ur\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\sv\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\cy\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\zh_CN\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\sr\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\bn\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\sl\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hi_IN\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\et\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ar\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ta\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\zh_TW\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hi\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ko\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pl\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\lv\\LC_MESSAGES\\sphinxcontrib.htmlhelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\templates\\project.hhc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\templates\\project.hhp\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\generic\\script.py.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\generic\\alembic.ini.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\pylons\\script.py.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\pylons\\alembic.ini.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\multidb\\script.py.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\multidb\\alembic.ini.mako\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography-2.7.dist-info\\LICENSE.APACHE\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography-2.7.dist-info\\LICENSE.BSD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\de\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\da\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sr@latin\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hr\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\he\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\eo\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sk\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\el\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sl\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hi_IN\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\zh_TW\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ko\\LC_MESSAGES\\sphinxcontrib.qthelp.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ca\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt_BR\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ru\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hu\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fi\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\nl\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\nb_NO\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\de\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\si\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\da\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sr@latin\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\tr\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ja\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hr\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\he\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\mk\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\es\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fa\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\eo\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sk\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\id\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\eu\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\fr\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\uk_UA\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\lt\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ne\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\el\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\it\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sv\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\cy\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\zh_CN\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sr\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\bn\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sl\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hi_IN\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\et\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ar\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ta\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\zh_TW\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\hi\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\ko\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\pl\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\lv\\LC_MESSAGES\\sphinxcontrib.devhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ca\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\pt_BR\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ru\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\hu\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\fi\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\nl\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\nb_NO\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\de\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\si\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\da\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\sr@latin\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\pt\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\tr\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ja\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\hr\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\he\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\mk\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\es\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\fa\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\eo\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\sk\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\id\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\eu\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\fr\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\uk_UA\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\lt\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ne\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\cak\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\el\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\it\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ur\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\sv\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\cy\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\zh_CN\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\sr\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\bn\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\sl\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\hi_IN\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\et\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ar\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ta\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\zh_TW\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\hi\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\ko\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\pl\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\lv\\LC_MESSAGES\\sphinxcontrib.applehelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ca\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\pt_BR\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ru\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\hu\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\fi\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sr_RS\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\nl\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\nb_NO\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\de\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\si\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\da\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sr@latin\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\pt\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\tr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ja\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\hr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\he\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\mk\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\es\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\fa\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\eo\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sk\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\id\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\eu\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\fr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\uk_UA\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\lt\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ne\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\cak\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\el\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\it\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ur\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sv\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\cy\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\zh_CN\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\bn\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sl\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\hi_IN\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\et\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ar\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ta\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\zh_TW\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\hi\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ko\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\pl\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\lv\\LC_MESSAGES\\sphinxcontrib.serializinghtml.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ca\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pt_BR\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ru\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\hu\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\fi\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\nl\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\nb_NO\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pt\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ja\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\he\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\es\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\fa\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\id\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\fr\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ne\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\el\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\ur\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\cy\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\zh_CN\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\sr\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\bn\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\et\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\htmlhelp\\locales\\pl\\LC_MESSAGES\\sphinxcontrib.htmlhelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\blinker\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\_perf\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\json\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\_ast_gen.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\ply\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sphinxcontrib.qthelp.pot\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ca\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt_BR\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ru\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hu\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\fi\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\nl\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\nb_NO\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\de\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\si\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\da\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sr@latin\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\tr\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ja\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hr\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\he\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\mk\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\es\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\fa\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\eo\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sk\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\id\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\eu\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\fr\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\uk_UA\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\lt\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ne\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\el\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\it\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sv\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\cy\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\zh_CN\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sr\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\bn\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\sl\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hi_IN\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\et\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ar\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ta\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\vi\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\cs\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\zh_TW\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\hi\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\ko\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\pl\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\qthelp\\locales\\lv\\LC_MESSAGES\\sphinxcontrib.qthelp.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\devhelp\\locales\\sphinxcontrib.devhelp.pot\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\applehelp\\locales\\sphinxcontrib.applehelp.pot\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\jsmath\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\__init__.py\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sphinxcontrib.serializinghtml.pot\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ca\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\pt_BR\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ru\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\hu\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\fi\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sr_RS\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\nl\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\nb_NO\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\de\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\si\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\da\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sr@latin\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\pt\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\pt_PT\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\tr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ja\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\hr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\he\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\mk\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ro\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\es\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\fa\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\eo\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sk\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\id\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\eu\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\fr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\uk_UA\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\lt\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ne\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\cak\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\el\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\it\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ur\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sv\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\cy\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\zh_CN\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sr\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\bn\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\sl\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\hi_IN\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\et\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ar\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\locales\\ta\\LC_MESSAGES\\sphinxcontrib.serializinghtml.po
[... several hundred similar "> Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\..." installer log lines omitted ...]
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\build_clib.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\bdist_egg.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\bdist_rpm.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\bdist_wininst.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\build_ext.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\ast.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\babelplugin.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\autohandler.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\beaker_cache.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\csound.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\capnproto.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\csr.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\crl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\cms.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\flask\\__pycache__\\ctx.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\config.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\debughelpers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\cli.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\__pycache__\\c_lexer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\__pycache__\\c_generator.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\ply\\__pycache__\\ctokens.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\ply\\__pycache__\\cpp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\__pycache__\\config.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\__pycache__\\context.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\__pycache__\\command.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\\__pycache__\\compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\autogenerate\\__pycache__\\compare.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\depends.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\config.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\dep_util.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\develop.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\cache.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\cmd.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\grammar_notation.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\haskell.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\dsls.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\factor.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\igor.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\graph.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\ecl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\forth.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\freefem.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\go.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\elm.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\diff.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\esoteric.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\fortran.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\fantom.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\globals.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\\__pycache__\\exc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\\__pycache__\\impl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\testing\\__pycache__\\env.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\testing\\__pycache__\\fixtures.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\generic\\__pycache__\\env.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\pylons\\__pycache__\\env.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\templates\\multidb\\__pycache__\\env.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\glibc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\extension.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\glob.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\install.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\dist_info.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\install_scripts.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\install_lib.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\install_egg_info.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\egg_info.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\filters.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\exceptions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\extract.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\iolang.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\julia.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\modeling.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\objective.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\nix.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\ooc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\int_fiction.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\modula2.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\monte.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\math.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\keys.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\ocsp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\logging.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_sqlalchemy\\__pycache__\\model.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\__pycache__\\lextab.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\ply\\__pycache__\\lex.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sphinxcontrib\\serializinghtml\\__pycache__\\jsonimpl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\__pycache__\\op.cpython-37.pyc\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\\__pycache__\\langhelpers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\\__pycache__\\messaging.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\\__pycache__\\oracle.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\\__pycache__\\mssql.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\\__pycache__\\mysql.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\lib2to3_ex.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\monkey.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\launch.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\namespaces.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\\packaging\\__pycache__\\markers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\lexer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\lookup.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\linguaplugin.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\pascal.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\python.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\roboconf.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\praat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\pony.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\qvt.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\parser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\pdf.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\pem.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\pkcs12.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser\\__pycache__\\plyparser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\\__pycache__\\pyfiles.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\script\\__pycache__\\revision.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\\__pycache__\\postgresql.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\autogenerate\\__pycache__\\rewriter.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\autogenerate\\__pycache__\\render.cpython-37.pyc\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\testing\\__pycache__\\requirements.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\package_index.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\pep425tags.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\py31compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\py33compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\py27compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\register.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\rotate.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\py36compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\\packaging\\__pycache__\\requirements.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\pygen.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\pyparser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\parsetree.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\pygmentplugin.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\preprocessors.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\typoscript.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\unicon.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\trafficscript.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\theorem.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\tcl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\ruby.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\tsp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\signals.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\testing.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\templating.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\sessions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\json\\__pycache__\\tag.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\util\\__pycache__\\sqla_compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\ddl\\__pycache__\\sqlite.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\operations\\__pycache__\\schemaobj.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\alembic\\operations\\__pycache__\\toimpl.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\unicode_utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\sandbox.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\site-patch.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\__pycache__\\ssl_support.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\setopt.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\test.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\saveopts.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\upload.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\command\\__pycache__\\sdist.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\\__pycache__\\six.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\setuptools\\_vendor\\packaging\\__pycache__\\specifiers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\__pycache__\\template.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\mako\\__pycache__\\runtime.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\mako\\ext\\__pycache__\\turbogears.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\whiley.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\webmisc.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\web.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pygments\\lexers\\__pycache__\\xorg.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_HTMLmin-1.5.0.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_HTMLmin-1.5.0.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_HTMLmin-1.5.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_HTMLmin-1.5.0.dist-info\\LICENSE\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\blinker-1.4.dist-info\\AUTHORS\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\blinker-1.4.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\blinker-1.4.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\blinker-1.4.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\blinker-1.4.dist-info\\LICENSE\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging-19.2.dist-info\\top_level.txt\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging-19.2.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging-19.2.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\packaging-19.2.dist-info\\LICENSE\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel-0.33.6.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel-0.33.6.dist-info\\LICENSE.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel-0.33.6.dist-info\\entry_points.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel-0.33.6.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\wheel-0.33.6.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3-1.25.6.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3-1.25.6.dist-info\\LICENSE.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\urllib3-1.25.6.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\util.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\asn1crypto\\__pycache__\\version.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\WTForms-2.2.1.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\WTForms-2.2.1.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\WTForms-2.2.1.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\paramiko-2.6.0.dist-info\\DESCRIPTION.rst\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko-2.6.0.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko-2.6.0.dist-info\\LICENSE.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko-2.6.0.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\paramiko-2.6.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib-1.7.1.dist-info\\DESCRIPTION.rst\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib-1.7.1.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib-1.7.1.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\passlib-1.7.1.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\wrappers.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask\\__pycache__\\views.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Gravatar-0.5.0.dist-info\\DESCRIPTION.rst\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Gravatar-0.5.0.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Gravatar-0.5.0.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Flask_Gravatar-0.5.0.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\Jinja2-2.10.1.dist-info\\top_level.txt\n> Unpacking C:\\Program 
> [pgAdmin 4 installer log omitted: several hundred "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\site-packages\..." lines listing the bundled Python packages (Jinja2, snowballstemmer, Flask-Migrate, six, sphinxcontrib, alembic, itsdangerous, Flask-Principal, Flask, psutil, cryptography, setuptools, colorama, pip, Babel, SQLAlchemy, chardet, flask_security, and others). The excerpt begins and ends mid-path and contains no diagnostic output beyond the unpacking progress itself.]
4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\engines.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\entities.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\euckrfreq.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\eucjpprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\euctwprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\euckrprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\euctwfreq.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\enums.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\escsm.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\escprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\datastore.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\decorators.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\english_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\interfaces.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\inspection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\pool\\impl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\expression.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\information_schema.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\firebird\\fdb.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\gaerdbms.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\ext.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\hstore.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\instrumentation.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\interfaces.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\identity.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\indexable.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\instrumentation.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\horizontal_shard.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\fixtures.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\exclusions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\jisfreq.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\hebrewprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\gb2312prober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\gb2312freq.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\forms.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\log.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sqlite\\json.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\mxodbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sybase\\mxodbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\firebird\\kinterbasdb.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\json.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\json.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\\legacy.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\connectors\\mxodbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\loading.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\mutable.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\mock.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\latin1prober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\langbulgarianmodel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\mbcsgroupprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\mbcssm.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\langhebrewmodel.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\langcyrillicmodel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\jpcntx.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\langhungarianmodel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\mbcharsetprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\langthaimodel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\langgreekmodel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\langturkishmodel.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\processors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\util\\queue.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\naming.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sqlite\\pysqlite.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sqlite\\pysqlcipher.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\pymssql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\pyodbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sybase\\pyodbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sybase\\pysybase.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\mysqldb.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\pyodbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\reflection.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\mysqlconnector.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\oursql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\pymysql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\pygresql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\pypostgresql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\ranges.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\psycopg2cffi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\pg8000.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\\registry.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\connectors\\pyodbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\properties.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\path_registry.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\orderinglist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\provision.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\profiling.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\replay_fixture.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\pickleable.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\plugin\\plugin_base.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\plugin\\pytestplugin.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\recoverable.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\passwordless.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\registerable.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\types.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\schema.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\util\\topological.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\types.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\engine\\threadlocal.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\engine\\strategies.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\scoping.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\state.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\sync.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\unitofwork.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\serializer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\schema.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\requirements.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_results.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_update_delete.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_dialect.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_ddl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_types.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_insert.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_select.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_sequence.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\test_cte.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\sbcharsetprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\sjisprober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\sbcsgroupprober.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\signals.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\script.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\util\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\pool\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\databases\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\visitors.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sqlite\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\zxjdbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sybase\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\firebird\\__pycache__\\__init__.cpython-37.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\zxjdbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\zxjdbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\oracle\\zxjdbc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\oracle\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\engine\\url.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\engine\\util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\engine\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\connectors\\zxJDBC.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\connectors\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\declarative\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\util.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\warnings.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\plugin\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\suite\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_psposix.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_psaix.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_psosx.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_psbsd.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_common.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\universaldetector.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\version.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\utf8prober.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\cli\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\views.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\utils.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\pool\\__pycache__\\base.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\__pycache__\\base.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\__pycache__\\annotation.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mssql\\__pycache__\\adodbapi.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\postgresql\\__pycache__\\array.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\\__pycache__\\attr.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\\__pycache__\\api.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\__pycache__\\baked.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\declarative\\__pycache__\\base.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\declarative\\__pycache__\\api.cpython-37.pyc\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\__pycache__\\assertsql.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\__pycache__\\assertions.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_pswindows.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psutil\\__pycache__\\_pssunos.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\__pycache__\\babel.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\util\\__pycache__\\compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\pool\\__pycache__\\dbapi_proxy.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\__pycache__\\default_comparator.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\sql\\__pycache__\\crud.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\sybase\\__pycache__\\base.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\firebird\\__pycache__\\base.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\dialects\\mysql\\__pycache__\\cymysql.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\event\\__pycache__\\base.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\orm\\__pycache__\\base.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\__pycache__\\compiler.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\ext\\declarative\\__pycache__\\clsregistry.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\__pycache__\\config.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlalchemy\\testing\\plugin\\__pycache__\\bootstrap.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\codingstatemachine.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\charsetgroupprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\cp949prober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\big5freq.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\big5prober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\chardistribution.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\charsetprober.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\__pycache__\\compat.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\chardet\\cli\\__pycache__\\chardetect.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\flask_security\\__pycache__\\confirmable.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
[... installer log continues: several hundred further "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\..." lines omitted ...]
4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\af.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\ca.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\body.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\admonitions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\readers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\xetex\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\af.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\ca.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\venv\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\autocomplete.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\calltip.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\calltip_w.py\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__main__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\autoexpand.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\autocomplete_w.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\browser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\dutch_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\danish_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\cli.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\compat.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\ciphers.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\cmac.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\dh.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\dsa.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\constant_time.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\cmac.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\dh.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\dsa.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\\concatkdf.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\core.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\code_analyzer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\cs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\da.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\de.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\readers\\doctree.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\docutils_xml.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\cs.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\da.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\de.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\transforms\\components.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\dnd.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\constants.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\commondialog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\colorchooser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\dialog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\debugobj_r.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\idlelib\\dynoption.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\codecontext.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\config_key.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\debugobj.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\debugger_r.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\colorizer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\debugger.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\delegator.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\finnish_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\german_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\formatter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\engine\\grouping.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\engine\\filter_stack.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\exceptions.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\fernet.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\ed25519.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\encode_asn1.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\hashes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\ec.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\ed448.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\hashes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\ed25519.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\ec.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\ed448.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\x509\\general_name.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\examples.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\error_reporting.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\gl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\eo.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\es.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\fa.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\fi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\fr.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\en.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\gl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\eo.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\es.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\fa.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\fi.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\fr.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\en.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\transforms\\frontmatter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\font.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\filedialog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\filelist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\grep.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\irish_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\hindi_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\hungarian_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\lithuanian_stemmer.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\indonesian_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\keywords.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\lexer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\interfaces.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\hmac.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\hmac.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\keywrap.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\twofactor\\hotp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\\hkdf.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\\kbkdf.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\io.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\latex2mathml.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\he.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\ko.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\ja.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\it.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\html.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\images.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\he.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\ko.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\ja.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\lt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\it.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\help.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\hyperparser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\history.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\help_about.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\iomenu.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\norwegian_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\porter_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\nepali_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\filters\\output.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\filters\\others.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\poly1305.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\ocsp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\poly1305.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\padding.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\padding.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\ciphers\\modes.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\\pbkdf2.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\serialization\\pkcs12.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\x509\\name.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\x509\\oid.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\x509\\ocsp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\null.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\nl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\lv.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\lt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\pl.py\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\parts.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\misc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\readers\\pep.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\null.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\nl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\lv.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\pl.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\transforms\\peps.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\transforms\\parts.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\transforms\\misc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\messagebox.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\macosx.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\pathbrowser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\outwin.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\mainmenu.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\parenmatch.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\paragraph.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\multicall.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\percolator.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\russian_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\romanian_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\portuguese_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\filters\\reindent.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\filters\\right_margin.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\rsa.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\rsa.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\roman.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\punctuation_chars.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\roles.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\ru.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\pt_br.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\references.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\pseudoxml.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\odf_odt\\pygmentsformatter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\ru.py\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\pt_br.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\replace.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\run.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\pyparse.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\rpc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\rstrip.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\redirector.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\query.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\runscript.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\spanish_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\snowballstemmer\\swedish_stemmer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\tokens.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\sql.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\filters\\tokens.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\sqlparse\\engine\\statement_splitter.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\twofactor\\totp.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\\scrypt.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\serialization\\ssh.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\tex2mathml_extern.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\unichar2tex.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\tableparser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\sv.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\sk.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\tables.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\readers\\standalone.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\sv.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\sk.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\scrolledtext.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\simpledialog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\scrolledlist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\tooltip.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\search.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\searchengine.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\undo.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\textview.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\tree.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\statusbar.py\n> Unpacking C:\\Program 
4\\venv\\Lib\\site-packages\\sqlparse\\engine\\__pycache__\\statement_splitter.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\twofactor\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\twofactor\\__pycache__\\totp.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\serialization\\__pycache__\\ssh.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\__pycache__\\utils.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\__pycache__\\urischemes.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\__pycache__\\smartquotes.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\__pycache__\\tex2unichar.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\__pycache__\\unichar2tex.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\utils\\math\\__pycache__\\tex2mathml_extern.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\__pycache__\\tableparser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\__pycache__\\sk.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\__pycache__\\sv.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\directives\\__pycache__\\tables.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\readers\\__pycache__\\standalone.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\__pycache__\\sk.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\__pycache__\\sv.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\transforms\\__pycache__\\universal.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\tkinter\\__pycache__\\ttk.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\stackviewer.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\tree.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\squeezer.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\tooltip.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\textview.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\tooltip.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\entry_points.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\__pycache__\\x509.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\__pycache__\\x25519.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\backends\\openssl\\__pycache__\\x448.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\__pycache__\\x25519.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\asymmetric\\__pycache__\\x448.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\primitives\\kdf\\__pycache__\\x963kdf.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\bindings\\_constant_time.cp37-win_amd64.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\bindings\\_padding.cp37-win_amd64.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isomscr-wide.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isogrk4-wide.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isopub.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\mmlextra.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isotech.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isomfrk.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isocyr2.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\mmlextra-wide.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isoamsn.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isolat2.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isoamso.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isomfrk-wide.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\s5defs.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isoamsb.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isogrk4.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isoamsc.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isocyr1.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isobox.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isolat1.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isomscr.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isodia.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isonum.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isomopf.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isomopf-wide.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isogrk2.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isoamsa.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isogrk3.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isoamsr.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\README.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\isogrk1.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\__pycache__\\zh_tw.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\languages\\__pycache__\\zh_cn.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\latex2e\\default.tex\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\latex2e\\titlepage.tex\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\latex2e\\xelatex.tex\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\docutils\\writers\\html4css1\\template.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\html5_polyglot\\template.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\\themes\\README.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\pep_html\\template.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\__pycache__\\zh_tw.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\languages\\__pycache__\\zh_cn.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\transforms\\__pycache__\\writer_aux.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\NEWS2x.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\CREDITS.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle.pyw\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\extend.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\HISTORY.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\README.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\zzdummy.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\zzdummy.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\window.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\WHEEL\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\zip-safe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\RECORD\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\htmlmin-0.1.12.dist-info\\LICENSE\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\xhtml1-symbol.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\xhtml1-special.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\parsers\\rst\\include\\xhtml1-lat1.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\\themes\\medium-black\\__base__\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\\themes\\big-black\\__base__\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\docutils\\writers\\s5_html\\themes\\small-black\\__base__\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser-2.19.dist-info\\top_level.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser-2.19.dist-info\\WHEEL\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser-2.19.dist-info\\INSTALLER\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser-2.19.dist-info\\RECORD\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser-2.19.dist-info\\METADATA\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pycparser-2.19.dist-info\\LICENSE\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\venv\\scripts\\common\\activate\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\TODO.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\pyshell.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\editor.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\editor.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\configdialog.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\configdialog.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\pyshell.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\editor.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\pyshell.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle.icns\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_config.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_configdialog.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_configdialog.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_configdialog.cpython-37.pyc\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_configdialog.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\python_lib.cat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_msi.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\pyexpat.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_testcapi.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_asyncio.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\py.ico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_bz2.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\pyc.ico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_socket.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_elementtree.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_ssl.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_ctypes.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_sqlite3.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_lzma.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_testbuffer.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\pyd.ico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_hashlib.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_tkinter.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_overlapped.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\python_tools.cat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\openfolder.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle.ico\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle_16.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle_32.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\folder.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle_48.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle_16.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\tk.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle_48.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\plusnode.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\idle_32.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\python.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\Icons\\minusnode.gif\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__init__.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\scripts\\images\\pgadmin-help.ico\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_redirector.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_delegator.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_calltip.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_autoexpand.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_config_key.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_searchbase.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\idlelib\\idle_test\\test_help_about.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_codecontext.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_query.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_hyperparser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_paragraph.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_autocomplete.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_colorizer.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\mock_idle.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_editor.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\htest.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_editmenu.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_help.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_run.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_percolator.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_searchengine.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_parenmatch.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_pathbrowser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_calltip_w.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_debugger_r.py\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_iomenu.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_rstrip.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_autocomplete_w.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_mainmenu.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_debugobj.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_rpc.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_debugger.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_debugobj_r.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_filelist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_replace.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_scrolledlist.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_grep.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_search.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_macosx.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_pyshell.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_pyparse.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_outwin.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_history.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\idlelib\\idle_test\\template.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_multicall.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_browser.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\test_runscript.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\mock_tk.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\codecontext.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\colorizer.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\calltip_w.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\autocomplete.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\browser.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\browser.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\debugger.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\calltip_w.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\colorizer.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\config_key.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\autocomplete_w.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\config_key.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Lib\\idlelib\\__pycache__\\__main__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\autocomplete.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\__init__.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\codecontext.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\debugger.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\config.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\config_key.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\autocomplete_w.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\colorizer.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\autoexpand.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\calltip_w.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\codecontext.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\__init__.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\autoexpand.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\__init__.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\__pycache__\\autocomplete.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
> [pgAdmin 4 installer log trimmed: several hundred repetitive "Unpacking C:\Program Files\PostgreSQL\12\pgAdmin 4\venv\Lib\idlelib\..." lines omitted]
Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_window.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_textview.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_zoomheight.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_undo.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_zoomheight.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_statusbar.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\__pycache__\\test_textview.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_ctypes_test.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_multiprocessing.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_testconsole.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_testimportmultiple.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_queue.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\idlelib\\idle_test\\README.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\winsound.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\select.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_testmultiphase.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Network.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5PrintSupport.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Quick.dll\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\pgAdmin4.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Multimedia.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Core.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5OpenGL.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\opengl32sw.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Widgets.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Gui.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\python37.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\libiconv-2.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Svg.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Qml.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\Qt5Positioning.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\libintl-8.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\libGLESv2.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\libpq.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\bin\\platforms\\qwindows.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\backup_globals_process_watcher.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\geometry_viewer.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\jquery-3.4.1.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_static\\geometry_viewer_property_table.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\backup_globals_process_watcher.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\docs\\en_US\\html\\_images\\geometry_viewer.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\docs\\en_US\\html\\_images\\geometry_viewer_property_table.png\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\yarn.lock\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\messages.pot\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\ru\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\de\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\ja\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\es\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\fr\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\zh\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\it\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\ko\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\translations\\pl\\LC_MESSAGES\\messages.po\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\style.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\pgadmin.css\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\app.bundle.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\vendor.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\sqleditor.js\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\web\\pgadmin\\static\\js\\generated\\fonts\\fontawesome-webfont.svg\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 
4\\venv\\Scripts\\python37.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\\_bundled\\pip-19.0.3-py2.py3-none-any.whl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\ensurepip\\_bundled\\setuptools-40.8.0-py2.py3-none-any.whl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\\topics.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\\__pycache__\\topics.cpython-37.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\\__pycache__\\topics.cpython-37.opt-2.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\pydoc_data\\__pycache__\\topics.cpython-37.opt-1.pyc\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-14.0.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\distutils\\command\\wininst-14.0-amd64.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pyparsing.py\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\cy.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\lt.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\uk.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ru.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ccp.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ar.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ml.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\gd.dat\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\sr.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\cs.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\babel\\locale-data\\ga.dat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\nacl\\_sodium.cp37-win_amd64.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\virtualenv_support\\setuptools-41.2.0-py2.py3-none-any.whl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\virtualenv_support\\pip-19.1.1-py2.py3-none-any.whl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\virtualenv_support\\pip-19.2.2-py2.py3-none-any.whl\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\pip\\_vendor\\certifi\\cacert.pem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\psycopg2\\_psycopg.cp37-win_amd64.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\certifi\\cacert.pem\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\site-packages\\cryptography\\hazmat\\bindings\\_openssl.cp37-win_amd64.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\venv\\scripts\\nt\\python.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\Lib\\venv\\scripts\\nt\\pythonw.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\tk86t.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\_decimal.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\tcl86t.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\unicodedata.pyd\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\libcrypto-1_1.dll\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\sqlite3.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\pgAdmin 4\\venv\\DLLs\\libssl-1_1.dll\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\\share\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\fr_FR\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\tr_TR\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\ru_RU\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\ja_JP\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\de_DE\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\sv_SE\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\zh_CN\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\\bin\n> Unpacking files\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\fr_FR\\wxstd.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\tr_TR\\wxstd.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\ru_RU\\wxstd.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\ja_JP\\wxstd.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\de_DE\\wxstd.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\sv_SE\\wxstd.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\zh_CN\\wxstd.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxbase28u_xml_vc_custom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxbase28u_net_vc_custom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\fr_FR\\StackBuilder.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\tr_TR\\StackBuilder.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\ru_RU\\StackBuilder.mo\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\share\\i18n\\ja_JP\\StackBuilder.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\de_DE\\StackBuilder.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\sv_SE\\StackBuilder.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\share\\i18n\\zh_CN\\StackBuilder.mo\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\StackBuilder_3rd_party_licenses.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libcurl.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxbase28u_vc_custom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\stackbuilder.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxmsw28u_core_vc_custom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxmsw28u_aui_vc_custom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxmsw28u_xrc_vc_custom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxmsw28u_html_vc_custom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\wxmsw28u_adv_vc_custom.dll\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\\bin\n> Creating directory C:\\Program Files\\PostgreSQL\\12\\lib\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\\installer\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\\installer\\server\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\\scripts\n> Directory already exists: C:\\Program Files\\PostgreSQL\\12\\scripts\\images\n> Unpacking files\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libpgtypes.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libecpg.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\pg_basebackup.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\vacuumlo.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\createdb.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\pg_dumpall.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\pgbench.exe\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libwinpthread-1.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\pg_restore.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\pg_isready.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\createuser.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\dropdb.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\vacuumdb.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\zlib1.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\reindexdb.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\dropuser.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\clusterdb.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_euc_jp.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_sjis.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_johab.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\amcheck.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\plperl.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pltcl.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libxslt.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pg_stat_statements.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\postgres_fdw.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_win.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_euc_cn.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_uhc.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pgcrypto.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pgevent.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libssl.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_euc_tw.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_gbk.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\plugin_debugger.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\dblink.dll\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\lib\\seg.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\ltree.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_euc_kr.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\cube.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\plpython3.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\_int.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_sjis2004.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\uuid-ossp.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_big5.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_iso8859.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\isn.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\regress.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_euc2004.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pageinspect.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\btree_gist.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\hstore.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\plpgsql.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxbase28u_net.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pg_trgm.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libpq.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\fuzzystrmatch.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\cyrillic_and_mic.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\auth_delay.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\dict_int.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\adminpack.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\bloom.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\btree_gin.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\dict_xsyn.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\ascii_and_mic.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\citext.dll\n> 
Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\earthdistance.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\auto_explain.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\euc2004_sjis2004.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\autoinc.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\scripts\\runpsql.bat\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\insert_username.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\hstore_plperl.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\euc_jp_and_sjis.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\hstore_plpython3.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libpqwalreceiver.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\euc_kr_and_mic.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\file_fdw.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\latin2_and_win1250.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\jsonb_plpython3.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\jsonb_plperl.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\latin_and_mic.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\euc_cn_and_mic.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libecpg_compat.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\euc_tw_and_big5.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pg_visibility.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pg_freespacemap.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\ltree_plpython3.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pgrowlocks.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\refint.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pgstattuple.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pg_prewarm.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\passwordcheck.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pgoutput.dll\n> Unpacking 
C:\\Program Files\\PostgreSQL\\12\\lib\\moddatetime.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pg_buffercache.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\lo.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\pgxml.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\test_bloomfilter.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\test_decoding.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_cyrillic.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\test_predtest.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_ascii.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\tsm_system_time.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\sslinfo.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\tsm_system_rows.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\tcn.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\unaccent.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\test_integerset.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_iso8859_1.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\test_rbtree.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\tablefunc.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\iconv.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libecpg_compat.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libpgtypes.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxbase28u_xml.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libecpg.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libintl.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\server\\getlocales.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\server\\validateuser.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\server\\createuser.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\scripts\\images\\pg-psql.ico\n> Unpacking C:\\Program 
Files\\PostgreSQL\\12\\commandlinetools_3rd_party_licenses.txt\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\server\\createshortcuts_clt.vbs\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\psql.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libssl-1_1-x64.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libcrypto-1_1-x64.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libiconv-2.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\pg_dump.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libintl-8.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\bin\\libpq.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxmsw28u_html.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libpgport.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\zlib.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libpgcommon.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxmsw28u_core.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libcrypto.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\dict_snowball.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxmsw28u_adv.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxmsw28u_aui.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libxml2.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\utf8_and_gb18030.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\postgres.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxbase28u.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\wxmsw28u_xrc.lib\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\lib\\libpq.dll\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\vcredist_x64.exe\n> Unpacking C:\\Program Files\\PostgreSQL\\12\\installer\\vcredist_x86.exe\n> Executing icacls \"C:\\temp/postgresql_installer_c24b846fc9\" /inheritance:r\n> Script exit code: 0\n> \n> Script output:\n> processed file: C:\\temp/postgresql_installer_c24b846fc9\n> 
Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Executing icacls \"C:\\temp/postgresql_installer_c24b846fc9\" /T /Q /grant \"CENSORED\\censored:(OI)(CI)F\"\n> Script exit code: 0\n> \n> Script output:\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> [10:22:50] Running the post-installation/upgrade actions:\n> [10:22:50] Write the base directory to the ini file...\n> [10:22:50] Write the version number to the ini file...\n> Initialising the database cluster (this may take a few minutes)...\n> Executing cscript //NoLogo \"C:\\Program Files\\PostgreSQL\\12/installer/server/initcluster.vbs\" \"NT AUTHORITY\\NetworkService\" \"postgres\" \"****\" \"C:\\temp/postgresql_installer_c24b846fc9\" \"C:\\Program Files\\PostgreSQL\\12\" \"C:\\Program Files\\PostgreSQL\\12\\data\" 5432 \"NorwegianBokmål,Norway\" 0\n> Script exit code: 1\n> \n> Script output:\n> WScript.Shell Initialized...\n> Scripting.FileSystemObject initialized...\n> \n> Called CreateDirectory(C:\\Program Files\\PostgreSQL\\12\\data)...\n> Called CreateDirectory(C:\\Program Files\\PostgreSQL\\12)...\n> Called ClearAcl (C:\\Program Files\\PostgreSQL\\12\\data)...\n> Executing batch file 'rad1304C.bat'...\n> Output file does not exists...\n> Removing inherited ACLs on (C:\\Program Files\\PostgreSQL\\12\\data)\n> Executing batch file 'rad1304C.bat'...\n> processed file: C:\\Program Files\\PostgreSQL\\12\\data\n> Successfully processed 1 files; Failed processing 0 files\n> \n> WScript.Network initialized...\n> strParentOfDataDirC:\\Program Files\\PostgreSQL\\12\n> logged in userCENSORED\\censored\n> Called AclCheck(C:\\Program Files\\PostgreSQL\\12\\data)\n> Called IsVistaOrNewer()...\n> 'winmgmts' object initialized...\n> Version:10.\n> MajorVersion:10\n> Executing icacls to ensure the CENSORED\\censored account can read the path C:\\Program Files\\PostgreSQL\\12\\data\n> Executing batch file 'rad1304C.bat'...\n> processed 
file: C:\\Program Files\\PostgreSQL\\12\\data\n> Successfully processed 1 files; Failed processing 0 files\n> \n> [... repeated Called IsVistaOrNewer() / icacls blocks granting access on C:\\Program Files\\PostgreSQL\\12\\data to CENSORED\\censored, NT AUTHORITY\\NetworkService, CREATOR OWNER, SYSTEM and Administrators trimmed; each reports: processed file: C:\\Program Files\\PostgreSQL\\12\\data, Successfully processed 1 files; Failed processing 0 files ...]\n> \n> Executing batch file 'rad1304C.bat'...\n> The files belonging to this database system will be owned by user 
\"censored\".\n> This user must also own the server process.\n> \n> initdb: error: invalid locale name \"NorwegianBokm†l,Norway\"\n> \n> Called Die(Failed to initialise the database cluster with initdb)...\n> Failed to initialise the database cluster with initdb\n> \n> Script stderr:\n> Program ended with an error exit code\n> \n> Error running cscript //NoLogo \"C:\\Program Files\\PostgreSQL\\12/installer/server/initcluster.vbs\" \"NT AUTHORITY\\NetworkService\" \"postgres\" \"****\" \"C:\\temp/postgresql_installer_c24b846fc9\" \"C:\\Program Files\\PostgreSQL\\12\" \"C:\\Program Files\\PostgreSQL\\12\\data\" 5432 \"NorwegianBokmål,Norway\" 0: Program ended with an error exit code\n> Problem running post-install step. Installation may not complete correctly\n> The database cluster initialisation failed.\n> Executing icacls \"C:\\temp/postgresql_installer_baa40bb6af\" /inheritance:r\n> Script exit code: 0\n> \n> Script output:\n> processed file: C:\\temp/postgresql_installer_baa40bb6af\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Executing icacls \"C:\\temp/postgresql_installer_baa40bb6af\" /T /Q /grant \"CENSORED\\censored:(OI)(CI)F\"\n> Script exit code: 0\n> \n> Script output:\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> [10:23:12] Delete the temporary scripts directory...\n> Executing icacls \"C:\\temp/postgresql_installer_ad017e50d0\" /inheritance:r\n> Script exit code: 0\n> \n> Script output:\n> processed file: C:\\temp/postgresql_installer_ad017e50d0\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Executing icacls \"C:\\temp/postgresql_installer_ad017e50d0\" /T /Q /grant \"CENSORED\\censored:(OI)(CI)F\"\n> Script exit code: 0\n> \n> Script output:\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Executing icacls \"C:\\temp/postgresql_installer_b721700c1a\" /inheritance:r\n> 
Script exit code: 0\n> \n> Script output:\n> processed file: C:\\temp/postgresql_installer_b721700c1a\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Executing icacls \"C:\\temp/postgresql_installer_b721700c1a\" /T /Q /grant \"CENSORED\\censored:(OI)(CI)F\"\n> Script exit code: 0\n> \n> Script output:\n> Successfully processed 1 files; Failed processing 0 files\n> \n> Script stderr:\n> \n> \n> Creating menu shortcuts...\n> Executing cscript //NoLogo \"C:\\Program Files\\PostgreSQL\\12\\installer\\server\\createshortcuts_clt.vbs\" \"PostgreSQL 12\" \"C:\\Program Files\\PostgreSQL\\12\"\n> Script exit code: 0\n> \n> Script output:\n> Start FixupFile(C:\\Program Files\\PostgreSQL\\12\\scripts\\runpsql.bat)...\n> Opening file for reading...\n> Closing file (reading)...\n> Replacing placeholders...\n> Opening file for writing...\n> Closing file...\n> End FixupFile()...\n> createshortcuts_clt.vbs ran to completion\n> \n> Script stderr:\n> \n> \n> [10:23:15] Write the server description to the ini file...\n> [10:23:15] Write the server branding to the ini file...\n> Creating Uninstaller\n> Creating uninstaller 25%\n> Creating uninstaller 50%\n> Creating uninstaller 75%\n> Creating uninstaller 100%\n> Installation completed\n> Log finished 10/21/2019 at 10:23:27\n\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 4 Nov 2019 21:48:19 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12 installation fails because locale name contained\n non-english characters" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, Oct 24, 2019 at 11:06:01AM +0000, Skjalg A. 
Skagen wrote:\n>> I tried to install PostgreSQL 12 with the \"Norwegian Bokmål, Norway\" locale in\n>> hope that it would, among other things, provide proper support for Norwegian\n>> characters out-of-the-box.\n>> But initcluster.vbs appear to fail during post-install because the locale name\n>> contains a Norwegian character that is being mishandled (full log in attached\n>> zip file):\n>> initdb: error: invalid locale name \"NorwegianBokm†l,Norway\"\n\n> This has been fixed with the this patch:\n> \thttps://www.postgresql.org/message-id/E1iMcHC-0007Ci-7G@gemulon.postgresql.org\n\nHm, I'm not entirely sure that it has been. The original code supposed\nthat the locale name is spelled \"Norwegian (Bokmål)_Norway\", and the\nrecent patch you mention extended that to allow \"Norwegian Bokmål_Norway\".\nBut this report, if accurate, shows yet another variant. Skjalg,\nwould you confirm that there's a comma and space before \"Norway\" in\nthe locale name as you see it? (Your log clearly shows it with a\nspace in the registry entries, but it looks like initdb might be\nseeing it as not having any spaces, which is why I'm confused.)\n\nI wonder whether we need to relax the matching code to be entirely\nagnostic about spaces and punctuation in the Windows locale name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Nov 2019 12:22:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12 installation fails because locale name contained\n non-english characters" }, { "msg_contents": "Hi Tom,\n\nIf you mean what I see in the drop-down menu during installation, I see \"Norwegian Bokmål, Norway\" -- identical to what getlocales.exe reported in my installation log file on line 222:\n\nNorwegianxxSPxxBokmålxxCOMMAxxxxSPxxNorway=Norwegian Bokmål, Norway\n\nAfter installation with the default locale, PgAdmin 4 was giving me encoding errors when it tried to select data from pg_database, so I followed all the steps in this 
wiki article: https://wiki.postgresql.org/wiki/Changes_To_Norwegian_Locale#What_do_I_need_to_do.3F\n\nHowever, I had to modify the UPDATE query and remove all the parentheses from the query given by the wiki article, like this:\n\nUPDATE pg_database\nSET datcollate = 'Norwegian_Norway' || substr(datcollate, position('.' in datcollate))\nWHERE datcollate LIKE 'Norwegian Bokm%' OR datcollate LIKE 'norwegian-bokmal%';\n\nUPDATE pg_database\nSET datctype = 'Norwegian_Norway' || substr(datctype, position('.' in datctype))\nWHERE datctype LIKE 'Norwegian Bokm%' OR datctype LIKE 'norwegian-bokmal%';\n\nLikewise, lc_messages, lc_monetary, lc_numeric, lc_time in my unmodified postgresql.conf file had no parentheses either, like the wiki article said they should have had:\n\n...\n# These settings are initialized by initdb, but they can be changed.\nlc_messages = 'Norwegian Bokmål_Norway.1252' # locale for system error message\n # strings\nlc_monetary = 'Norwegian Bokmål_Norway.1252' # locale for monetary formatting\nlc_numeric = 'Norwegian Bokmål_Norway.1252' # locale for number formatting\nlc_time = 'Norwegian Bokmål_Norway.1252' # locale for time formatting\n\nThe unmodified datcollate and datctypes in pg_database, right after installation with default locale, were as follows:\n\nQuery:\n\nSELECT encode(datcollate::bytea, 'escape') as datcollate, encode(datctype::bytea, 'escape') as datctype\nFROM pg_database;\n\nResult:\n\ndatcollate\tdatctype\nNorwegian Bokm\\345l_Norway.1252\tNorwegian Bokm\\345l_Norway.1252\nNorwegian Bokm\\345l_Norway.1252\tNorwegian Bokm\\345l_Norway.1252\nNorwegian Bokm\\345l_Norway.1252\tNorwegian Bokm\\345l_Norway.1252\nNorwegian Bokm\\345l_Norway.1252\tNorwegian Bokm\\345l_Norway.1252\n\nI hope this will be helpful.\n\nBest regards,\nSkjalg\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, November 5, 2019 6:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Bruce Momjian bruce@momjian.us writes:\n>\n> > On Thu, Oct 24, 2019 at 11:06:01AM +0000, 
Skjalg A. Skagen wrote:\n> >\n> > > I tried to install PostgreSQL 12 with the \"Norwegian Bokmål, Norway\" locale in\n> > > hope that it would, among other things, provide proper support for Norwegian\n> > > characters out-of-the-box.\n> > > But initcluster.vbs appear to fail during post-install because the locale name\n> > > contains a Norwegian character that is being mishandled (full log in attached\n> > > zip file):\n> > > initdb: error: invalid locale name \"NorwegianBokm†l,Norway\"\n>\n> > This has been fixed with the this patch:\n> > https://www.postgresql.org/message-id/E1iMcHC-0007Ci-7G@gemulon.postgresql.org\n>\n> Hm, I'm not entirely sure that it has been. The original code supposed\n> that the locale name is spelled \"Norwegian (Bokmål)_Norway\", and the\n> recent patch you mention extended that to allow \"Norwegian Bokmål_Norway\".\n> But this report, if accurate, shows yet another variant. Skjalg,\n> would you confirm that there's a comma and space before \"Norway\" in\n> the locale name as you see it? (Your log clearly shows it with a\n> space in the registry entries, but it looks like initdb might be\n> seeing it as not having any spaces, which is why I'm confused.)\n>\n> I wonder whether we need to relax the matching code to be entirely\n> agnostic about spaces and punctuation in the Windows locale name.\n>\n> regards, tom lane\n\n\n\n\n", "msg_date": "Wed, 06 Nov 2019 09:30:34 +0000", "msg_from": "\"Skjalg A. Skagen\" <skjalg.skagen@pm.me>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 12 installation fails because locale name contained\n non-english characters" }, { "msg_contents": "\"Skjalg A. Skagen\" <skjalg.skagen@pm.me> writes:\n> If you mean what I see in the drop-down menu during installation, I see \"Norwegian Bokmål, Norway\" -- identical to what getlocales.exe reported in my installation log file on line 222:\n\nOK, thanks for confirming. 
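[Editor's aside: the escaped bytes in the query output above line up with the mis-rendered locale name in the installer log. The `\345` in `Bokm\345l` is octal for byte 0xE5, which is "å" in the Windows-1252 code page named by the `.1252` suffix; and the log's "Bokm†l" is consistent with the same å having passed through an OEM console code page. A quick check — Python used here only as a scratch calculator:]

```python
# bytea 'escape' output uses \nnn octal escapes; \345 (octal) == 0xE5,
# which Windows-1252 (and Latin-1) map to "å".
raw = b"Norwegian Bokm\345l_Norway.1252"  # \345 is a valid octal escape in a bytes literal
print(raw.decode("cp1252"))  # Norwegian Bokmål_Norway.1252

# The installer log's "Bokm†l" is consistent with å having been encoded in
# an OEM console code page (CP437/CP865: å == 0x86) and then displayed as
# Windows-1252, where byte 0x86 renders as "†". This is an inference, not
# something confirmed in the thread.
assert "å".encode("cp437") == b"\x86"
print(b"\x86".decode("cp1252"))  # †
```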
So the committed patch does *not* add enough\nflexibility to cover this case.\n\nI wrote:\n>> I wonder whether we need to relax the matching code to be entirely\n>> agnostic about spaces and punctuation in the Windows locale name.\n\nAfter googling a little bit, I could not find any indication that\nMicrosoft promises anything at all about the stability of these\nlong-form locale names. They document short names similar to the\nUnix conventions, e.g. \"en-US\" and \"nb-NO\", as being the stable\nforms that applications are encouraged to use. So somewhere there\nis code that converts these long-form names to the standardized\nrepresentation, and it would be entirely reasonable for that code\nto try to be forgiving. Thus, it's no surprise that we're getting\nbit by small variations like these.\n\nI'm inclined to think that we ought to ignore anything that isn't\nan ASCII letter while trying to match these locale names. That's\na little bit problematic in terms of what win32setlocale.c does\ntoday, because it tries to replace \"just the matched string\",\nbut it'd be unclear where the match ends if there are ignorable\ncharacters. But probably we could change it so that it just takes\nthe translation and then tacks on \".NNNN\" if the input ends with\na dot and digits.\n\nMaybe case insensitivity would be a good idea too? 
The existing\ncode hasn't got that refinement, so maybe it's not important,\nbut the examples I'm seeing in places like\n\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/language-strings?view=vs-2019\n\nare all-lower-case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Nov 2019 10:29:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12 installation fails because locale name contained\n non-english characters" }, { "msg_contents": "I wrote:\n>> I wonder whether we need to relax the matching code to be entirely\n>> agnostic about spaces and punctuation in the Windows locale name.\n\n> After googling a little bit, I could not find any indication that\n> Microsoft promises anything at all about the stability of these\n> long-form locale names. They document short names similar to the\n> Unix conventions, e.g. \"en-US\" and \"nb-NO\", as being the stable\n> forms that applications are encouraged to use. So somewhere there\n> is code that converts these long-form names to the standardized\n> representation, and it would be entirely reasonable for that code\n> to try to be forgiving. Thus, it's no surprise that we're getting\n> bit by small variations like these.\n\n> I'm inclined to think that we ought to ignore anything that isn't\n> an ASCII letter while trying to match these locale names. That's\n> a little bit problematic in terms of what win32setlocale.c does\n> today, because it tries to replace \"just the matched string\",\n> but it'd be unclear where the match ends if there are ignorable\n> characters. But probably we could change it so that it just takes\n> the translation and then tacks on \".NNNN\" if the input ends with\n> a dot and digits.\n\n> Maybe case insensitivity would be a good idea too? 
The existing\n> code hasn't got that refinement, so maybe it's not important,\n> but the examples I'm seeing in places like\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/language-strings?view=vs-2019\n> are all-lower-case.\n\nHere's a draft patch for that. I've checked that the logic does\nwhat I expect, but I don't have a way to actually test this thing\nin a Windows build. Anyone?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 06 Nov 2019 16:24:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12 installation fails because locale name contained\n non-english characters" } ]
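[Editor's aside: to make Tom's proposed matching rule concrete — ignore anything that isn't an ASCII letter, compare case-insensitively, and treat a trailing ".NNNN" codepage suffix separately — here is a rough sketch of such a rule. This is an illustration only, not the actual win32setlocale.c logic; Python is used just as a quick checker:]

```python
def locale_key(name):
    # Drop a trailing ".NNNN"-style codepage suffix, then keep only ASCII
    # letters, lowercased, so spacing/punctuation/non-ASCII variants of
    # the same Windows locale name all collapse to one key.
    base = name.split(".")[0]
    return "".join(ch for ch in base.lower() if "a" <= ch <= "z")

# All the spellings observed in this thread match the same key:
variants = [
    "Norwegian (Bokmål)_Norway",     # long form the original code expected
    "Norwegian Bokmål_Norway.1252",  # form seen in postgresql.conf
    "Norwegian Bokmål, Norway",      # form shown by the installer drop-down
]
print({locale_key(v) for v in variants})  # {'norwegianbokmlnorway'}
```

Under this rule the translation would map that single key to "Norwegian_Norway" and re-attach the ".1252" suffix, as Tom describes.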
[ { "msg_contents": "Hello,\n\nI noticed that our existing 2-param json{,b}_object functions take \ntext[] for both keys and values, so they are only able to build \none-layer-deep JSON objects. I'm interested in adding json{,b}_object \nfunctions that take text[] for the keys and json{,b}[] for the values. \nIt would otherwise behave the same as json_object(text[], text[]) (e.g. \nre NULL handling). Does that seem worthwhile to anyone?\n\nI'll share my specific problem where I felt I could use this function, \nalthough you can stop reading here if that isn't interesting to you. :-) \nI was building a jsonb_dasherize(j jsonb) function, which converts \nsnake_case JSON keys into dashed-case JSON keys. (It's because of a \nJavascript framework.... :-) My function needs to walk the whole JSON \nstructure, doing this recursively when it sees objects inside arrays or \nother objects. Here is the definition, including a comment where my \nproposed jsonb_object would have helped:\n\nCREATE FUNCTION jsonb_dasherize(j jsonb)\nRETURNS jsonb\nIMMUTABLE\nAS\n$$\nDECLARE\nt text;\nkey text;\nval jsonb;\nret jsonb;\nBEGIN\n t := jsonb_typeof(j);\n IF t = 'object' THEN\n -- So close! 
If only jsonb_object took text[] and jsonb[] params....\n -- SELECT jsonb_object(\n -- array_agg(dasherize_key(k)),\n -- array_agg(jsonb_dasherize(v)))\n -- FROM jsonb_each(j) AS t(k, v);\n ret := '{}';\n FOR key, val IN SELECT * FROM jsonb_each(j) LOOP\n ret := jsonb_set(ret,\n array[REPLACE(key, '_', '-')],\n jsonb_dasherize(val), true);\n END LOOP;\n RETURN ret;\n ELSIF t = 'array' THEN\n SELECT COALESCE(jsonb_agg(jsonb_dasherize(elem)), '[]')\n INTO ret\n FROM jsonb_array_elements(j) AS t(elem);\n RETURN ret;\n ELSIF t IS NULL THEN\n -- This should never happen internally\n -- but only from a passed-in NULL.\n RETURN NULL;\n ELSE\n -- string/number/null:\n RETURN j;\n END IF;\nEND;\n$$\nLANGUAGE plpgsql;\n\nI also tried a recursive CTE there using jsonb_set, but it was too late \nat night for me to figure that one out. :-)\n\nIt seems like a json-taking json_object would be just what I needed. And \nin general I was surprised that Postgres didn't have a more convenient \nway to build multi-layer JSON. I'm happy to add this myself if other \nfolks want it.\n\nRegards,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n", "msg_date": "Thu, 24 Oct 2019 08:17:23 -0700", "msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": true, "msg_subject": "Add json_object(text[], json[])?" }, { "msg_contents": "On 24.10.2019 18:17, Paul Jungwirth wrote:\n> Hello,\n>\n> I noticed that our existing 2-param json{,b}_object functions take \n> text[] for both keys and values, so they are only able to build \n> one-layer-deep JSON objects. I'm interested in adding json{,b}_object \n> functions that take text[] for the keys and json{,b}[] for the values. \n> It would otherwise behave the same as json_object(text[], text[]) \n> (e.g. re NULL handling). Does that seem worthwhile to anyone?\n>\n> I'll share my specific problem where I felt I could use this function, \n> although you can stop reading here if that isn't interesting to you. 
\n> :-) I was building a jsonb_dasherize(j jsonb) function, which converts \n> snake_case JSON keys into dashed-case JSON keys. (It's because of a \n> Javascript framework.... :-) My function needs to walk the whole JSON \n> structure, doing this recursively when it sees objects inside arrays \n> or other objects. Here is the definition, including a comment where my \n> proposed jsonb_object would have helped:\n>\n> CREATE FUNCTION jsonb_dasherize(j jsonb)\n> RETURNS jsonb\n> IMMUTABLE\n> AS\n> $$\n> DECLARE\n> t text;\n> key text;\n> val jsonb;\n> ret jsonb;\n> BEGIN\n>   t := jsonb_typeof(j);\n>   IF t = 'object' THEN\n>     -- So close! If only jsonb_object took text[] and jsonb[] params....\n>     -- SELECT  jsonb_object(\n>     --           array_agg(dasherize_key(k)),\n>     --           array_agg(jsonb_dasherize(v)))\n>     -- FROM    jsonb_each(j) AS t(k, v);\n>     ret := '{}';\n>     FOR key, val IN SELECT * FROM jsonb_each(j) LOOP\n>       ret := jsonb_set(ret,\n>                        array[REPLACE(key, '_', '-')],\n>                        jsonb_dasherize(val), true);\n>     END LOOP;\n>     RETURN ret;\n>   ELSIF t = 'array' THEN\n>     SELECT  COALESCE(jsonb_agg(jsonb_dasherize(elem)), '[]')\n>     INTO    ret\n>     FROM    jsonb_array_elements(j) AS t(elem);\n>     RETURN ret;\n>   ELSIF t IS NULL THEN\n>     -- This should never happen internally\n>     -- but only from a passed-in NULL.\n>     RETURN NULL;\n>   ELSE\n>     -- string/number/null:\n>     RETURN j;\n>   END IF;\n> END;\n> $$\n> LANGUAGE plpgsql;\n>\n> I also tried a recursive CTE there using jsonb_set, but it was too \n> late at night for me to figure that one out. :-)\n>\n> It seems like a json-taking json_object would be just what I needed. \n> And in general I was surprised that Postgres didn't have a more \n> convenient way to build multi-layer JSON. 
I'm happy to add this myself \n> if other folks want it.\n>\n> Regards,\n>\n\nYou can simply use jsonb_object_agg() to build a jsonb object from a sequence\nof transformed key-value pairs:\n\nSELECT COALESCE(jsonb_object_agg(REPLACE(k, '_', '-'),\n jsonb_dasherize(v)), '{}')\nINTO ret\nFROM jsonb_each(j) AS t(k, v);\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\nOn 24.10.2019 18:17, Paul Jungwirth\n wrote:\n\nHello,\n \n\n I noticed that our existing 2-param json{,b}_object functions take\n text[] for both keys and values, so they are only able to build\n one-layer-deep JSON objects. I'm interested in adding\n json{,b}_object functions that take text[] for the keys and\n json{,b}[] for the values. It would otherwise behave the same as\n json_object(text[], text[]) (e.g. re NULL handling). Does that\n seem worthwhile to anyone?\n \n\n I'll share my specific problem where I felt I could use this\n function, although you can stop reading here if that isn't\n interesting to you. :-) I was building a jsonb_dasherize(j jsonb)\n function, which converts snake_case JSON keys into dashed-case\n JSON keys. (It's because of a Javascript framework.... :-) My\n function needs to walk the whole JSON structure, doing this\n recursively when it sees objects inside arrays or other objects.\n Here is the definition, including a comment where my proposed\n jsonb_object would have helped:\n \n\n CREATE FUNCTION jsonb_dasherize(j jsonb)\n \n RETURNS jsonb\n \n IMMUTABLE\n \n AS\n \n $$\n \n DECLARE\n \n t text;\n \n key text;\n \n val jsonb;\n \n ret jsonb;\n \n BEGIN\n \n   t := jsonb_typeof(j);\n \n   IF t = 'object' THEN\n \n     -- So close! 
If only jsonb_object took text[] and jsonb[]\n params....\n \n     -- SELECT  jsonb_object(\n \n     --           array_agg(dasherize_key(k)),\n \n     --           array_agg(jsonb_dasherize(v)))\n \n     -- FROM    jsonb_each(j) AS t(k, v);\n \n     ret := '{}';\n \n     FOR key, val IN SELECT * FROM jsonb_each(j) LOOP\n \n       ret := jsonb_set(ret,\n \n                        array[REPLACE(key, '_', '-')],\n \n                        jsonb_dasherize(val), true);\n \n     END LOOP;\n \n     RETURN ret;\n \n   ELSIF t = 'array' THEN\n \n     SELECT  COALESCE(jsonb_agg(jsonb_dasherize(elem)), '[]')\n \n     INTO    ret\n \n     FROM    jsonb_array_elements(j) AS t(elem);\n \n     RETURN ret;\n \n   ELSIF t IS NULL THEN\n \n     -- This should never happen internally\n \n     -- but only from a passed-in NULL.\n \n     RETURN NULL;\n \n   ELSE\n \n     -- string/number/null:\n \n     RETURN j;\n \n   END IF;\n \n END;\n \n $$\n \n LANGUAGE plpgsql;\n \n\n I also tried a recursive CTE there using jsonb_set, but it was too\n late at night for me to figure that one out. :-)\n \n\n It seems like a json-taking json_object would be just what I\n needed. And in general I was surprised that Postgres didn't have a\n more convenient way to build multi-layer JSON. I'm happy to add\n this myself if other folks want it.\n \n\n Regards,\n \n\n\n\nYou can simply use jsonb_object_agg() to build a jsonb object from a sequence\nof transformed key-value pairs:\n\nSELECT COALESCE(jsonb_object_agg(REPLACE(k, '_', '-'),\n jsonb_dasherize(v)), '{}')\nINTO ret\nFROM jsonb_each(j) AS t(k, v);\n\n -- \n Nikita Glukhov\n Postgres Professional: http://www.postgrespro.com\n The Russian Postgres Company", "msg_date": "Thu, 24 Oct 2019 18:42:26 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Add json_object(text[], json[])?" 
}, { "msg_contents": "Paul Jungwirth <pj@illuminatedcomputing.com> writes:\n> I noticed that our existing 2-param json{,b}_object functions take \n> text[] for both keys and values, so they are only able to build \n> one-layer-deep JSON objects. I'm interested in adding json{,b}_object \n> functions that take text[] for the keys and json{,b}[] for the values. \n> It would otherwise behave the same as json_object(text[], text[]) (e.g. \n> re NULL handling). Does that seem worthwhile to anyone?\n\nI think a potential problem is creation of ambiguity where there was\nnone before. I prototyped this as\n\nregression=# create function jsonb_object(text[], jsonb[]) returns jsonb\nas 'select jsonb_object($1, $2::text[])' language sql;\nCREATE FUNCTION\n\nand immediately got\n\nregression=# explain select jsonb_object('{a}', '{b}');\nERROR: function jsonb_object(unknown, unknown) is not unique\nLINE 1: explain select jsonb_object('{a}', '{b}');\n ^\nHINT: Could not choose a best candidate function. You might need to add explicit type casts.\n\nwhich is something that works fine as long as there's only one\njsonb_object(). I'm not sure whether that's a big problem in\npractice --- it seems like it will resolve successfully as long\nas at least one input isn't an unknown literal. But it could be\na problem for prepared statements, or clients using APIs that\ninvolve prepared statements under the hood:\n\nregression=# prepare foo as select jsonb_object($1,$2);\nERROR: function jsonb_object(unknown, unknown) is not unique\n\nAlso, as the prototype implementation shows, it's not like you\ncan't get this functionality today ... you just need to cast\njsonb to text. Admittedly that's annoying and wasteful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Oct 2019 11:52:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add json_object(text[], json[])?" 
}, { "msg_contents": "On Thu, Oct 24, 2019 at 8:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think a potential problem is creation of ambiguity where there was\n> none before.\n\nI agree that's not nice, and it seems like a new name might be better.\n\n> Also, as the prototype implementation shows, it's not like you\n> can't get this functionality today ... you just need to cast\n> jsonb to text. Admittedly that's annoying and wasteful.\n\nI don't think that gives the same result, does it? For example:\n\n# select jsonb_object(array['foo'], array['[{\"bar-bar\": [\"baz\"]}]'::jsonb]);\n jsonb_object\n---------------------------------------\n {\"foo\": \"[{\\\"bar-bar\\\": [\\\"baz\\\"]}]\"}\n\nYou can see the values are JSON strings, not JSON arrays/objects/etc.\n\nRegards,\nPaul\n\n\n", "msg_date": "Thu, 24 Oct 2019 09:37:30 -0700", "msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": false, "msg_subject": "Re: Add json_object(text[], json[])?" }, { "msg_contents": "On Thu, Oct 24, 2019 at 8:45 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> You can simply use jsonb_object_agg() to build a jsonb object from a sequence\n> of transformed key-value pairs:\n\n<strikes forehead> I've even used that function before. :-) I tried\nfinding it on the JSON functions page but couldn't, so I thought maybe\nI was going crazy. Of course it's on the aggregates page instead. As I\nsaid it was late at night. :-) Your version works perfectly!\n\nEven still, it may be nice to have a non-aggregate function that lets\nyou build nested JSON. But I agree jsonb_object_agg makes it less\nneedful.\n\nThanks!\nPaul\n\n\n", "msg_date": "Thu, 24 Oct 2019 09:46:12 -0700", "msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": false, "msg_subject": "Re: Add json_object(text[], json[])?" 
}, { "msg_contents": "Paul A Jungwirth <pj@illuminatedcomputing.com> writes:\n> On Thu, Oct 24, 2019 at 8:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also, as the prototype implementation shows, it's not like you\n>> can't get this functionality today ... you just need to cast\n>> jsonb to text. Admittedly that's annoying and wasteful.\n\n> I don't think that gives the same result, does it?\n\nAh, you're right --- ENOCAFFEINE :-(.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Oct 2019 13:06:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add json_object(text[], json[])?" }, { "msg_contents": "\nOn 10/24/19 12:46 PM, Paul A Jungwirth wrote:\n>\n> Even still, it may be nice to have a non-aggregate function that lets\n> you build nested JSON. But I agree jsonb_object_agg makes it less\n> needful.\n>\n\n\njson{b}_build_object and json{b}_build_array are designed for creating\nnested json{b}. Not sure if they would work for your purpose. I hadn't\nconsidered something to let you transform keys.\n\n\nPLV8 is useful for doing more outlandish JSON transformations. Maybe the\nUnderscore library has something that would be useful here.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 25 Oct 2019 09:40:18 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add json_object(text[], json[])?" }, { "msg_contents": "On 10/25/19 6:40 AM, Andrew Dunstan wrote:\n> json{b}_build_object and json{b}_build_array are designed for creating\n> nested json{b}. Not sure if they would work for your purpose.\n\nThanks for the suggestion! I looked at these a bit, but they only work \nif you have a known-ahead-of-time number of arguments. 
(I did explore \nbuilding an array and calling jsonb_build_object using VARIADIC, but you \ncan't build an array with alternating text & jsonb elements. That made \nme curious how these functions even worked, which led me to \nextract_variadic_args (utils/fmgr/funcapi.c), which has some magic to \nsupport heterogeneous types when not called with the VARIADIC keyword, \nso it seems they bypass the normal variadic handling.)\n\nRegards,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n", "msg_date": "Fri, 25 Oct 2019 11:32:05 -0700", "msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": true, "msg_subject": "Re: Add json_object(text[], json[])?" } ]
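[Editor's aside: for reference, the recursive walk that the thread's jsonb_dasherize function performs in plpgsql — rewrite object keys, recurse into objects and arrays, pass scalars through — is easy to state in a general-purpose language. A minimal Python sketch of the same transformation, for illustration only:]

```python
def dasherize(value):
    if isinstance(value, dict):
        # Objects: rewrite each key and recurse into each value.
        return {k.replace("_", "-"): dasherize(v) for k, v in value.items()}
    if isinstance(value, list):
        # Arrays: recurse into each element.
        return [dasherize(v) for v in value]
    # Strings, numbers, booleans, and null pass through unchanged.
    return value

doc = {"snake_case": [{"nested_key": 1}], "plain_value": "left_alone"}
print(dasherize(doc))  # {'snake-case': [{'nested-key': 1}], 'plain-value': 'left_alone'}
```

Note that only keys are rewritten; string *values* such as "left_alone" are untouched, matching the plpgsql version's behavior.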
[ { "msg_contents": "We have primary and hot standby databases running Postgres 11.3 inside Docker, with their data directories bind-mounted to a reflink-enabled XFS filesystem. The VM is running Debian's 4.19.16-1~bpo9+1 kernel inside an AWS EC2 instance.\n\nI've seen TOAST corruption in one of the standby databases a few months ago in a ~44GB table, so I wiped the database and rebuilt it using pg_basebackup, which eliminated the corruption. This week I've seen corruption pop up again in the same table in one of the standby databases. The other standby database experienced no corruption.\n\nThe corrupted table has four columns of types integer, text, text, and jsonb. The corruption happens inside the jsonb column.\n\nThe corruption manifests itself as follows in the standby database:\n\nSELECT length(json::text) FROM <table> WHERE identity = '...';\nERROR: missing chunk number 0 for toast value 64265646 in pg_toast_16103925\n\nSELECT ctid, chunk_id, chunk_seq, md5(chunk_data) FROM pg_toast.pg_toast_16103925 WHERE chunk_id = 64265646;\n ctid | chunk_id | chunk_seq | md5\n------+----------+-----------+-----\n(0 rows)\n\nSELECT count(1) FROM pg_toast.pg_toast_16103925 WHERE chunk_id = 64265646;\n count\n-------\n 2\n(1 row)\n\nLooking up the TOAST block that is supposed to contain this value you can see that the TOAST tuples are missing:\n\nSELECT ctid, chunk_id, chunk_seq, md5(chunk_data) FROM pg_toast.pg_toast_16103925 WHERE ctid IN ('(1793121,1)', '(1793121,2)', '(1793121,3)', '(1793121,4)', '(1793121,5)', '(1793121,6)', '(1793121,7)');\n ctid | chunk_id | chunk_seq | md5\n-------------+----------+-----------+----------------------------------\n (1793121,3) | 41259162 | 0 | 1bff36f306bac135cce9da44dd6d6bbb\n (1793121,4) | 41259162 | 1 | b754d688c5c847c7bc519e65741ffef1\n (1793121,5) | 41259163 | 0 | 10dfa4f5b3e32188f0b4b28c9be76abe\n (1793121,6) | 41259163 | 1 | 7dceb98b2c2f4ac3c72245c58c85323f\n(4 rows)\n\nFor comparison here are the same queries against the 
primary database:\n\nSELECT length(json::text) FROM <table> WHERE identity = '...';\n length\n--------\n 7817\n(1 row)\n\nSELECT ctid, chunk_id, chunk_seq, md5(chunk_data) FROM pg_toast.pg_toast_16103925 WHERE chunk_id = 64265646;\n ctid | chunk_id | chunk_seq | md5\n-------------+----------+-----------+----------------------------------\n (1793121,1) | 64265646 | 0 | a9a2642e8408fc178fb809b86c430997\n (1793121,2) | 64265646 | 1 | 371bc2628ac5bfc8b37d32f93d08fefe\n(2 rows)\n\nSELECT count(1) FROM pg_toast.pg_toast_16103925 WHERE chunk_id = 64265646;\n count\n-------\n 2\n(1 row)\n\nSELECT ctid, chunk_id, chunk_seq, md5(chunk_data) FROM pg_toast.pg_toast_16103925 WHERE ctid IN ('(1793121,1)', '(1793121,2)', '(1793121,3)', '(1793121,4)', '(1793121,5)', '(1793121,6)', '(1793121,7)');\n ctid | chunk_id | chunk_seq | md5\n-------------+----------+-----------+----------------------------------\n (1793121,1) | 64265646 | 0 | a9a2642e8408fc178fb809b86c430997\n (1793121,2) | 64265646 | 1 | 371bc2628ac5bfc8b37d32f93d08fefe\n (1793121,3) | 41259162 | 0 | 1bff36f306bac135cce9da44dd6d6bbb\n (1793121,4) | 41259162 | 1 | b754d688c5c847c7bc519e65741ffef1\n (1793121,5) | 41259163 | 0 | 10dfa4f5b3e32188f0b4b28c9be76abe\n (1793121,6) | 41259163 | 1 | 7dceb98b2c2f4ac3c72245c58c85323f\n(6 rows)\n\nLooking at the data file for the TOAST relation, the header data structures in the relevant block seem fine to me, which makes me think this is not caused by filesystem corruption (unless a write silently failed). The second half of that block is identical between the primary and corrupted standby, but in the first half the corrupted standby is missing data.\n\nStandby (corrupted):\n\n# dd if=data/base/18034/16103928.13 bs=8192 skip=89185 count=1 status=none | hexdump -C | head -8\n00000000 a3 0e 00 00 48 46 88 0e 00 00 05 00 30 00 58 0f |....HF......0.X.|\n00000010 00 20 04 20 00 00 00 00 00 00 00 00 00 00 00 00 |. . 
............|\n00000020 10 98 e0 0f 98 97 e8 00 a8 8f e0 0f 58 8f 96 00 |............X...|\n00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|\n*\n00000f50 00 00 00 00 00 00 00 00 32 b0 0a 01 00 00 00 00 |........2.......|\n00000f60 00 00 00 00 1b 00 61 5c 06 00 03 00 02 09 18 00 |......a\\........|\n00000f70 9b 90 75 02 01 00 00 00 ac 00 00 00 83 9f 64 00 |..u...........d.|\n\nPrimary:\n\n# dd if=data/base/18034/16103928.13 bs=8192 skip=89185 count=1 status=none | hexdump -C | head -8\n00000000 bd 0e 00 00 08 ad 32 b7 00 00 05 00 30 00 90 04 |......2.....0...|\n00000010 00 20 04 20 00 00 00 00 68 87 e0 0f 90 84 a8 05 |. . ....h.......|\n00000020 10 98 e0 0f 98 97 e8 00 a8 8f e0 0f 58 8f 96 00 |............X...|\n00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|\n*\n00000490 a6 07 7e 02 00 00 00 00 00 00 00 00 1b 00 61 5c |..~...........a\\|\n000004a0 02 00 03 00 02 09 18 00 ae 9d d4 03 01 00 00 00 |................|\n000004b0 d0 0a 00 00 23 25 10 07 88 02 13 0f 2c 04 78 01 |....#%......,.x.|\n\nBased on the above observations it seems to me that occasionally some of the changes aren't replicating to or persisting by the standby database. In the past I've seen some TCP packets get mangled or dropped between our EC2 instances, leading to sudden disconnects. The standby connects to the primary using SSL (sslmode=require sslcompression=1) so I would think if there's any network-level corruption SSL would catch it, causing the connection to fail and reconnect. 
Outside of any SSL disconnects (which don't happen often), this database is stopped and restarted twice a week so we can clone it (using cp -a --reflink=always).\n\nAny ideas on what might be causing this?\n\nThanks,\n\nAlex\n\n", "msg_date": "Thu, 24 Oct 2019 20:20:00 +0000", "msg_from": "Alex Adriaanse <alex@oseberg.io>", "msg_from_op": true, "msg_subject": "TOAST corruption in standby database" }, { "msg_contents": "On Fri, Oct 25, 2019 at 1:50 AM Alex Adriaanse <alex@oseberg.io> wrote:\n>\n> Standby (corrupted):\n>\n> # dd if=data/base/18034/16103928.13 bs=8192 skip=89185 count=1 status=none | hexdump -C | head -8\n> 00000000 a3 0e 00 00 48 46 88 0e 00 00 05 00 30 00 58 0f |....HF......0.X.|\n> 00000010 00 20 04 20 00 00 00 00 00 00 00 00 00 00 00 00 |. . ............|\n> 00000020 10 98 e0 0f 98 97 e8 00 a8 8f e0 0f 58 8f 96 00 |............X...|\n> 00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|\n> *\n> 00000f50 00 00 00 00 00 00 00 00 32 b0 0a 01 00 00 00 00 |........2.......|\n> 00000f60 00 00 00 00 1b 00 61 5c 06 00 03 00 02 09 18 00 |......a\\........|\n> 00000f70 9b 90 75 02 01 00 00 00 ac 00 00 00 83 9f 64 00 |..u...........d.|\n>\n> Primary:\n>\n> # dd if=data/base/18034/16103928.13 bs=8192 skip=89185 count=1 status=none | hexdump -C | head -8\n> 00000000 bd 0e 00 00 08 ad 32 b7 00 00 05 00 30 00 90 04 |......2.....0...|\n> 00000010 00 20 04 20 00 00 00 00 68 87 e0 0f 90 84 a8 05 |. . 
....h.......|\n> 00000020 10 98 e0 0f 98 97 e8 00 a8 8f e0 0f 58 8f 96 00 |............X...|\n> 00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|\n> *\n> 00000490 a6 07 7e 02 00 00 00 00 00 00 00 00 1b 00 61 5c |..~...........a\\|\n> 000004a0 02 00 03 00 02 09 18 00 ae 9d d4 03 01 00 00 00 |................|\n> 000004b0 d0 0a 00 00 23 25 10 07 88 02 13 0f 2c 04 78 01 |....#%......,.x.|\n>\n> Based on the above observations it seems to me that occasionally some of the changes aren't replicating to or persisting by the standby database.\n>\n\nI am not sure what is the best way to detect this, but one idea could\nbe to enable wal_consistency_checking [1]. This will at the very\nleast detect whether a block was replicated correctly the very\nfirst time. Also, if there is some corruption issue on the standby, you\nmight be able to detect it. But the point to note is that enabling this\noption has overhead, so you need to be careful.\n\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-developer.html\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 26 Oct 2019 12:42:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TOAST corruption in standby database" 
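[Editor's aside: the header fields being compared in the hexdumps above — pd_lower, pd_upper, and the line-pointer array — follow the page layout described in PostgreSQL's bufpage.h. A rough Python sketch of decoding them from an 8 kB page image, assuming little-endian storage (for ad-hoc diagnostics only; the pageinspect extension does this properly):]

```python
import struct

PAGE_HEADER_FMT = "<QHHHHHHI"   # pd_lsn, pd_checksum, pd_flags, pd_lower,
                                # pd_upper, pd_special, pd_pagesize_version, pd_prune_xid
HEADER_SIZE = struct.calcsize(PAGE_HEADER_FMT)  # 24 bytes

def parse_page(page):
    _, _, _, pd_lower, pd_upper, _, _, _ = struct.unpack_from(PAGE_HEADER_FMT, page, 0)
    items = []
    # Each ItemIdData is a 4-byte word: lp_off (15 bits), lp_flags (2 bits), lp_len (15 bits).
    for off in range(HEADER_SIZE, pd_lower, 4):
        (word,) = struct.unpack_from("<I", page, off)
        items.append({"lp_off": word & 0x7FFF,
                      "lp_flags": (word >> 15) & 0x3,
                      "lp_len": (word >> 17) & 0x7FFF})
    return pd_lower, pd_upper, items

# Synthetic one-item page, just to show the decoding:
page = bytearray(8192)
struct.pack_into(PAGE_HEADER_FMT, page, 0, 0, 0, 0, 28, 8160, 8192, 8196, 0)
struct.pack_into("<I", page, 24, 8160 | (1 << 15) | (32 << 17))  # LP_NORMAL, len 32
print(parse_page(page))  # (28, 8160, [{'lp_off': 8160, 'lp_flags': 1, 'lp_len': 32}])
```

Decoded this way, the standby dump above appears to keep pd_lower = 0x0030 (room for six line pointers) while its first two 4-byte ItemIdData words are zeroed, which would line up with the two missing ctids for chunk 64265646 — though that reading should be double-checked against the raw file.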
If we have the\ncorrupted toast data, I think we can use a tool to inspect the index and heap\ntuple data.\n\ntypedef struct IndexTupleData\n{\n\tItemPointerData t_tid;\t\t/* reference TID to heap tuple */\n\n\t/* ---------------\n\t * t_info is laid out in the following fashion:\n\t *\n\t * 15th (high) bit: has nulls\n\t * 14th bit: has var-width attributes\n\t * 13th bit: AM-defined meaning\n\t * 12-0 bit: size of tuple\n\t * ---------------\n\t */\n\n\tunsigned short t_info;\t\t/* various info about tuple */\n\n} IndexTupleData;\t\t\t\t/* MORE DATA FOLLOWS AT END OF STRUCT */\n\nIn my environment, I encounter many cases like \"missing chunk number 0 for toast\nvalue XXX in pg_toast_2619\" after shutting down the service forcibly without\nstopping the database. Relation 2619 is pg_statistic, which is updated\nfrequently. \n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 28 Oct 2019 08:57:04 -0700 (MST)", "msg_from": "\"postgresql_2016@163.com\" <postgresql_2016@163.com>", "msg_from_op": false, "msg_subject": "Re: TOAST corruption in standby database" } ]
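The diagnosis in the last message of this thread rests on comparing a TOAST index probe (zero rows) against a sequential scan of the same table (two rows). That cross-check can be generalized to sweep a whole TOAST table. The sketch below is illustrative only: it reuses the TOAST relation name from the thread (substitute your own), and the `enable_*` settings are planner cost hints rather than hard switches, so the chosen plans should be confirmed with EXPLAIN before trusting a clean result.

```sql
-- Hypothetical sweep for index/heap mismatches in one TOAST table,
-- modeled on the seqscan-vs-indexscan comparison shown above.
BEGIN;
SET LOCAL enable_indexscan = off;
SET LOCAL enable_bitmapscan = off;
-- Collect every chunk_id the heap exposes via a sequential scan.
CREATE TEMP TABLE toast_seq AS
    SELECT chunk_id, count(*) AS nchunks
    FROM pg_toast.pg_toast_16103925
    GROUP BY chunk_id;
SET LOCAL enable_seqscan = off;
SET LOCAL enable_indexscan = on;
SET LOCAL enable_bitmapscan = on;
-- Any chunk_id reported here is reachable by seqscan but not through the
-- TOAST index (chunk_id, chunk_seq): the mismatch discussed in this thread.
SELECT s.chunk_id, s.nchunks
FROM toast_seq s
WHERE NOT EXISTS (
    SELECT 1 FROM pg_toast.pg_toast_16103925 t
    WHERE t.chunk_id = s.chunk_id AND t.chunk_seq = 0);
ROLLBACK;
```

Once a suspect chunk_id is found, the index and heap pages themselves can be examined with pageinspect or with a raw hex dump of the relation files, as done earlier in the thread.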
[ { "msg_contents": "Currently I see the vacuum behavior for a table is that, even if a long\nrunning query on a different table is executing in another read committed\ntransaction.\nThat vacuum in the 1st transaction skips the dead rows until the long\nrunning query finishes.\nWhy that is the case, On same table long running query blocking vacuum we\ncan understand but why query on a different table block it.\n\nCurrently I see the vacuum behavior for a table is that, even if a long running query on a different table is executing in another read committed transaction. That vacuum in the 1st transaction skips the dead rows until the long running query finishes. Why that is the case, On same table long running query blocking vacuum we can understand but why query on a different table block it.", "msg_date": "Fri, 25 Oct 2019 11:33:11 +0530", "msg_from": "Virender Singla <virender.cse@gmail.com>", "msg_from_op": true, "msg_subject": "vacuum on table1 skips rows because of a query on table2" }, { "msg_contents": "Virender Singla <virender.cse@gmail.com> writes:\n> Currently I see the vacuum behavior for a table is that, even if a long\n> running query on a different table is executing in another read committed\n> transaction.\n> That vacuum in the 1st transaction skips the dead rows until the long\n> running query finishes.\n> Why that is the case, On same table long running query blocking vacuum we\n> can understand but why query on a different table block it.\n\nProbably because vacuum's is-this-row-dead-to-everyone tests are based\non the global xmin minimum. This must be so, because even if the\nlong-running transaction hasn't touched the table being vacuumed,\nwe don't know that it won't do so in future. 
So we can't remove\nrows that it should be able to see if it were to look.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Oct 2019 12:46:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: vacuum on table1 skips rows because of a query on table2" }, { "msg_contents": "If long-running transaction is \"read committed\", then we are sure that any\nnew query coming\n(even on same table1 as vacuum table) will need snapshot on point of time\nquery start and not the time transaction\nstarts (but still why read committed transaction on table2 cause vacuum on\ntable1 to skip rows).\nHence if a vacuum on table1 sees that all the transactions in the database\nare \"read committed\" and no one\naccessing table1, vacuum should be able to clear dead rows.\nFor read committed transactions, different table should not interfere with\neach other.\n\nOn Fri, Oct 25, 2019 at 10:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Virender Singla <virender.cse@gmail.com> writes:\n> > Currently I see the vacuum behavior for a table is that, even if a long\n> > running query on a different table is executing in another read committed\n> > transaction.\n> > That vacuum in the 1st transaction skips the dead rows until the long\n> > running query finishes.\n> > Why that is the case, On same table long running query blocking vacuum we\n> > can understand but why query on a different table block it.\n>\n> Probably because vacuum's is-this-row-dead-to-everyone tests are based\n> on the global xmin minimum. This must be so, because even if the\n> long-running transaction hasn't touched the table being vacuumed,\n> we don't know that it won't do so in future. 
So we can't remove\n> rows that it should be able to see if it were to look.\n>\n> regards, tom lane\n>\n\nIf long-running transaction is \"read committed\", then we are sure that any new query coming (even on same  table1 as vacuum table)  will need snapshot on point of time query start and not the time transaction starts (but still why read committed transaction on table2 cause vacuum on table1 to skip rows). Hence if a vacuum on table1 sees that all the transactions in the database are \"read committed\" and no one accessing table1, vacuum should be able to clear dead rows. For read committed transactions, different table should not interfere with each other.On Fri, Oct 25, 2019 at 10:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Virender Singla <virender.cse@gmail.com> writes:\n> Currently I see the vacuum behavior for a table is that, even if a long\n> running query on a different table is executing in another read committed\n> transaction.\n> That vacuum in the 1st transaction skips the dead rows until the long\n> running query finishes.\n> Why that is the case, On same table long running query blocking vacuum we\n> can understand but why query on a different table block it.\n\nProbably because vacuum's is-this-row-dead-to-everyone tests are based\non the global xmin minimum.  This must be so, because even if the\nlong-running transaction hasn't touched the table being vacuumed,\nwe don't know that it won't do so in future.  
So we can't remove\nrows that it should be able to see if it were to look.\n\n                        regards, tom lane", "msg_date": "Sat, 26 Oct 2019 09:40:02 +0530", "msg_from": "Virender Singla <virender.cse@gmail.com>", "msg_from_op": true, "msg_subject": "Re: vacuum on table1 skips rows because of a query on table2" }, { "msg_contents": "On Sat, Oct 26, 2019 at 1:44 PM Virender Singla <virender.cse@gmail.com> wrote:\n> If long-running transaction is \"read committed\", then we are sure that any new query coming\n> (even on same table1 as vacuum table) will need snapshot on point of time query start and not the time transaction\n> starts (but still why read committed transaction on table2 cause vacuum on table1 to skip rows).\n\nI wish that this argument were completely correct, but it isn't,\nbecause the current query could involve a function written in some\nprocedural language (or in C) which could do anything, including\naccessing tables that the query hasn't previously touched. It could be\nthat the function will only be called towards the end of the current\nquery's execution, or it could be that it's going to be called\nmultiple times and does different things each time.\n\nNow, this is pretty unlikely and most queries don't behave anything\nlike that. They do things like \"+\" or \"coalesce\" which don't open new\ntables. 
There are contrary examples, though, even among functions\nbuilt into core, like \"table_to_xmlschema\", which takes a relation OID\nas an argument and thus may open a new relation each time it's called.\nIf we had some way of analyzing a query and determining whether it\nuses any functions or operators that open new tables, then this kind\nof optimization might be possible, but we don't.\n\nHowever, even if we did have such infrastructure, it wouldn't solve\nall of our problems, because vacuum would have to know which sessions\nwere running queries that might open new tables and which were running\nqueries that won't open new tables -- and among the latter, it would\nneed to know which tables those sessions already have open. We could\nmake the former available via a new shared memory flag and the latter\ncould, perhaps, be deduced from the lock table, which is already\nshared. However, if we did all that, VACUUM would potentially have to\ndo significantly more work to deduce the xmin horizon for each table\nthat it wanted to process.\n\nEven given all that, I'm moderately confident that something like this\nwould benefit a lot of people. However, it would probably hurt some\npeople too, either because the overhead of figuring out that the\ncurrent query won't lock any more relations, so that we can advertise\nthat fact in shared memory, or because of the increased overhead of\nfiguring out the xmin horizon for a table to be vacuumed. Users with\nshort-running queries and small tables would be the most likely to be\nharmed. On the other hand, for users with giant tables, even more\naggressive approaches might pay off - e.g. 
recompute the xmin horizon\nevery 1GB or so, because it might have advanced, and the effort to\nrecheck that might pay off by allowing us to vacuum more stuff sooner.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 13:00:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: vacuum on table1 skips rows because of a query on table2" }, { "msg_contents": "On Mon, 2019-10-28 at 13:00 -0400, Robert Haas wrote:\n> On Sat, Oct 26, 2019 at 1:44 PM Virender Singla <virender.cse@gmail.com> wrote:\n> > If long-running transaction is \"read committed\", then we are sure that any new query coming\n> > (even on same table1 as vacuum table) will need snapshot on point of time query start and not the time transaction\n> > starts (but still why read committed transaction on table2 cause vacuum on table1 to skip rows).\n> \n> I wish that this argument were completely correct, but it isn't,\n> because the current query could involve a function written in some\n> procedural language (or in C) which could do anything, including\n> accessing tables that the query hasn't previously touched. 
It could be\n> that the function will only be called towards the end of the current\n> query's execution, or it could be that it's going to be called\n> multiple times and does different things each time.\n\nEven if you call a function that uses a new table in a READ COMMITTED\ntransaction, that function would use the snapshot of the statement that\ncalled the function and *not* the transaction snapshot, so the function\ncould see no tuples older than the statement's snapshot.\n\nSo VACUUM could remove tuples that were visible when the transaction\nstarted, but are not visible in the current statement's snapshot.\n\nOf course a C function could completely ignore MVCC and access any\nold tuple, but do we want to cater for that?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 28 Oct 2019 22:41:57 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: vacuum on table1 skips rows because of a query on table2" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Oct 26, 2019 at 1:44 PM Virender Singla <virender.cse@gmail.com> wrote:\n>> If long-running transaction is \"read committed\", then we are sure that any new query coming\n>> (even on same table1 as vacuum table) will need snapshot on point of time query start and not the time transaction\n>> starts (but still why read committed transaction on table2 cause vacuum on table1 to skip rows).\n\n> I wish that this argument were completely correct, but it isn't,\n> [ for lots of reasons ]\n\nOn top of the problems Robert enumerated, there's another fairly serious\none, which is that \"global xmin\" is not just the minimal XID that's\nrunning. Rather, it's the minimum XID that was running when any active\nsnapshot was taken. 
Thus, even if you could prove that some long-running\ntransaction isn't going to touch the table you wish to vacuum, that\nfact in itself won't move your estimate of the relevant xmin very much:\nthat transaction's own XID is holding back the xmins of every other\ntransaction --- and not only the ones open now, but ones that will\nstart in future, which you certainly can't predict anything about.\n\nThus, to decide whether tuples newer than the long-running transaction's\nXID are safe to remove, you'd have to figure out what the other\ntransactions' snapshots would look like if that transaction weren't there\n... and you don't have that information. The model we use of exposing\nonly \"xmin\", and not any more-detailed info about the contents of other\ntransactions' snapshots, really isn't adequate to allow this sort of\nanalysis. You could imagine exposing more info, but that carries more\ncosts --- costs that would be paid whether or not VACUUM ever gets any\nbenefit from it.\n\n> Even given all that, I'm moderately confident that something like this\n> would benefit a lot of people. However, it would probably hurt some\n> people too, either because the overhead of figuring out that the\n> current query won't lock any more relations, so that we can advertise\n> that fact in shared memory, or because of the increased overhead of\n> figuring out the xmin horizon for a table to be vacuumed.\n\nYeah, the whole thing is a delicate tradeoff between the cost of\ntracking/advertising transaction state and the value of being able\nto remove tuples sooner. Maybe we can move that tradeoff, but it'd\nrequire a whole lot of pretty fundamental rework.\n\n> On the other hand, for users with giant tables, even more\n> aggressive approaches might pay off - e.g. recompute the xmin horizon\n> every 1GB or so, because it might have advanced, and the effort to\n> recheck that might pay off by allowing us to vacuum more stuff sooner.\n\nHmm, that's an interesting idea. 
It wouldn't take a lot of work\nto try it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 17:54:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: vacuum on table1 skips rows because of a query on table2" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Mon, 2019-10-28 at 13:00 -0400, Robert Haas wrote:\n>> I wish that this argument were completely correct, but it isn't,\n>> because the current query could involve a function written in some\n>> procedural language (or in C) which could do anything, including\n>> accessing tables that the query hasn't previously touched. It could be\n>> that the function will only be called towards the end of the current\n>> query's execution, or it could be that it's going to be called\n>> multiple times and does different things each time.\n\n> Even if you call a function that uses a new table in a READ COMMITTED\n> transaction, that function would use the snapshot of the statement that\n> called the function and *not* the transaction snapshot, so the function\n> could see no tuples older than the statement's snapshot.\n\n> So VACUUM could remove tuples that were visible when the transaction\n> started, but are not visible in the current statement's snapshot.\n\nI don't think that's particularly relevant here. Our sessions already\nadvertise the xmin from their oldest live snapshot, which would be\nthe statement snapshot in this case. 
What the OP is wishing for is\nanalysis that's finer-grained than \"global xmin\" allows for ---\nbut per Robert's comments and my own nearby comments, you would need\na *whole* lot more information to do noticeably better.\n\n> Of course a C function could completely ignore MVCC and access any\n> old tuple, but do we want to cater for that?\n\nThat's already not guaranteed to work, since a tuple older than the\nxmin your session is advertising could disappear at any moment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 18:00:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: vacuum on table1 skips rows because of a query on table2" } ]
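As a practical footnote to the horizon discussion above: the xmin each backend advertises is exposed in pg_stat_activity, so it is easy to see which sessions are pinning the global xmin that VACUUM computes. A rough illustrative query follows; note that replication slots and prepared transactions also hold the horizon back and are not shown here.

```sql
-- Sessions holding back the global xmin, oldest first.  backend_xmin is
-- the advertised horizon discussed above; age() measures how far behind
-- the current transaction counter it is.
SELECT pid, state,
       backend_xmin,
       age(backend_xmin) AS xmin_age,
       now() - xact_start AS xact_duration,
       left(query, 60) AS current_query
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC
LIMIT 5;
```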
[ { "msg_contents": "Hi all,\n\nWhile digging into a separate issue, I have found a new bug with\nREINDEX CONCURRENTLY. Once the new index is built and validated,\na couple of things are done at the swap phase, like switching\nconstraints, comments, and dependencies. The current code moves all\nthe dependency entries of pg_depend from the old index to the new\nindex, but it never counted on the fact that the new index may have\nsome entries already. So, once the swapping is done, pg_depend\nfinishes with duplicated entries: the ones coming from the old index\nand the ones of the index freshly-created. For example, take an index\nwhich uses an attribute or an expression and has dependencies with the\nparent's columns.\n\nAttached is a patch to fix the issue. As we know that the old index\nwill have a definition and dependencies that match with the old one, I\nthink that we should just remove any dependency records on the new\nindex before moving the new set of dependencies from the old to the\nnew index. The patch includes regression tests that scan pg_depend to\ncheck that everything remains consistent after REINDEX CONCURRENTLY.\n\nAny thoughts?\n--\nMichael", "msg_date": "Fri, 25 Oct 2019 15:43:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Duplicate entries in pg_depend after REINDEX CONCURRENTLY" }, { "msg_contents": "On Fri, Oct 25, 2019 at 03:43:18PM +0900, Michael Paquier wrote:\n> Attached is a patch to fix the issue. As we know that the old index\n> will have a definition and dependencies that match with the old one, I\n> think that we should just remove any dependency records on the new\n> index before moving the new set of dependencies from the old to the\n> new index. The patch includes regression tests that scan pg_depend to\n> check that everything remains consistent after REINDEX CONCURRENTLY.\n> \n> Any thoughts?\n\nI have done more tests for this one through the day, and committed the\npatch. 
There is still one bug pending related to partitioned indexes\nwhere REINDEX CONCURRENTLY is cancelled after phase 4 (swap) has\ncommitted. I am still looking more into that.\n--\nMichael", "msg_date": "Mon, 28 Oct 2019 15:01:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Duplicate entries in pg_depend after REINDEX CONCURRENTLY" }, { "msg_contents": "On Mon, Oct 28, 2019 at 03:01:31PM +0900, Michael Paquier wrote:\n> On Fri, Oct 25, 2019 at 03:43:18PM +0900, Michael Paquier wrote:\n> > Attached is a patch to fix the issue. As we know that the old index\n> > will have a definition and dependencies that match with the old one, I\n> > think that we should just remove any dependency records on the new\n> > index before moving the new set of dependencies from the old to the\n> > new index. The patch includes regression tests that scan pg_depend to\n> > check that everything remains consistent after REINDEX CONCURRENTLY.\n> > \n> > Any thoughts?\n> \n> I have done more tests for this one through the day, and committed the\n> patch. There is still one bug pending related to partitioned indexes\n> where REINDEX CONCURRENTLY is cancelled after phase 4 (swap) has\n> committed. I am still looking more into that.\n\nAre there any bad effects of this bug on PG 12?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 5 Nov 2019 18:26:56 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Duplicate entries in pg_depend after REINDEX CONCURRENTLY" }, { "msg_contents": "On Tue, Nov 05, 2019 at 06:26:56PM -0500, Bruce Momjian wrote:\n> Are there any bad effects of this bug on PG 12?\n\nNot that I could guess, except a bloat of pg_depend... 
The more you\nissue REINDEX CONCURRENTLY on an index, the more duplicated entries\naccumulate in pg_depend as the dependencies of the old index are\npassed to the new one, say:\n=# create table aa (a int);\nCREATE TABLE\n=# create index aai on aa(a);\nCREATE INDEX\n=# select count(pg_describe_object(classid, objid, objsubid))\n from pg_depend\n where classid = 'pg_class'::regclass AND\n objid in ('aai'::regclass);\n count\n-------\n 1\n(1 row)\n=# reindex index concurrently aai;\nREINDEX\n=# reindex index concurrently aai;\nREINDEX\n=# select count(pg_describe_object(classid, objid, objsubid))\n from pg_depend\n where classid = 'pg_class'::regclass AND\n objid in ('aai'::regclass);\n count\n-------\n 3\n(1 row)\n\nAfter that, if for example one drops a column the rebuilt index\ndepends on or just drops the index, then all the duplicated entries\nget removed as well with the index. Note that we have also cases\nwhere it is legit to have multiple entries in pg_depend. For example\ntake the case of one index which lists two times the same column.\n--\nMichael", "msg_date": "Wed, 6 Nov 2019 13:06:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Duplicate entries in pg_depend after REINDEX CONCURRENTLY" } ]
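The bloat described in the last message can be confirmed by grouping pg_depend on its full key, reusing the aa/aai example names from that session. This is only a sketch; as noted above, some duplicates are legitimate (for example an index that lists the same column twice), so any matches still need manual inspection.

```sql
-- Count exact-duplicate dependency rows for the example index above.
-- Before the fix, each REINDEX CONCURRENTLY run added one more copy.
SELECT objid::regclass AS index,
       refclassid::regclass AS refclass,
       refobjid, refobjsubid, deptype,
       count(*) AS copies
FROM pg_depend
WHERE classid = 'pg_class'::regclass
  AND objid = 'aai'::regclass
GROUP BY classid, objid, objsubid,
         refclassid, refobjid, refobjsubid, deptype
HAVING count(*) > 1;
```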
[ { "msg_contents": "Hi,\n\nI recently discovered two possible bugs about synchronous replication.\n\n1. SyncRepCleanupAtProcExit may delete an element that has been deleted\nSyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, \nacquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender, \nit will be deleted repeatedly, SHMQueueDelete will core with a segment fault. \n\nIMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\nwhether the queue is detached or not.\n\n\n2. SyncRepWaitForLSN may not call SyncRepCancelWait if ereport check one interrupt.\nFor SyncRepWaitForLSN, if a query cancel interrupt arrives, we just terminate the wait \nwith suitable warning. As follows:\n\na. set QueryCancelPending to false\nb. errport outputs one warning\nc. calls SyncRepCancelWait to delete one element from the queue\n\nIf another cancel interrupt arrives when we are outputting warning at step b, the errfinish\nwill call CHECK_FOR_INTERRUPTS that will output an ERROR, such as \"canceling autovacuum\ntask\", then the process will jump to the sigsetjmp. Unfortunately, the step c will be skipped\nand the element that should be deleted by SyncRepCancelWait is remained.\n\nThe easiest way to fix this is to swap the order of step b and step c. On the other hand, \nlet sigsetjmp clean up the queue may also be a good choice. 
What do you think?\n\nAttached the patch, any feedback is greatly appreciated.\n\nBest regards,\n--\nDongming Liu", "msg_date": "Fri, 25 Oct 2019 15:18:34 +0800", "msg_from": "\"Dongming Liu\" <lingce.ldm@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?UHJvYmxlbSB3aXRoIHN5bmNocm9ub3VzIHJlcGxpY2F0aW9u?=" }, { "msg_contents": "Can someone help me to confirm that these two problems are bugs?\nIf they are bugs, please help review the patch or provide better fix suggestions.\nThanks.\n\nBest regards,\n--\nDongming Liu\n------------------------------------------------------------------\nFrom:LIU Dongming <lingce.ldm@alibaba-inc.com>\nSent At:2019 Oct. 25 (Fri.) 15:18\nTo:pgsql-hackers <pgsql-hackers@postgresql.org>\nSubject:Problem with synchronous replication\n\n\nHi,\n\nI recently discovered two possible bugs about synchronous replication.\n\n1. SyncRepCleanupAtProcExit may delete an element that has been deleted\nSyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, \nacquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender, \nit will be deleted repeatedly, SHMQueueDelete will core with a segment fault. \n\nIMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\nwhether the queue is detached or not.\n\n\n2. SyncRepWaitForLSN may not call SyncRepCancelWait if ereport check one interrupt.\nFor SyncRepWaitForLSN, if a query cancel interrupt arrives, we just terminate the wait \nwith suitable warning. As follows:\n\na. set QueryCancelPending to false\nb. errport outputs one warning\nc. calls SyncRepCancelWait to delete one element from the queue\n\nIf another cancel interrupt arrives when we are outputting warning at step b, the errfinish\nwill call CHECK_FOR_INTERRUPTS that will output an ERROR, such as \"canceling autovacuum\ntask\", then the process will jump to the sigsetjmp. 
Unfortunately, the step c will be skipped\nand the element that should be deleted by SyncRepCancelWait is remained.\n\nThe easiest way to fix this is to swap the order of step b and step c. On the other hand, \nlet sigsetjmp clean up the queue may also be a good choice. What do you think?\n\nAttached the patch, any feedback is greatly appreciated.\n\nBest regards,\n--\nDongming Liu\nCan someone help me to confirm that these two problems are bugs?If they are bugs, please help review the patch or provide better fix suggestions.Thanks.Best regards,--Dongming Liu------------------------------------------------------------------From:LIU Dongming <lingce.ldm@alibaba-inc.com>Sent At:2019 Oct. 25 (Fri.) 15:18To:pgsql-hackers <pgsql-hackers@postgresql.org>Subject:Problem with synchronous replicationHi,I recently discovered two possible bugs about synchronous replication.1. SyncRepCleanupAtProcExit may delete an element that has been deletedSyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, acquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender, it will be deleted repeatedly, SHMQueueDelete will core with a segment fault. IMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then checkwhether the queue is detached or not.2. SyncRepWaitForLSN may not call SyncRepCancelWait if ereport check one interrupt.For SyncRepWaitForLSN, if a query cancel interrupt arrives, we just terminate the wait with suitable warning. As follows:a. set QueryCancelPending to falseb. errport outputs one warningc. calls SyncRepCancelWait to delete one element from the queueIf another cancel interrupt arrives when we are outputting warning at step b, the errfinishwill call CHECK_FOR_INTERRUPTS that will output an ERROR, such as \"canceling autovacuumtask\", then the process will jump to the sigsetjmp. 
Unfortunately, the step c will be skippedand the element that should be deleted by SyncRepCancelWait is remained.The easiest way to fix this is to swap the order of step b and step c. On the other hand, let sigsetjmp clean up the queue may also be a good choice. What do you think?Attached the patch, any feedback is greatly appreciated.Best regards,--Dongming Liu", "msg_date": "Tue, 29 Oct 2019 13:40:41 +0800", "msg_from": "\"Dongming Liu\" <lingce.ldm@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?UmU6IFByb2JsZW0gd2l0aCBzeW5jaHJvbm91cyByZXBsaWNhdGlvbg==?=" }, { "msg_contents": "Hello.\n\nAt Fri, 25 Oct 2019 15:18:34 +0800, \"Dongming Liu\" <lingce.ldm@alibaba-inc.com> wrote in \n> \n> Hi,\n> \n> I recently discovered two possible bugs about synchronous replication.\n> \n> 1. SyncRepCleanupAtProcExit may delete an element that has been deleted\n> SyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, \n> acquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender, \n> it will be deleted repeatedly, SHMQueueDelete will core with a segment fault. \n> \n> IMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\n> whether the queue is detached or not.\n\nI think you're right here.\n\n> 2. SyncRepWaitForLSN may not call SyncRepCancelWait if ereport check one interrupt.\n> For SyncRepWaitForLSN, if a query cancel interrupt arrives, we just terminate the wait \n> with suitable warning. As follows:\n> \n> a. set QueryCancelPending to false\n> b. errport outputs one warning\n> c. calls SyncRepCancelWait to delete one element from the queue\n> \n> If another cancel interrupt arrives when we are outputting warning at step b, the errfinish\n> will call CHECK_FOR_INTERRUPTS that will output an ERROR, such as \"canceling autovacuum\n> task\", then the process will jump to the sigsetjmp. 
Unfortunately, the step c will be skipped\n> and the element that should be deleted by SyncRepCancelWait is remained.\n> \n> The easiest way to fix this is to swap the order of step b and step c. On the other hand, \n> let sigsetjmp clean up the queue may also be a good choice. What do you think?\n> \n> Attached the patch, any feedback is greatly appreciated.\n\nThis is not right. It is in transaction commit so it is in a\nHOLD_INTERRUPTS section. ProcessInterrupt does not respond to\ncancel/die interrupts thus the ereport should return.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 29 Oct 2019 19:50:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Tue, Oct 29, 2019 at 01:40:41PM +0800, Dongming Liu wrote:\n> Can someone help me to confirm that these two problems are bugs?\n> If they are bugs, please help review the patch or provide better fix\n> suggestions.\n\nThere is no need to send periodic pings. Sometimes it takes time to\nget replies as time is an important resource that is always limited.\nI can see that Horiguchi-san has already provided some feedback, and I\nam looking now at your suggestions.\n--\nMichael", "msg_date": "Wed, 30 Oct 2019 10:11:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Tue, Oct 29, 2019 at 07:50:01PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 25 Oct 2019 15:18:34 +0800, \"Dongming Liu\" <lingce.ldm@alibaba-inc.com> wrote in \n>> I recently discovered two possible bugs about synchronous replication.\n>> \n>> 1. SyncRepCleanupAtProcExit may delete an element that has been deleted\n>> SyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, \n>> acquires the SyncRepLock lock and deletes it. 
If this element has been deleted by walsender, \n>> it will be deleted repeatedly, SHMQueueDelete will core with a segment fault. \n>> \n>> IMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\n>> whether the queue is detached or not.\n> \n> I think you're right here.\n\nIndeed. Looking at the surroundings we expect some code paths to hold\nSyncRepLock in exclusive or shared mode but we don't actually check\nthat the lock is hold. So let's add some assertions while on it.\n\n> This is not right. It is in transaction commit so it is in a\n> HOLD_INTERRUPTS section. ProcessInterrupt does not respond to\n> cancel/die interrupts thus the ereport should return.\n\nYeah. There is an easy way to check after that: InterruptHoldoffCount\nneeds to be strictly positive.\n\nMy suggestions are attached. Any thoughts?\n--\nMichael", "msg_date": "Wed, 30 Oct 2019 10:45:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "At Wed, 30 Oct 2019 10:45:11 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Oct 29, 2019 at 07:50:01PM +0900, Kyotaro Horiguchi wrote:\n> > At Fri, 25 Oct 2019 15:18:34 +0800, \"Dongming Liu\" <lingce.ldm@alibaba-inc.com> wrote in \n> >> I recently discovered two possible bugs about synchronous replication.\n> >> \n> >> 1. SyncRepCleanupAtProcExit may delete an element that has been deleted\n> >> SyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, \n> >> acquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender, \n> >> it will be deleted repeatedly, SHMQueueDelete will core with a segment fault. \n> >> \n> >> IMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\n> >> whether the queue is detached or not.\n> > \n> > I think you're right here.\n> \n> Indeed. 
Looking at the surroundings we expect some code paths to hold\n> SyncRepLock in exclusive or shared mode but we don't actually check\n> that the lock is hold. So let's add some assertions while on it.\n\nI looked around closer.\n\nIf we do that strictly, other functions like\nSyncRepGetOldestSyncRecPtr need the same Assert()s. I think static\nfunctions don't need Assert() and caution in their comments would be\nenough.\n\nOn the other hand, the similar-looking code in SyncRepInitConfig and\nSyncRepUpdateSyncStandbysDefined seems safe since AFAICS it doesn't\nhave (this kind of) racing condition on wirtes. It might need a\ncomment like that. Or, we could go to (apparently) safer-side by\napplying the same amendment to it.\n\nSyncRepReleaseWaiters reads MyWalSnd->sync_standby_priority without\nholding SyncRepLock, which could lead to a message with wrong\npriority. I'm not sure it matters, though.\n\n> > This is not right. It is in transaction commit so it is in a\n> > HOLD_INTERRUPTS section. ProcessInterrupt does not respond to\n> > cancel/die interrupts thus the ereport should return.\n> \n> Yeah. There is an easy way to check after that: InterruptHoldoffCount\n> needs to be strictly positive.\n> \n> My suggestions are attached. Any thoughts?\n\nSeems reasonable for holdoffs. The same assertion would be needed in\nmore places but it's another issue.\n\n\nBy the way while I was looking this, I found a typo. 
Please find the\nattached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
    "msg_date": "Wed, 30 Oct 2019 12:34:28 +0900 (JST)",
    "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
    "msg_from_op": false,
    "msg_subject": "Re: Problem with synchronous replication"
  },
  {
    "msg_contents": "On Wed, Oct 30, 2019 at 9:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Oct 29, 2019 at 01:40:41PM +0800, Dongming Liu wrote:\n> > Can someone help me to confirm that these two problems are bugs?\n> > If they are bugs, please help review the patch or provide better fix\n> > suggestions.\n>\n> There is no need to send periodic pings. Sometimes it takes time to\n> get replies as time is an important resource that is always limited.\n>\n\n Thank you for your reply. I also realized my mistake, thank you for\ncorrecting me.\n\n\n> I can see that Horiguchi-san has already provided some feedback, and I\n> am looking now at your suggestions.\n>\n\nThanks again.",
    "msg_date": "Wed, 30 Oct 2019 12:22:12 +0800",
    "msg_from": "Dongming Liu <ldming101@gmail.com>",
    "msg_from_op": false,
    "msg_subject": "Re: Problem with synchronous replication"
  },
  {
    "msg_contents": "On Oct 29, 2019, at 18:50, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> Hello.\n> \n> At Fri, 25 Oct 2019 15:18:34 +0800, \"Dongming Liu\" <lingce.ldm@alibaba-inc.com> wrote in \n>> \n>> Hi,\n>> \n>> I recently discovered two possible bugs about synchronous replication.\n>> \n>> 1. SyncRepCleanupAtProcExit may delete an element that has been deleted\n>> SyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, \n>> acquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender, \n>> it will be deleted repeatedly, SHMQueueDelete will core with a segment fault. \n>> \n>> IMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\n>> whether the queue is detached or not.\n> \n> I think you're right here.\n\nThanks.\n\n> \n>> 2. SyncRepWaitForLSN may not call SyncRepCancelWait if ereport check one interrupt.\n>> For SyncRepWaitForLSN, if a query cancel interrupt arrives, we just terminate the wait \n>> with suitable warning. As follows:\n>> \n>> a. set QueryCancelPending to false\n>> b. errport outputs one warning\n>> c. calls SyncRepCancelWait to delete one element from the queue\n>> \n>> If another cancel interrupt arrives when we are outputting warning at step b, the errfinish\n>> will call CHECK_FOR_INTERRUPTS that will output an ERROR, such as \"canceling autovacuum\n>> task\", then the process will jump to the sigsetjmp. 
Unfortunately, the step c will be skipped\n>> and the element that should be deleted by SyncRepCancelWait is remained.\n>> \n>> The easiest way to fix this is to swap the order of step b and step c. On the other hand, \n>> let sigsetjmp clean up the queue may also be a good choice. What do you think?\n>> \n>> Attached the patch, any feedback is greatly appreciated.\n> \n> This is not right. It is in transaction commit so it is in a\n> HOLD_INTERRUPTS section. ProcessInterrupt does not respond to\n> cancel/die interrupts thus the ereport should return.\n\nI read the relevant code, you are right. I called SyncRepWaitForLSN somewhere else, \nbut forgot to put it in a HOLD_INTERRUPTS and triggered an exception.\n\nregards.\n\n—\nDongming Liu",
    "msg_date": "Wed, 30 Oct 2019 14:27:33 +0800",
    "msg_from": "lingce.ldm <lingce.ldm@alibaba-inc.com>",
    "msg_from_op": false,
    "msg_subject": "Re: Problem with synchronous replication"
  },
  {
    "msg_contents": "On Oct 30, 2019, at 09:45, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Oct 29, 2019 at 07:50:01PM +0900, Kyotaro Horiguchi wrote:\n>> At Fri, 25 Oct 2019 15:18:34 +0800, \"Dongming Liu\" <lingce.ldm@alibaba-inc.com> wrote in \n>>> I recently discovered two possible bugs about synchronous replication.\n>>> \n>>> 1. SyncRepCleanupAtProcExit may delete an element that has been deleted\n>>> SyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached, \n>>> acquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender, \n>>> it will be deleted repeatedly, SHMQueueDelete will core with a segment fault. \n>>> \n>>> IMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\n>>> whether the queue is detached or not.\n>> \n>> I think you're right here.\n> \n> Indeed. 
Looking at the surroundings we expect some code paths to hold\n> SyncRepLock in exclusive or shared mode but we don't actually check\n> that the lock is hold. So let's add some assertions while on it.\n> \n>> This is not right. It is in transaction commit so it is in a\n>> HOLD_INTERRUPTS section. ProcessInterrupt does not respond to\n>> cancel/die interrupts thus the ereport should return.\n> \n> Yeah. There is an easy way to check after that: InterruptHoldoffCount\n> needs to be strictly positive.\n> \n> My suggestions are attached. Any thoughts?\n\nThanks, this patch looks good to me.", "msg_date": "Wed, 30 Oct 2019 14:27:46 +0800", "msg_from": "lingce.ldm <lingce.ldm@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Wed, Oct 30, 2019 at 4:16 PM lingce.ldm <lingce.ldm@alibaba-inc.com> wrote:\n>\n> On Oct 29, 2019, at 18:50, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n>\n> Hello.\n>\n> At Fri, 25 Oct 2019 15:18:34 +0800, \"Dongming Liu\" <lingce.ldm@alibaba-inc.com> wrote in\n>\n>\n> Hi,\n>\n> I recently discovered two possible bugs about synchronous replication.\n>\n> 1. SyncRepCleanupAtProcExit may delete an element that has been deleted\n> SyncRepCleanupAtProcExit first checks whether the queue is detached, if it is not detached,\n> acquires the SyncRepLock lock and deletes it. If this element has been deleted by walsender,\n> it will be deleted repeatedly, SHMQueueDelete will core with a segment fault.\n>\n> IMO, like SyncRepCancelWait, we should lock the SyncRepLock first and then check\n> whether the queue is detached or not.\n>\n>\n> I think you're right here.\n\nThis change causes every ending backends to always take the exclusive lock\neven when it's not in SyncRep queue. This may be problematic, for example,\nwhen terminating multiple backends at the same time? 
If yes,\nit might be better to check SHMQueueIsDetached() again after taking the lock.\nThat is,\n\nif (!SHMQueueIsDetached(&(MyProc->syncRepLinks)))\n{\n LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);\n if (!SHMQueueIsDetached(&(MyProc->syncRepLinks)))\n SHMQueueDelete(&(MyProc->syncRepLinks));\n LWLockRelease(SyncRepLock);\n}\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Wed, 30 Oct 2019 17:21:17 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "Hello.\n\nAt Wed, 30 Oct 2019 17:21:17 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n> This change causes every ending backends to always take the exclusive lock\n> even when it's not in SyncRep queue. This may be problematic, for example,\n> when terminating multiple backends at the same time? If yes,\n> it might be better to check SHMQueueIsDetached() again after taking the lock.\n> That is,\n\nI'm not sure how much that harms but double-checked locking\n(releasing) is simple enough for reducing possible congestion here, I\nthink. In short, + 1 for that.\n\n> if (!SHMQueueIsDetached(&(MyProc->syncRepLinks)))\n> {\n> LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);\n> if (!SHMQueueIsDetached(&(MyProc->syncRepLinks)))\n> SHMQueueDelete(&(MyProc->syncRepLinks));\n> LWLockRelease(SyncRepLock);\n> }\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 30 Oct 2019 17:43:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Wed, Oct 30, 2019 at 05:21:17PM +0900, Fujii Masao wrote:\n> This change causes every ending backends to always take the exclusive lock\n> even when it's not in SyncRep queue. This may be problematic, for example,\n> when terminating multiple backends at the same time? 
If yes,\n> it might be better to check SHMQueueIsDetached() again after taking the lock.\n> That is,\n> \n> if (!SHMQueueIsDetached(&(MyProc->syncRepLinks)))\n> {\n> LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);\n> if (!SHMQueueIsDetached(&(MyProc->syncRepLinks)))\n> SHMQueueDelete(&(MyProc->syncRepLinks));\n> LWLockRelease(SyncRepLock);\n> }\n\nMakes sense. Thanks for the suggestion.\n--\nMichael", "msg_date": "Wed, 30 Oct 2019 22:00:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Wed, Oct 30, 2019 at 12:34:28PM +0900, Kyotaro Horiguchi wrote:\n> If we do that strictly, other functions like\n> SyncRepGetOldestSyncRecPtr need the same Assert()s. I think static\n> functions don't need Assert() and caution in their comments would be\n> enough.\n\nPerhaps. I'd rather be careful though if we meddle again with this\ncode in the future as it is shared across multiple places and\ncallers.\n\n> SyncRepReleaseWaiters reads MyWalSnd->sync_standby_priority without\n> holding SyncRepLock, which could lead to a message with wrong\n> priority. I'm not sure it matters, though.\n\nThe WAL sender is the only writer of its info in shared memory, so\nthere is no problem to have it read data without its spin lock hold.\n\n> Seems reasonable for holdoffs. The same assertion would be needed in\n> more places but it's another issue.\n\nSure.\n\n> By the way while I was looking this, I found a typo. 
Please find the\n> attached.\n\nThanks, applied that one.\n--\nMichael", "msg_date": "Thu, 31 Oct 2019 10:30:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Wed, Oct 30, 2019 at 05:43:04PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 30 Oct 2019 17:21:17 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n>> This change causes every ending backends to always take the exclusive lock\n>> even when it's not in SyncRep queue. This may be problematic, for example,\n>> when terminating multiple backends at the same time? If yes,\n>> it might be better to check SHMQueueIsDetached() again after taking the lock.\n>> That is,\n> \n> I'm not sure how much that harms but double-checked locking\n> (releasing) is simple enough for reducing possible congestion here, I\n> think.\n\nFWIW, I could not measure any actual difference with pgbench -C, up to\n500 sessions and an empty input file (just have one meta-command) and\n-c 20.\n\nI have added some comments in SyncRepCleanupAtProcExit(), and adjusted\nthe patch with the suggestion from Fujii-san. Any comments?\n--\nMichael", "msg_date": "Thu, 31 Oct 2019 11:11:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Thu, Oct 31, 2019 at 11:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 30, 2019 at 05:43:04PM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 30 Oct 2019 17:21:17 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in\n> >> This change causes every ending backends to always take the exclusive lock\n> >> even when it's not in SyncRep queue. This may be problematic, for example,\n> >> when terminating multiple backends at the same time? 
If yes,\n> >> it might be better to check SHMQueueIsDetached() again after taking the lock.\n> >> That is,\n> >\n> > I'm not sure how much that harms but double-checked locking\n> > (releasing) is simple enough for reducing possible congestion here, I\n> > think.\n>\n> FWIW, I could not measure any actual difference with pgbench -C, up to\n> 500 sessions and an empty input file (just have one meta-command) and\n> -c 20.\n>\n> I have added some comments in SyncRepCleanupAtProcExit(), and adjusted\n> the patch with the suggestion from Fujii-san. Any comments?\n\nThanks for the patch! Looks good to me.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Thu, 31 Oct 2019 17:38:32 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Problem with synchronous replication" }, { "msg_contents": "On Oct 31, 2019, at 10:11, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Oct 30, 2019 at 05:43:04PM +0900, Kyotaro Horiguchi wrote:\n>> At Wed, 30 Oct 2019 17:21:17 +0900, Fujii Masao <masao.fujii@gmail.com> wrote in \n>>> This change causes every ending backends to always take the exclusive lock\n>>> even when it's not in SyncRep queue. This may be problematic, for example,\n>>> when terminating multiple backends at the same time? If yes,\n>>> it might be better to check SHMQueueIsDetached() again after taking the lock.\n>>> That is,\n>> \n>> I'm not sure how much that harms but double-checked locking\n>> (releasing) is simple enough for reducing possible congestion here, I\n>> think.\n> \n> FWIW, I could not measure any actual difference with pgbench -C, up to\n> 500 sessions and an empty input file (just have one meta-command) and\n> -c 20.\n> \n> I have added some comments in SyncRepCleanupAtProcExit(), and adjusted\n> the patch with the suggestion from Fujii-san. Any comments?\n\nThanks for the patch. 
Looks good to me, +1.\n\nRegards,\n\n—\nDongming Liu",
    "msg_date": "Fri, 1 Nov 2019 13:27:54 +0800",
    "msg_from": "\"lingce.ldm\" <lingce.ldm@alibaba-inc.com>",
    "msg_from_op": false,
    "msg_subject": "Re: Problem with synchronous replication"
  },
  {
    "msg_contents": "On Thu, Oct 31, 2019 at 05:38:32PM +0900, Fujii Masao wrote:\n> Thanks for the patch! Looks good to me.\n\nThanks. I have applied the core fix down to 9.4, leaving the new\nassertion improvements only for HEAD.\n--\nMichael",
    "msg_date": "Fri, 1 Nov 2019 23:01:43 +0900",
    "msg_from": "Michael Paquier <michael@paquier.xyz>",
    "msg_from_op": false,
    "msg_subject": "Re: Problem with synchronous replication"
  }
]
[ { "msg_contents": "Today, I committed a patch (dddf4cdc) to reorder some of the file\nheader inclusions and buildfarm members prairiedog and locust failed\nas a result of that. The reason turns out to be that we have defined\na bool in pgtypeslib_extern.h and that definition is different from\nwhat we define in c.h.\n\nc.h defines it as:\n#ifndef bool\ntypedef unsigned char bool;\n#endif\n\npgtypeslib_extern.h defines it as:\n#ifndef bool\n#define bool char\n#endif\n\nPrior to dddf4cdc, pgtypeslib_extern.h was included as a first header\nbefore any usage of bool, but commit moves it after dt.h in file\ndt_common.c. Now, it seems like dt.h was using a version of bool as\ndefined in c.h and dt_common.c uses as defined by pgtypeslib_extern.h\nwhich leads to some compilation errors as below:\n\ndt_common.c:672: error: conflicting types for 'EncodeDateOnly'\ndt.h:321: error: previous declaration of 'EncodeDateOnly' was here\ndt_common.c:756: error: conflicting types for 'EncodeDateTime'\ndt.h:316: error: previous declaration of 'EncodeDateTime' was here\ndt_common.c:1783: error: conflicting types for 'DecodeDateTime'\ndt.h:324: error: previous declaration of 'DecodeDateTime' was here\nmake[4]: *** [dt_common.o] Error 1\n\nAs suggested by Andrew Gierth [1], I think we can remove the define in\npgtypeslib_extern.h as it doesn't seem to be exposed.\n\nThoughts?\n\nNote - For the time being, I have changed the order of file inclusions\n(c114229ca2) in dt_common.c as it was before so that the buildfarm\nbecomes green again.\n\n[1] - https://www.postgresql.org/message-id/87h83xmg4m.fsf%40news-spur.riddles.org.uk\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Oct 2019 15:18:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "define bool in pgtypeslib_extern.h" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> As suggested by Andrew Gierth [1], I 
think we can remove the define in\n> pgtypeslib_extern.h as it doesn't seem to be exposed.\n\nYeah, it's not good that that results in a header ordering dependency,\nand it doesn't seem like a good idea for pgtypeslib_extern.h to be\nmessing with the issue at all.\n\nIf you like, I can experiment with that locally on prairiedog's host\nbefore we make the buildfarm jump through hoops.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Oct 2019 09:41:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "I wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>> As suggested by Andrew Gierth [1], I think we can remove the define in\n>> pgtypeslib_extern.h as it doesn't seem to be exposed.\n\n> Yeah, it's not good that that results in a header ordering dependency,\n> and it doesn't seem like a good idea for pgtypeslib_extern.h to be\n> messing with the issue at all.\n> If you like, I can experiment with that locally on prairiedog's host\n> before we make the buildfarm jump through hoops.\n\nI checked that that works and fixes the immediate problem, so I pushed\nit. However, we're not out of the woods, because lookee here in\necpglib.h:\n\n#ifndef __cplusplus\n#ifndef bool\n#define bool char\n#endif /* ndef bool */\n\n#ifndef true\n#define true ((bool) 1)\n#endif /* ndef true */\n#ifndef false\n#define false ((bool) 0)\n#endif /* ndef false */\n#endif /* not C++ */\n\n#ifndef TRUE\n#define TRUE 1\n#endif /* TRUE */\n\n#ifndef FALSE\n#define FALSE 0\n#endif /* FALSE */\n\nThis stuff *is* exposed to client programs, so it's not clear how\npainless it'd be to monkey around with it. And it is used, further\ndown in the same file, so we can't fix it just by deleting it.\nNor can we import c.h to get the \"real\" definition from that.\n\nI'm more than slightly surprised that we haven't already seen\nproblems due to this conflicting with d26a810eb. 
I've not bothered\nto run to ground exactly why not, though.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Oct 2019 12:25:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "I wrote:\n> I checked that that works and fixes the immediate problem, so I pushed\n> it. However, we're not out of the woods, because lookee here in\n> ecpglib.h:\n> ...\n\nOh, and for extra fun, take a look in src/backend/utils/probes.d :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Oct 2019 12:52:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "On Fri, Oct 25, 2019 at 9:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> >> As suggested by Andrew Gierth [1], I think we can remove the define in\n> >> pgtypeslib_extern.h as it doesn't seem to be exposed.\n>\n> > Yeah, it's not good that that results in a header ordering dependency,\n> > and it doesn't seem like a good idea for pgtypeslib_extern.h to be\n> > messing with the issue at all.\n> > If you like, I can experiment with that locally on prairiedog's host\n> > before we make the buildfarm jump through hoops.\n>\n> I checked that that works and fixes the immediate problem, so I pushed\n> it.\n>\n\nThank you.\n\n> However, we're not out of the woods, because lookee here in\n> ecpglib.h:\n>\n> #ifndef __cplusplus\n> #ifndef bool\n> #define bool char\n> #endif /* ndef bool */\n>\n> #ifndef true\n> #define true ((bool) 1)\n> #endif /* ndef true */\n> #ifndef false\n> #define false ((bool) 0)\n> #endif /* ndef false */\n> #endif /* not C++ */\n>\n> #ifndef TRUE\n> #define TRUE 1\n> #endif /* TRUE */\n>\n> #ifndef FALSE\n> #define FALSE 0\n> #endif /* FALSE */\n>\n> This stuff *is* exposed to client programs, so it's not clear how\n> painless it'd 
be to monkey around with it. And it is used, further\n> down in the same file, so we can't fix it just by deleting it.\n> Nor can we import c.h to get the \"real\" definition from that.\n>\n> I'm more than slightly surprised that we haven't already seen\n> problems due to this conflicting with d26a810eb.\n>\n\nI think it is because it never gets any imports from c.h. It instead\nuses postgres_ext.h. If we want to fix this, the simplest thing that\ncomes to mind is to change the definition of bool in ecpglib.h and\nprobes.d to match with c.h. These files contain exposed interfaces,\nso the change can impact clients, but not sure what else we can do\nhere. I have also tried to think about moving bool definition to\npostgres_ext.h, but I think that won't be straightforward. OTOH, if\nyou think that might be worth investigating, I can spend some more\ntime to see if we can do that way.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 26 Oct 2019 08:51:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Fri, Oct 25, 2019 at 9:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, we're not out of the woods, because lookee here in\n>> ecpglib.h:\n>> #ifndef bool\n>> #define bool char\n>> #endif /* ndef bool */\n>> I'm more than slightly surprised that we haven't already seen\n>> problems due to this conflicting with d26a810eb.\n\n> I think it is because it never gets any imports from c.h.\n\nOn closer inspection, it seems to be just blind luck. 
For example,\nif I rearrange the inclusion order in a file using ecpglib.h:\n\ndiff --git a/src/interfaces/ecpg/ecpglib/data.c b/src/interfaces/ecpg/ecpglib/data.c\nindex 7d2a78a..09944ff 100644\n--- a/src/interfaces/ecpg/ecpglib/data.c\n+++ b/src/interfaces/ecpg/ecpglib/data.c\n@@ -6,8 +6,8 @@\n #include <math.h>\n \n #include \"ecpgerrno.h\"\n-#include \"ecpglib.h\"\n #include \"ecpglib_extern.h\"\n+#include \"ecpglib.h\"\n #include \"ecpgtype.h\"\n #include \"pgtypes_date.h\"\n #include \"pgtypes_interval.h\"\n\nthen on a PPC Mac I get\n\ndata.c:210: error: conflicting types for 'ecpg_get_data'\necpglib_extern.h:167: error: previous declaration of 'ecpg_get_data' was here\n\nIt's exactly the same problem as we saw with pgtypeslib_extern.h:\nheader ordering changes affect the meaning of uses of bool, and that's\njust not acceptable.\n\nIn this case it's even worse because we're mucking with type definitions\nin a user-visible header. I'm surprised we've not gotten bug reports\nabout that. Maybe all ECPG users include <stdbool.h> before they\ninclude ecpglib.h, but that doesn't exactly make things worry-free either,\nbecause code doing that will think that these functions return _Bool,\nwhen the compiled library possibly thinks differently. Probably the\nonly thing saving us is that sizeof(_Bool) is 1 on just about every\nplatform in common use nowadays.\n\nI'm inclined to think that we need to make ecpglib.h's bool-related\ndefinitions exactly match c.h, which will mean that it has to pull in\n<stdbool.h> on most platforms, which will mean adding a control symbol\nfor that to ecpg_config.h. 
I do not think we should export\nHAVE_STDBOOL_H and SIZEOF_BOOL there though; probably better to have\nconfigure make the choice and export something named like PG_USE_STDBOOL.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Oct 2019 13:19:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> On closer inspection, it seems to be just blind luck. For example,\n Tom> if I rearrange the inclusion order in a file using ecpglib.h:\n\nUgh.\n\n Tom> I'm inclined to think that we need to make ecpglib.h's\n Tom> bool-related definitions exactly match c.h,\n\nI'm wondering whether we should actually go the opposite way and say\nthat c.h's \"bool\" definition should be backend only, and that in\nfrontend code we should define a PG_bool type or something of that ilk\nfor when we want \"PG's 1-byte bool\" and otherwise let the platform\ndefine \"bool\" however it wants.\n\nAnd we certainly shouldn't be defining \"bool\" in something that's going\nto be included in the user's code the way that ecpglib.h is.\n\n-- \nAndrew.\n\n\n", "msg_date": "Sat, 26 Oct 2019 20:29:03 +0100", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> I'm inclined to think that we need to make ecpglib.h's\n> Tom> bool-related definitions exactly match c.h,\n\n> I'm wondering whether we should actually go the opposite way and say\n> that c.h's \"bool\" definition should be backend only, and that in\n> frontend code we should define a PG_bool type or something of that ilk\n> for when we want \"PG's 1-byte bool\" and otherwise let the platform\n> define \"bool\" however it wants.\n\n> And we certainly shouldn't be defining 
\"bool\" in something that's going\n> to be included in the user's code the way that ecpglib.h is.\n\nThe trouble here is the hazard of creating an ABI break, if we modify\necpglib.h in a way that causes its \"bool\" references to be interpreted\ndifferently than they were before. I don't think we want that (although\nI suspect we have inadvertently caused ABI breaks already on platforms\nwhere this matters).\n\nIn practice, since v11 on every modern platform, the exported ecpglib\nfunctions have supposed that \"bool\" is _Bool, because they were compiled\nin files that included c.h before ecpglib.h. I assert furthermore that\nclients might well have included <stdbool.h> before ecpglib.h and thereby\nbeen fully compatible with that. If we start having ecpglib.h include\n<stdbool.h> itself, we'll just be eliminating a minor header inclusion\norder hazard. It's also rather hard to argue that including <stdbool.h>\nautomatically is really likely to break anything that was including\necpglib.h already, since that file was already usurping those symbols.\nExcept on platforms where sizeof(_Bool) isn't 1, but things are already\npretty darn broken there.\n\nI think it's possible to construct a counterexample that will fail\nfor *anything* we can do here. I'm not inclined to uglify things like\nmad to reduce the problem space from 0.1% to 0.01% of use-cases, or\nwhatever the numbers would be in practice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Oct 2019 16:12:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "On Sat, Oct 26, 2019 at 10:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I'm inclined to think that we need to make ecpglib.h's bool-related\n> definitions exactly match c.h, which will mean that it has to pull in\n> <stdbool.h> on most platforms, which will mean adding a control symbol\n> for that to ecpg_config.h. 
I do not think we should export\n> HAVE_STDBOOL_H and SIZEOF_BOOL there though; probably better to have\n> configure make the choice and export something named like PG_USE_STDBOOL.\n>\n\nThis sounds reasonable to me, but we also might want to do something\nfor probes.d. To be clear, I am not immediately planning to write a\npatch for this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:30:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sat, Oct 26, 2019 at 10:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm inclined to think that we need to make ecpglib.h's bool-related\n>> definitions exactly match c.h, which will mean that it has to pull in\n>> <stdbool.h> on most platforms, which will mean adding a control symbol\n>> for that to ecpg_config.h. I do not think we should export\n>> HAVE_STDBOOL_H and SIZEOF_BOOL there though; probably better to have\n>> configure make the choice and export something named like PG_USE_STDBOOL.\n\n> This sounds reasonable to me, but we also might want to do something\n> for probes.d. To be clear, I am not immediately planning to write a\n> patch for this.\n\nAs far as probes.d goes, it seems to work to do\n\n@@ -20,7 +20,7 @@\n #define BlockNumber unsigned int\n #define Oid unsigned int\n #define ForkNumber int\n-#define bool char\n+#define bool _Bool\n \n provider postgresql {\n \nalthough removing the macro altogether leads to compilation failures.\nI surmise that dtrace is trying to compile the generated code without\nany #include's, so that only compiler built-in types will do.\n\n(I tried this on macOS, FreeBSD, and NetBSD, to the extent of seeing\nwhether a build with --enable-dtrace goes through. 
I don't know\nenough about dtrace to test the results easily, but I suppose that\nif it compiles then this is OK.)\n\nThis would, of course, not work on any platform where we're not\nusing <stdbool.h>, but I doubt that the set of platforms where\ndtrace works includes any such.\n\nA plausible alternative is to do\n\n-#define bool char\n+#define bool unsigned char\n\nwhich is correct on platforms where we don't use <stdbool.h>,\nand is at least no worse than now on those where we do. In\npractice, since we know sizeof(_Bool) == 1 on platforms where\nwe use it, this is probably just fine for dtrace's purposes.\n\nAnyone have a preference?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 13:57:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "On Mon, Oct 28, 2019 at 11:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sat, Oct 26, 2019 at 10:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'm inclined to think that we need to make ecpglib.h's bool-related\n> >> definitions exactly match c.h, which will mean that it has to pull in\n> >> <stdbool.h> on most platforms, which will mean adding a control symbol\n> >> for that to ecpg_config.h. I do not think we should export\n> >> HAVE_STDBOOL_H and SIZEOF_BOOL there though; probably better to have\n> >> configure make the choice and export something named like PG_USE_STDBOOL.\n>\n> > This sounds reasonable to me, but we also might want to do something\n> > for probes.d. 
To be clear, I am not immediately planning to write a\n> > patch for this.\n>\n> As far as probes.d goes, it seems to work to do\n>\n> @@ -20,7 +20,7 @@\n> #define BlockNumber unsigned int\n> #define Oid unsigned int\n> #define ForkNumber int\n> -#define bool char\n> +#define bool _Bool\n>\n> provider postgresql {\n>\n> although removing the macro altogether leads to compilation failures.\n> I surmise that dtrace is trying to compile the generated code without\n> any #include's, so that only compiler built-in types will do.\n>\n> (I tried this on macOS, FreeBSD, and NetBSD, to the extent of seeing\n> whether a build with --enable-dtrace goes through. I don't know\n> enough about dtrace to test the results easily, but I suppose that\n> if it compiles then this is OK.)\n>\n> This would, of course, not work on any platform where we're not\n> using <stdbool.h>, but I doubt that the set of platforms where\n> dtrace works includes any such.\n>\n> A plausible alternative is to do\n>\n> -#define bool char\n> +#define bool unsigned char\n>\n> which is correct on platforms where we don't use <stdbool.h>,\n> and is at least no worse than now on those where we do. 
In\n> practice, since we know sizeof(_Bool) == 1 on platforms where\n> we use it, this is probably just fine for dtrace's purposes.\n>\n> Anyone have a preference?\n>\n\n+1 for the second alternative as it will make it similar to c.h.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Nov 2019 15:04:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Mon, Oct 28, 2019 at 11:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A plausible alternative is to do\n>> \n>> -#define bool char\n>> +#define bool unsigned char\n>> \n>> which is correct on platforms where we don't use <stdbool.h>,\n>> and is at least no worse than now on those where we do. In\n>> practice, since we know sizeof(_Bool) == 1 on platforms where\n>> we use it, this is probably just fine for dtrace's purposes.\n\n> +1 for the second alternative as it will make it similar to c.h.\n\nYeah, that's the minimum-risk alternative. I'll go make it so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 06 Nov 2019 09:54:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> I'm wondering whether we should actually go the opposite way and say\n> that c.h's \"bool\" definition should be backend only, and that in\n> frontend code we should define a PG_bool type or something of that ilk\n> for when we want \"PG's 1-byte bool\" and otherwise let the platform\n> define \"bool\" however it wants.\n> And we certainly shouldn't be defining \"bool\" in something that's going\n> to be included in the user's code the way that ecpglib.h is.\n\nI experimented with doing things that way, and ended up with the attached\ndraft patch. 
It basically gets ecpglib.h out of the business of declaring\nany bool-related stuff at all, instead insisting that client code include\n<stdbool.h> or otherwise declare bool for itself. The function\ndeclarations that were previously relying on \"bool\" now use the \"pqbool\"\ntypedef that libpq-fe.h was already exporting. Per discussion, that's\nnot an ABI break, even on platforms where sizeof(_Bool) > 1, because\nthe actual underlying library functions are certainly expecting to take\nor return a value of size 1.\n\nWhile this seems like a generally cleaner place to be, I'm a bit concerned\nabout a number of aspects:\n\n* This will of course be an API break for clients, which might not've\nincluded <stdbool.h> before.\n\n* On platforms where sizeof(_Bool) > 1, it's far from clear to me that\nECPG will interface correctly with client code that is treating bool\nas _Bool. There are some places that seem to be prepared for bool\nclient variables to be either sizeof(char) or sizeof(int), for example\necpg_store_input(), but there are a fair number of other places that\nseem to assume that sizeof(bool) is relevant, which it won't be.\nThe ECPG regression tests do pass for me on a PPC Mac, but I wonder\nhow much that proves.\n\n* The \"sql/dyntest.pgc\" test declares BOOLVAR as \"char\" and then does\n\n exec sql var BOOLVAR is bool;\n\nIt's not clear to me what the implications of that statement are\n(and our manual is no help), but looking at the generated code,\nit seems like this causes ecpg to believe that the size of the\nvariable is sizeof(bool). So that looks like buffer overrun\ntrouble waiting to happen. I changed the variable declaration to\n\"bool\" in the attached, but I wonder what's supposed to be getting\ntested there.\n\nOn the whole I'm not finding this an attractive way to proceed\ncompared to the other approach I sketched. 
It will certainly\ncause some clients to have compile failures, and I'm at best\nqueasy about whether it will really work on platforms where\nsizeof(_Bool) > 1. I think we're better off to go with the\nother approach of making ecpglib.h export what we think the\ncorrect definition of bool is. For most people that will\nend up being <stdbool.h>, which I think will be unsurprising.\n\n\t\t\tregards, tom lane\n\nPS: another issue this fixes, which I think we ought to fix and back-patch\nregardless of what we decide about bool, is it moves the declaration for\necpg_gettext() out of ecpglib.h and into the private header\necpglib_extern.h. That function isn't meant for client access, the\ndeclaration is wrong where it is because it is not inside extern \"C\",\nand the declaration wouldn't even compile for clients because they\nwill not know what pg_attribute_format_arg() is. The only reason we've\nnot had complaints, I imagine, is that nobody's tried to compile client\ncode with ENABLE_NLS defined ... but that's already an intrusion on\nclient namespace.", "msg_date": "Wed, 06 Nov 2019 14:48:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "I wrote:\n> I'm inclined to think that we need to make ecpglib.h's bool-related\n> definitions exactly match c.h, which will mean that it has to pull in\n> <stdbool.h> on most platforms, which will mean adding a control symbol\n> for that to ecpg_config.h. I do not think we should export\n> HAVE_STDBOOL_H and SIZEOF_BOOL there though; probably better to have\n> configure make the choice and export something named like PG_USE_STDBOOL.\n\nHere's a proposed patch that does it like that.\n\nI'm of two minds about whether to back-patch or not. This shouldn't\nreally change anything except on platforms where sizeof(_Bool) isn't\none. 
We have some reason to think that nobody is actually using\necpg on such platforms :-(, because if they were, they'd likely have\ncomplained about breakage. So maybe we should just put this in HEAD\nand be done.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 07 Nov 2019 15:47:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: define bool in pgtypeslib_extern.h" }, { "msg_contents": "On Fri, Nov 8, 2019 at 2:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > I'm inclined to think that we need to make ecpglib.h's bool-related\n> > definitions exactly match c.h, which will mean that it has to pull in\n> > <stdbool.h> on most platforms, which will mean adding a control symbol\n> > for that to ecpg_config.h. I do not think we should export\n> > HAVE_STDBOOL_H and SIZEOF_BOOL there though; probably better to have\n> > configure make the choice and export something named like PG_USE_STDBOOL.\n>\n> Here's a proposed patch that does it like that.\n>\n> I'm of two minds about whether to back-patch or not. This shouldn't\n> really change anything except on platforms where sizeof(_Bool) isn't\n> one. We have some reason to think that nobody is actually using\n> ecpg on such platforms :-(, because if they were, they'd likely have\n> complained about breakage.\n>\n\nYeah, this is a valid point, but I think this would have caused\nbreakage only after d26a810eb which is a recent change. If that is\nright, then I am not sure such an assumption is safe. 
Also, we have\nalready backpatched the probes.d change, so it seems reasonable to\nmake this change and keep the bool definition consistent in code.\nOTOH, I think there is no harm in making this change for HEAD and if\nlater we face any complaint, we can backpatch it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Nov 2019 08:06:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: define bool in pgtypeslib_extern.h" } ]
[ { "msg_contents": "Dear all,\n\nWe stumbled upon a few cases in which retrieving information from the\nforeign server may turn pretty useful before creating any foreign\ntable, especially info related to the catalog. E.g: a list of schemas\nor tables the user has access to.\n\nI thought of using dblink for it, but that requires duplication of\nserver and user mapping details and it adds its own management of\nconnections.\n\nThen I thought a better approach may be a mix of both: a function to\nissue arbitrary queries to the foreign server reusing all the details\nencapsulated in the server and user mapping. It would use the same\npool of connections.\n\nE.g:\n\nCREATE FUNCTION postgres_fdw_query(server name, sql text)\nRETURNS SETOF record\n\nSELECT * FROM postgres_fdw_query('foreign_server', $$SELECT table_name,\ntable_type\n FROM information_schema.tables\n WHERE table_schema = 'public'\n ORDER BY table_name$$\n) AS schemas(table_name text, table_type text);\n\nFind attached a patch with a working PoC (with some code from\ndblink). It is not meant to be perfect yet.\n\nIs this something you may be interested in having as part of\npostgres_fdw? Thoughts?\n\nThanks\n-Rafa de la Torre", "msg_date": "Fri, 25 Oct 2019 17:17:18 +0200", "msg_from": "rtorre@carto.com", "msg_from_op": true, "msg_subject": "[Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "rtorre@carto.com writes:\n> We stumbled upon a few cases in which retrieving information from the\n> foreign server may turn pretty useful before creating any foreign\n> table, especially info related to the catalog. 
E.g: a list of schemas\n> or tables the user has access to.\n\n> I thought of using dblink for it, but that requires duplication of\n> server and user mapping details and it adds its own management of\n> connections.\n\n> Then I thought a better approach may be a mix of both: a function to\n> issue arbitrary queries to the foreign server reusing all the details\n> encapsulated in the server and user mapping. It would use the same\n> pool of connections.\n\ndblink can already reference a postgres_fdw \"server\" for connection\ndetails, so I think this problem is already solved from the usability\nend of things. And allowing arbitrary queries to go over a postgres_fdw\nconnection would be absolutely disastrous from a debuggability and\nmaintainability standpoint, because they might change the remote\nsession's state in ways that postgres_fdw isn't prepared to handle.\n(In a dblink connection, the remote session's state is the user's\nresponsibility to manage, but this isn't the case for postgres_fdw.)\nSo I think this proposal has to be firmly rejected.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Oct 2019 12:38:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "On Fri, Oct 25, 2019 at 05:17:18PM +0200, rtorre@carto.com wrote:\n> Dear all,\n> \n> We stumbled upon a few cases in which retrieving information from the\n> foreign server may turn pretty useful before creating any foreign\n> table, especially info related to the catalog. E.g: a list of schemas\n> or tables the user has access to.\n> \n> I thought of using dblink for it, but that requires duplication of\n> server and user mapping details and it adds its own management of\n> connections.\n> \n> Then I thought a better approach may be a mix of both: a function to\n> issue arbitrary queries to the foreign server reusing all the details\n> encapsulated in the server and user mapping. 
It would use the same\n> pool of connections.\n\nThere's a SQL MED standard feature for CREATE ROUTINE MAPPING that\ndoes something similar to this. Might it be possible to incorporate\nit into the previous patch that implemented that feature?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sun, 27 Oct 2019 19:07:20 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "On Sun, Oct 27, 2019 at 7:07 PM David Fetter <david@fetter.org> wrote:\n>\n> There's a SQL MED standard feature for CREATE ROUTINE MAPPING that\n> does something similar to this. Might it be possible to incorporate\n> it into the previous patch that implemented that feature?\n\nThanks for the idea, David. I'll investigate it and hopefully\ncome up with a more standard proposal.\n\nBest regards\n-Rafa", "msg_date": "Mon, 28 Oct 2019 10:09:44 +0100", "msg_from": "rtorre@carto.com", "msg_from_op": true, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "On Fri, Oct 25, 2019 at 12:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> end of things. 
And allowing arbitrary queries to go over a postgres_fdw\n> connection would be absolutely disastrous from a debuggability and\n> maintainability standpoint, because they might change the remote\n> session's state in ways that postgres_fdw isn't prepared to handle.\n> (In a dblink connection, the remote session's state is the user's\n> responsibility to manage, but this isn't the case for postgres_fdw.)\n> So I think this proposal has to be firmly rejected.\n\nI think the reduction in debuggability and maintainability has to be\nbalanced against a possible significant gain in usability. I mean,\nyou could document that if the values of certain GUCs are changed, or\nif you create and drop prepared statements with certain names, it\nmight cause queries in the same session issued through the regular\nforeign table API to produce wrong answers. That would still leave an\nenormous number of queries that users could issue with absolutely no\nproblems. I really don't see a bona fide maintainability problem here.\nWhen someone produces a reproducible test case showing that they did\none of the things we told them not to do, then we'll tell them to read\nthe fine manual and move on.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 08:53:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "On Sun, Oct 27, 2019 at 7:07 PM David Fetter <david@fetter.org> wrote:\n> There's a SQL MED standard feature for CREATE ROUTINE MAPPING that\n> does something similar to this. Might it be possible to incorporate\n> it into the previous patch that implemented that feature?\n\nSupporting CREATE ROUTINE MAPPING goes a level beyond\npostgres_fdw. 
It'd require adding new DDL syntax elements to the\nparser, catalog and callbacks for the FDW's to support them.\n\nFor the record, there's a very interesting thread on this topic (that\nyou participated in):\nhttps://www.postgresql.org/message-id/flat/CADkLM%3DdK0dmkzLhaLPpnjN2wBF5GRpvzOr%3DeW0EWdCnG-OHnpQ%40mail.gmail.com\n\nI know they have different semantics and may turn more limiting, but\nfor certain use cases, the `extensions` parameter of postgres_fdw may\ncome in handy (shipping function calls to the foreign end from\nextensions present in both local and foreign).\n\nFor my use case, which is retrieving catalog info before any CREATE\nFOREIGN TABLE, CREATE ROUTINE MAPPING is not really a good fit.\n\nThank you for pointing out anyway.\n-Rafa", "msg_date": "Tue, 5 Nov 2019 11:09:34 +0100", "msg_from": "rtorre@carto.com", "msg_from_op": true, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "> On Fri, Oct 25, 2019 at 12:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > end of things. And allowing arbitrary queries to go over a postgres_fdw\n> > connection would be absolutely disastrous from a debuggability and\n> > maintainability standpoint, because they might change the remote\n> > session's state in ways that postgres_fdw isn't prepared to handle.\n> > (In a dblink connection, the remote session's state is the user's\n> > responsibility to manage, but this isn't the case for postgres_fdw.)\n> > So I think this proposal has to be firmly rejected.\n\nOn Mon, Oct 28, 2019 at 1:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think the reduction in debuggability and maintainability has to be\n> balanced against a possible significant gain in usability. I mean,\n> you could document that if the values of certain GUCs are changed, or\n> if you create and drop prepared statements with certain names, it\n> might cause queries in the same session issued through the regular\n> foreign table API to produce wrong answers. 
That would still leave an\n> enormous number of queries that users could issue with absolutely no\n> problems.\n\nI understand both points, the alternatives and the tradeoffs.\n\nMy motivations not use dblink are twofold: to purposefully reuse the\nconnection pool in postgres_fdw, and to avoid installing another\nextension. I cannot speak to whether this can be advantageous to\nothers to accept the tradeoffs.\n\nIf you are still interested, I'm willing to listen to the feedback and\ncontinue improving the patch. Otherwise we can settle it here and (of\ncourse!) I won't take any offense because of that.\n\nFind attached v2 of the patch with the following changes:\n- added support for commands, as it failed upon PGRES_COMMAND_OK (with\n tests with prepared statements)\n- documentation for the new function, with the mentioned caveats\n- removed the test with the `SELECT current_user`, because it produced\n different results depending on the execution environment.\n\nRegards\n-Rafa", "msg_date": "Tue, 5 Nov 2019 17:49:27 +0100", "msg_from": "rtorre@carto.com", "msg_from_op": true, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "On Tue, Nov 05, 2019 at 11:09:34AM +0100, rtorre@carto.com wrote:\n> On Sun, Oct 27, 2019 at 7:07 PM David Fetter <david@fetter.org> wrote:\n> > There's a SQL MED standard feature for CREATE ROUTINE MAPPING that\n> > does something similar to this. Might it be possible to incorporate\n> > it into the previous patch that implemented that feature?\n> \n> Supporting CREATE ROUTINE MAPPING goes a level beyond\n> postgres_fdw. 
It'd require adding new DDL syntax elements to the\n> parser, catalog and callbacks for the FDW's to support them.\n> \n> For the record, there's a very interesting thread on this topic (that\n> you participated in):\n> https://www.postgresql.org/message-id/flat/CADkLM%3DdK0dmkzLhaLPpnjN2wBF5GRpvzOr%3DeW0EWdCnG-OHnpQ%40mail.gmail.com\n> \n> I know they have different semantics and may turn more limiting, but\n> for certain use cases, the `extensions` parameter of postgres_fdw may\n> come in handy (shipping function calls to the foreign end from\n> extensions present in both local and foreign).\n> \n> For my use case, which is retrieving catalog info before any CREATE\n> FOREIGN TABLE,\n\nCould you use IMPORT FOREIGN SCHEMA for that? I seem to recall that\nI've managed to import information_schema successfully.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 5 Nov 2019 19:41:55 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" }, { "msg_contents": "On Tue, Nov 5, 2019 at 7:41 PM David Fetter <david@fetter.org> wrote:\n> Could you use IMPORT FOREIGN SCHEMA for that? I seem to recall that\n> I've managed to import information_schema successfully.\n\nYes, I tried it and I can import and operate on the\ninformation_schema, which actually covers part of my needs. It does so\nat the expense of polluting the catalog with foreign tables, but I can\nlive with it. Thanks for pointing out.\n\nThere are other cases that can be covered with either this proposal or\nCREATE ROUTINE MAPPING, but not with the current state of things (as\nfar as I know). E.g: calling version() or postgis_version() on the\nforeign end.\n\nIt's largely a matter of convenience vs development effort. 
That said,\nI understand this may not make the design quality cut.\n\nRegards\n-Rafa", "msg_date": "Wed, 6 Nov 2019 11:37:33 +0100", "msg_from": "rtorre@carto.com", "msg_from_op": true, "msg_subject": "Re: [Proposal] Arbitrary queries in postgres_fdw" } ]
[ { "msg_contents": "Hi list,\n\nWhen investigating for the bug reported in thread \"logical replication -\nnegative bitmapset member not allowed\", I found a way to seg fault postgresql\nonly when cassert is enabled.\n\nSee the scenario in attachment.\n\nWhen executed against binaries compiled with --enable-cassert, I have the\nfollowing error in logs:\n\n LOG: 00000: background worker \"logical replication worker\" (PID 761) was\n terminated by signal 11: Segmentation fault\n\nHere is the stack trace:\n\n#0 in slot_store_cstrings (slot=0x55a3c6973b48, rel=0x55a3c6989468,\n values=0x7ffe08ae67b0) at worker.c:330\n#1 in apply_handle_update (s=0x7ffe08aeddb0) at worker.c:712\n#2 in apply_dispatch (s=0x7ffe08aeddb0) at worker.c:968\n#3 in LogicalRepApplyLoop (last_received=87957952) at worker.c:1175\n#4 in ApplyWorkerMain (main_arg=0) at worker.c:1733\n#5 in StartBackgroundWorker () at bgworker.c:834\n#6 in do_start_bgworker (rw=0x55a3c68c16d0) at postmaster.c:5763\n#7 in maybe_start_bgworkers () at postmaster.c:5976\n#8 in sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5161\n#9 <signal handler called>\n#10 in __GI___select (nfds=6, readfds=0x7ffe08aee680, writefds=0x0,\n exceptfds=0x0, timeout=0x7ffe08aee700)\n at ../sysdeps/unix/sysv/linux/select.c:41\n#11 in ServerLoop () at postmaster.c:1668\n#12 in PostmasterMain (argc=3, argv=0x55a3c6899820) at postmaster.c:1377\n#13 in main (argc=3, argv=0x55a3c6899820) at main.c:228\n\n\nIt leads to this conditional test in worker.c:slot_store_cstrings\n\n for (i = 0; i < natts; i++)\n { [...]\n if (!att->attisdropped && remoteattnum >= 0 &&\n values[remoteattnum] != NULL)\n\nIn gdb, I found remoteattnum seems to be not correctly initialized for the\nlatest column the scenario adds in pgbench_branches:\n\n (gdb) p remoteattnum\n $1 = 32639\n (gdb) p i\n $2 = 3\n\nI hadn't time to digg further yet. 
However, I don't understand why this crash\nis triggered when cassert is enabled.\n\nRegards,", "msg_date": "Fri, 25 Oct 2019 17:59:29 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "segmentation fault when cassert enabled" }, { "msg_contents": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com> writes:\n> When investigating for the bug reported in thread \"logical replication -\n> negative bitmapset member not allowed\", I found a way to seg fault postgresql\n> only when cassert is enabled.\n> ...\n> I hadn't time to digg further yet. However, I don't understand why this crash\n> is triggered when cassert is enabled.\n\nMost likely, it's not so much assertions that provoke the crash as\nCLOBBER_FREED_MEMORY, ie the actual problem here is use of already-freed\nmemory.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Oct 2019 12:28:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "At Fri, 25 Oct 2019 12:28:38 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> writes:\n> > When investigating for the bug reported in thread \"logical replication -\n> > negative bitmapset member not allowed\", I found a way to seg fault postgresql\n> > only when cassert is enabled.\n> > ...\n> > I hadn't time to digg further yet. However, I don't understand why this crash\n> > is triggered when cassert is enabled.\n> \n> Most likely, it's not so much assertions that provoke the crash as\n> CLOBBER_FREED_MEMORY, ie the actual problem here is use of already-freed\n> memory.\n\nAgreed.\n\nBy the way I didn't get a crash by Jehan's script with the\n--enable-cassert build of the master HEAD of a few days ago.\n\nFWIW I sometimes got SEGVish crashes or mysterious misbehavor when\nsome structs were changed and I didn't do \"make clean\". Rarely I\nneeded \"make distclean\". 
(Yeah, I didn't ususally turn on\n--enable-depend..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Oct 2019 16:47:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Fri, 25 Oct 2019 12:28:38 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> writes:\n> > When investigating for the bug reported in thread \"logical replication -\n> > negative bitmapset member not allowed\", I found a way to seg fault\n> > postgresql only when cassert is enabled.\n> > ...\n> > I hadn't time to digg further yet. However, I don't understand why this\n> > crash is triggered when cassert is enabled. \n> \n> Most likely, it's not so much assertions that provoke the crash as\n> CLOBBER_FREED_MEMORY, ie the actual problem here is use of already-freed\n> memory.\n\nThank you. Indeed, enabling CLOBBER_FREED_MEMORY on its own is enough to\ntrigger the segfault.\n\nIn fact, valgrind detect it as an uninitialised value, no matter\nCLOBBER_FREED_MEMORY is defined or not:\n\n Conditional jump or move depends on uninitialised value(s)\n at 0x43F410: slot_modify_cstrings (worker.c:398)\n by 0x43FBE9: apply_handle_update (worker.c:744)\n by 0x440088: apply_dispatch (worker.c:968)\n by 0x4405D7: LogicalRepApplyLoop (worker.c:1175)\n by 0x440CD0: ApplyWorkerMain (worker.c:1733)\n by 0x411C34: StartBackgroundWorker (bgworker.c:834)\n by 0x41EA24: do_start_bgworker (postmaster.c:5763)\n by 0x41EB6F: maybe_start_bgworkers (postmaster.c:5976)\n by 0x41F562: sigusr1_handler (postmaster.c:5161)\n by 0x48A072F: ??? 
(in /lib/x86_64-linux-gnu/libpthread-2.28.so)\n by 0x4B31FF6: select (select.c:41)\n by 0x41FDDE: ServerLoop (postmaster.c:1668)\n Uninitialised value was created by a heap allocation\n at 0x5C579B: palloc (mcxt.c:949)\n by 0x437116: logicalrep_rel_open (relation.c:270)\n by 0x43FA8F: apply_handle_update (worker.c:684)\n by 0x440088: apply_dispatch (worker.c:968)\n by 0x4405D7: LogicalRepApplyLoop (worker.c:1175)\n by 0x440CD0: ApplyWorkerMain (worker.c:1733)\n by 0x411C34: StartBackgroundWorker (bgworker.c:834)\n by 0x41EA24: do_start_bgworker (postmaster.c:5763)\n by 0x41EB6F: maybe_start_bgworkers (postmaster.c:5976)\n by 0x41F562: sigusr1_handler (postmaster.c:5161)\n by 0x48A072F: ??? (in /lib/x86_64-linux-gnu/libpthread-2.28.so)\n by 0x4B31FF6: select (select.c:41)\n\nMy best bet so far is that logicalrep_relmap_invalidate_cb is not called after\nthe DDL on the subscriber so the relmap cache is not invalidated. So we end up\nwith slot->tts_tupleDescriptor->natts superior than rel->remoterel->natts in\nslot_store_cstrings, leading to the overflow on attrmap and the sigsev.\n\nI hadn't follow this path yet.\n\nBy the way, I noticed attrmap is declared as AttrNumber * in struct\nLogicalRepRelMapEntry, AttrNumber being typedef'd as an int16. However, attrmap\nis allocated based on sizeof(int) in logicalrep_rel_open:\n\n entry->attrmap = palloc(desc->natts * sizeof(int));\n\nIt doesn't look like a major problem, it just allocates more memory than\nneeded.\n\nRegards,\n\n\n", "msg_date": "Tue, 5 Nov 2019 17:29:18 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On 2019-11-05 17:29, Jehan-Guillaume de Rorthais wrote:\n> My best bet so far is that logicalrep_relmap_invalidate_cb is not called after\n> the DDL on the subscriber so the relmap cache is not invalidated. 
So we end up\n> with slot->tts_tupleDescriptor->natts superior than rel->remoterel->natts in\n> slot_store_cstrings, leading to the overflow on attrmap and the sigsev.\n\nIt looks like something like that is happening. But it shouldn't. \nDifferent table schemas on publisher and subscriber are well supported, \nso this must be an edge case of some kind. Please continue investigating.\n\n> By the way, I noticed attrmap is declared as AttrNumber * in struct\n> LogicalRepRelMapEntry, AttrNumber being typedef'd as an int16. However, attrmap\n> is allocated based on sizeof(int) in logicalrep_rel_open:\n> \n> entry->attrmap = palloc(desc->natts * sizeof(int));\n> \n> It doesn't look like a major problem, it just allocates more memory than\n> needed.\n\nRight. I have committed a fix for this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 Nov 2019 14:34:38 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Mon, 28 Oct 2019 16:47:02 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Fri, 25 Oct 2019 12:28:38 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> writes: \n> > > When investigating for the bug reported in thread \"logical replication -\n> > > negative bitmapset member not allowed\", I found a way to seg fault\n> > > postgresql only when cassert is enabled.\n> > > ...\n> > > I hadn't time to digg further yet. However, I don't understand why this\n> > > crash is triggered when cassert is enabled. \n> > \n> > Most likely, it's not so much assertions that provoke the crash as\n> > CLOBBER_FREED_MEMORY, ie the actual problem here is use of already-freed\n> > memory. 
\n> \n> Agreed.\n> \n> By the way I didn't get a crash by Jehan's script with the\n> --enable-cassert build of the master HEAD of a few days ago.\n\nI am now working with HEAD and I can confirm I am able to make it crash 99% of\nthe time using my script.\nIt feels like a race condition between cache invalidation and record\nprocessing from worker.c. Make sure you have enough write activity during the\ntest.\n\n> FWIW I sometimes got SEGVish crashes or mysterious misbehavor when\n> some structs were changed and I didn't do \"make clean\". Rarely I\n> needed \"make distclean\". (Yeah, I didn't ususally turn on\n> --enable-depend..)\n\nI'm paranoid, I always do:\n\n* make distclean\n* git reset; git clean -df\n* ./configure && make install\n\nRegards,\n\n\n", "msg_date": "Tue, 12 Nov 2019 18:00:43 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Wed, 6 Nov 2019 14:34:38 +0100\nPeter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-11-05 17:29, Jehan-Guillaume de Rorthais wrote:\n> > My best bet so far is that logicalrep_relmap_invalidate_cb is not called\n> > after the DDL on the subscriber so the relmap cache is not invalidated. So\n> > we end up with slot->tts_tupleDescriptor->natts superior than\n> > rel->remoterel->natts in slot_store_cstrings, leading to the overflow on\n> > attrmap and the sigsev. \n> \n> It looks like something like that is happening. But it shouldn't. \n> Different table schemas on publisher and subscriber are well supported, \n> so this must be an edge case of some kind. 
Please continue investigating.\n\nI've been able to find the origin of the crash, but it was a long journey.\n\n<debugger hard life>\n\n I was unable to debug using gdb record because of this famous error:\n\n Process record does not support instruction 0xc5 at address 0x1482758a4b30.\n\n Program stopped.\n __memset_avx2_unaligned_erms ()\n at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:168\n 168\t../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such\n file or directory.\n\n Trying with rr, I had constant \"stack depth limit exceeded\", even with\n unlimited one. Is it worth opening a discussion or a wiki page about these\n tools? Peter, it looks like you have some experience with rr, any feedback?\n\n Finally, Julien Rouhaud spent some time with me after work hours, answering\n my questions about some parts of the code and pointed me to the excellent\n backtrace_functions GUC addition a few days ago. This finally did the trick to\n find out what was happening. Many thanks Julien!\n\n</debugger hard life>\n\nBack to the bug itself. Consider a working logical replication with constant\nupdate/insert activity, eg. pgbench running against the provider.\n\nNow, on the subscriber side, a session issues an \"ALTER TABLE ADD\nCOLUMN\" on a subscribed table, eg. pgbench_branches. A cache invalidation\nmessage is then pending for this table.\n\nIn the meantime, the logical replication worker receives an UPDATE to apply. It\nopens the local relation using \"logicalrep_rel_open\". It finds the related\nentry in LogicalRepRelMap is valid, so it does not update its attrmap\nand directly opens and locks the local relation:\n\n /* Need to update the local cache? 
*/\n if (!OidIsValid(entry->localreloid))\n {\n [...updates attrmap here...]\n }\n else\n entry->localrel = table_open(entry->localreloid, lockmode);\n\nHowever, when locking the table, the code in LockRelationOid() actually process\nany pending invalidation messages:\n\n LockRelationOid(Oid relid, LOCKMODE lockmode)\n {\n [...]\n /*\n * Now that we have the lock, check for invalidation messages, so that we\n * will update or flush any stale relcache entry before we try to use it.\n * RangeVarGetRelid() specifically relies on us for this. We can skip\n * this in the not-uncommon case that we already had the same type of lock\n * being requested, since then no one else could have modified the\n * relcache entry in an undesirable way. (In the case where our own xact\n * modifies the rel, the relcache update happens via\n * CommandCounterIncrement, not here.)\n *\n * However, in corner cases where code acts on tables (usually catalogs)\n * recursively, we might get here while still processing invalidation\n * messages in some outer execution of this function or a sibling. The\n * \"cleared\" status of the lock tells us whether we really are done\n * absorbing relevant inval messages.\n */\n if (res != LOCKACQUIRE_ALREADY_CLEAR)\n {\n AcceptInvalidationMessages();\n MarkLockClear(locallock);\n }\n }\n\nWe end up with attrmap referencing N columns and the relcache referencing N+1\ncolumns. Later, in apply_handle_update(), we build a TupleTableSlot based on\nthe relcache representation and we crash by overflowing attrmap while trying to\nfeed this larger slot in slot_store_cstrings().\n\nPlease find in attachment a bugfix proposal. 
It just moves the attrmap update\nafter the table_open() call.\n\nLast, I was wondering if entry->attrmap should be pfree'd before palloc'ing it\nagain during its rebuild to avoid some memory leak?\n\nRegards,", "msg_date": "Mon, 25 Nov 2019 15:55:19 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Mon, Nov 25, 2019 at 8:25 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Wed, 6 Nov 2019 14:34:38 +0100\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>\n> > On 2019-11-05 17:29, Jehan-Guillaume de Rorthais wrote:\n> > > My best bet so far is that logicalrep_relmap_invalidate_cb is not called\n> > > after the DDL on the subscriber so the relmap cache is not invalidated. So\n> > > we end up with slot->tts_tupleDescriptor->natts superior than\n> > > rel->remoterel->natts in slot_store_cstrings, leading to the overflow on\n> > > attrmap and the sigsev.\n> >\n> > It looks like something like that is happening. But it shouldn't.\n> > Different table schemas on publisher and subscriber are well supported,\n> > so this must be an edge case of some kind. Please continue investigating.\n>\n> I've been able to find the origin of the crash, but it was a long journey.\n>\n> <debugger hard life>\n>\n> I was unable to debug using gdb record because of this famous error:\n>\n> Process record does not support instruction 0xc5 at address 0x1482758a4b30.\n>\n> Program stopped.\n> __memset_avx2_unaligned_erms ()\n> at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:168\n> 168 ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such\n> file or directory.\n>\n> Trying with rr, I had constant \"stack depth limit exceeded\", even with\n> unlimited one. Does it worth opening a discussion or a wiki page about these\n> tools? 
Peter, it looks like you have some experience with rr, any feedback?\n>\n> Finally, Julien Rouhaud spend some time with me after work hours, a,swering\n> my questions about some parts of the code and pointed me to the excellent\n> backtrace_functions GUC addition few days ago. This finally did the trick to\n> find out what was happening. Many thanks Julien!\n>\n> </debugger hard life>\n>\n> Back to the bug itself. Consider a working logical replication with constant\n> update/insert activity, eg. pgbench running against provider.\n>\n> Now, on the subscriber side, a session issue an \"ALTER TABLE ADD\n> COLUMN\" on a subscribed table, eg. pgbench_branches. A cache invalidation\n> message is then pending for this table.\n>\n> In the meantime, the logical replication worker receive an UPDATE to apply. It\n> opens the local relation using \"logicalrep_rel_open\". It finds the related\n> entry in LogicalRepRelMap is valid, so it does not update its attrmap\n> and directly opens and locks the local relation:\n>\n\n- /* Try to find and lock the relation by name. */\n+ /* Try to find the relation by name */\n relid = RangeVarGetRelid(makeRangeVar(remoterel->nspname,\\\n remoterel->relname, -1),\n- lockmode, true);\n+ NoLock, true);\n\nI think we can't do this because it could lead to locking the wrong\nreloid. See RangeVarGetRelidExtended. It ensures that, after locking\nthe relation (which includes accepting invalidation messages),\nthe reloid is correct. 
I think changing the code in the way you are\nsuggesting can lead to locking incorrect reloid.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 6 Dec 2019 17:30:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Fri, Dec 6, 2019 at 5:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 25, 2019 at 8:25 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> >\n> > On Wed, 6 Nov 2019 14:34:38 +0100\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > > On 2019-11-05 17:29, Jehan-Guillaume de Rorthais wrote:\n> > > > My best bet so far is that logicalrep_relmap_invalidate_cb is not called\n> > > > after the DDL on the subscriber so the relmap cache is not invalidated. So\n> > > > we end up with slot->tts_tupleDescriptor->natts superior than\n> > > > rel->remoterel->natts in slot_store_cstrings, leading to the overflow on\n> > > > attrmap and the sigsev.\n> > >\n> > > It looks like something like that is happening. But it shouldn't.\n> > > Different table schemas on publisher and subscriber are well supported,\n> > > so this must be an edge case of some kind. Please continue investigating.\n> >\n> > I've been able to find the origin of the crash, but it was a long journey.\n> >\n> > <debugger hard life>\n> >\n> > I was unable to debug using gdb record because of this famous error:\n> >\n> > Process record does not support instruction 0xc5 at address 0x1482758a4b30.\n> >\n> > Program stopped.\n> > __memset_avx2_unaligned_erms ()\n> > at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:168\n> > 168 ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such\n> > file or directory.\n> >\n> > Trying with rr, I had constant \"stack depth limit exceeded\", even with\n> > unlimited one. 
Does it worth opening a discussion or a wiki page about these\n> > tools? Peter, it looks like you have some experience with rr, any feedback?\n> >\n> > Finally, Julien Rouhaud spend some time with me after work hours, a,swering\n> > my questions about some parts of the code and pointed me to the excellent\n> > backtrace_functions GUC addition few days ago. This finally did the trick to\n> > find out what was happening. Many thanks Julien!\n> >\n> > </debugger hard life>\n> >\n> > Back to the bug itself. Consider a working logical replication with constant\n> > update/insert activity, eg. pgbench running against provider.\n> >\n> > Now, on the subscriber side, a session issue an \"ALTER TABLE ADD\n> > COLUMN\" on a subscribed table, eg. pgbench_branches. A cache invalidation\n> > message is then pending for this table.\n> >\n> > In the meantime, the logical replication worker receive an UPDATE to apply. It\n> > opens the local relation using \"logicalrep_rel_open\". It finds the related\n> > entry in LogicalRepRelMap is valid, so it does not update its attrmap\n> > and directly opens and locks the local relation:\n> >\n>\n> - /* Try to find and lock the relation by name. */\n> + /* Try to find the relation by name */\n> relid = RangeVarGetRelid(makeRangeVar(remoterel->nspname,\\\n> remoterel->relname, -1),\n> - lockmode, true);\n> + NoLock, true);\n>\n> I think we can't do this because it could lead to locking the wrong\n> reloid. See RangeVarGetRelidExtended. It ensures that after locking\n> the relation (which includes accepting invalidation messages), that\n> the reloid is correct. I think changing the code in the way you are\n> suggesting can lead to locking incorrect reloid.\n>\n\nI have made changes to fix the comment provided. The patch for the\nsame is attached. 
Could not add a test case for this scenario as it is based\non a timing issue.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 13 Dec 2019 12:10:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Fri, Dec 13, 2019 at 12:10 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I have made changes to fix the comment provided. The patch for the\n> same is attached. Could not add a test case for this scenario as it is based\n> on a timing issue.\n> Thoughts?\n>\n\nI agree that this is a timing issue. I also don't see a way to write\na reproducible test for this. However, I could reproduce it via\ndebugger consistently by following the below steps. I have updated a\nfew comments and commit messages in the attached patch.\n\nPeter E., Petr J or anyone else, do you have comments or objections on\nthis patch? If none, then I am planning to commit (and backpatch)\nthis patch in a few days time.\n\nTest steps to reproduce the issue.\nSet up\n---------\nset up master and subscriber nodes.\nIn code, add a while(true) in apply_handle_update() before a call to\nlogicalrep_rel_open(). 
This is to ensure that we can debug the replay\nof Update\noperation on subscriber.\n\nMaster\n-----------\nCreate table t1(c1 int);\nCreate publication pub_t1 for table t1;\nAlter table t1 replica identity full;\n\n\nSubscriber\n-------------\nCreate table t1(c1 int);\nCREATE SUBSCRIPTION sub_t1 CONNECTION 'host=localhost port=5432\ndbname=postgres' PUBLICATION pub_t1;\n\nMaster\n----------\nInsert into t1 values(1); --this will create LogicalRepRelMap entry\nfor t1 on subscriber.\n\nSubscriber\n----------\nSelect * from t1; -- This should display the data inserted in master.\n\nMaster\n----------\nUpdate t1 set c1 = 2 where c1=1;\n\nNow on the subscriber, attach a debugger and debug logicalrep_rel_open\nand stop debugger just before table_open call.\n\nSubscriber\n-----------\nAlter table t1 add c2 int;\n\nNow, continue in debugger, it should crash in slot_store_cstrings()\nbecause the rel->attrmap is not updated.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 16 Dec 2019 15:41:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On 2019-12-16 11:11, Amit Kapila wrote:\n> I agree that this is a timing issue. I also don't see a way to write\n> a reproducible test for this. However, I could reproduce it via\n> debugger consistently by following the below steps. I have updated a\n> few comments and commit messages in the attached patch.\n> \n> Peter E., Petr J or anyone else, do you have comments or objections on\n> this patch? If none, then I am planning to commit (and backpatch)\n> this patch in a few days time.\n\nThe patch seems fine to me. Writing a test seems hard. 
Let's skip it.\n\nThe commit message has a duplicate \"building\"/\"built\" in the first sentence.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Dec 2019 13:27:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Fri, 13 Dec 2019 12:10:07 +0530\nvignesh C <vignesh21@gmail.com> wrote:\n\n> On Fri, Dec 6, 2019 at 5:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 25, 2019 at 8:25 PM Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote: \n> > >\n> > > On Wed, 6 Nov 2019 14:34:38 +0100\n> > > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > > \n> > > > On 2019-11-05 17:29, Jehan-Guillaume de Rorthais wrote: \n> > > > > My best bet so far is that logicalrep_relmap_invalidate_cb is not\n> > > > > called after the DDL on the subscriber so the relmap cache is not\n> > > > > invalidated. So we end up with slot->tts_tupleDescriptor->natts\n> > > > > superior than rel->remoterel->natts in slot_store_cstrings, leading\n> > > > > to the overflow on attrmap and the sigsev. \n> > > >\n> > > > It looks like something like that is happening. But it shouldn't.\n> > > > Different table schemas on publisher and subscriber are well supported,\n> > > > so this must be an edge case of some kind. Please continue\n> > > > investigating. 
\n> > >\n> > > I've been able to find the origin of the crash, but it was a long journey.\n> > >\n> > > <debugger hard life>\n> > >\n> > > I was unable to debug using gdb record because of this famous error:\n> > >\n> > > Process record does not support instruction 0xc5 at address\n> > > 0x1482758a4b30.\n> > >\n> > > Program stopped.\n> > > __memset_avx2_unaligned_erms ()\n> > > at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:168\n> > > 168 ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such\n> > > file or directory.\n> > >\n> > > Trying with rr, I had constant \"stack depth limit exceeded\", even with\n> > > unlimited one. Does it worth opening a discussion or a wiki page about\n> > > these tools? Peter, it looks like you have some experience with rr, any\n> > > feedback?\n> > >\n> > > Finally, Julien Rouhaud spend some time with me after work hours,\n> > > a,swering my questions about some parts of the code and pointed me to the\n> > > excellent backtrace_functions GUC addition few days ago. This finally did\n> > > the trick to find out what was happening. Many thanks Julien!\n> > >\n> > > </debugger hard life>\n> > >\n> > > Back to the bug itself. Consider a working logical replication with\n> > > constant update/insert activity, eg. pgbench running against provider.\n> > >\n> > > Now, on the subscriber side, a session issue an \"ALTER TABLE ADD\n> > > COLUMN\" on a subscribed table, eg. pgbench_branches. A cache invalidation\n> > > message is then pending for this table.\n> > >\n> > > In the meantime, the logical replication worker receive an UPDATE to\n> > > apply. It opens the local relation using \"logicalrep_rel_open\". It finds\n> > > the related entry in LogicalRepRelMap is valid, so it does not update its\n> > > attrmap and directly opens and locks the local relation:\n> > > \n> >\n> > - /* Try to find and lock the relation by name. 
*/\n> > + /* Try to find the relation by name */\n> > relid = RangeVarGetRelid(makeRangeVar(remoterel->nspname,\\\n> > remoterel->relname, -1),\n> > - lockmode, true);\n> > + NoLock, true);\n> >\n> > I think we can't do this because it could lead to locking the wrong\n> > reloid. See RangeVarGetRelidExtended. It ensures that after locking\n> > the relation (which includes accepting invalidation messages), that\n> > the reloid is correct. I think changing the code in the way you are\n> > suggesting can lead to locking incorrect reloid.\n\nSorry for the delay, I couldn't answer earlier.\n\nTo be honest, I was wondering about that. Since we keep in cache the relid and\nuse it as cache invalidation, I thought it might be fragile. But then - as far\nas I could find - the only way to change the relid is to drop and create a new\ntable. I wasn't sure it could really cause a race condition there because of\nthe impact of such commands on logical replication.\n\nBut now, I realize I should have gone all the way through and closed this\npotential bug as well. Thank you.\n\n> I have made changes to fix the comment provided. The patch for the\n> same is attached. Could not add a test case for this scenario is based\n> on timing issue.\n\nThank you for this fix, Vignesh!\n\n\n", "msg_date": "Mon, 16 Dec 2019 16:40:00 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Mon, 16 Dec 2019 13:27:43 +0100\nPeter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-12-16 11:11, Amit Kapila wrote:\n> > I agree that this is a timing issue. I also don't see a way to write\n> > a reproducible test for this. However, I could reproduce it via\n> > debugger consistently by following the below steps. 
I have updated a\n> > few comments and commit messages in the attached patch.\n> > \n> > Peter E., Petr J or anyone else, do you have comments or objections on\n> > this patch? If none, then I am planning to commit (and backpatch)\n> > this patch in a few days time. \n> \n> The patch seems fine to me. Writing a test seems hard. Let's skip it.\n> \n> The commit message has a duplicate \"building\"/\"built\" in the first sentence.\n\nI think the sentence is quite long. I had to re-read it to get it.\n\nWhat about:\n\n This patch allows building the local relmap cache for a subscribed relation\n after processing pending invalidation messages and potential relcache\n updates.\n\nRegards,\n\n\n", "msg_date": "Mon, 16 Dec 2019 16:46:39 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Mon, Dec 16, 2019 at 9:16 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Mon, 16 Dec 2019 13:27:43 +0100\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>\n> > On 2019-12-16 11:11, Amit Kapila wrote:\n> > > I agree that this is a timing issue. I also don't see a way to write\n> > > a reproducible test for this. However, I could reproduce it via\n> > > debugger consistently by following the below steps. I have updated a\n> > > few comments and commit messages in the attached patch.\n> > >\n> > > Peter E., Petr J or anyone else, do you have comments or objections on\n> > > this patch? If none, then I am planning to commit (and backpatch)\n> > > this patch in a few days time.\n> >\n> > The patch seems fine to me. Writing a test seems hard. Let's skip it.\n> >\n\nOkay.\n\n> > The commit message has a duplicate \"building\"/\"built\" in the first sentence.\n>\n> I think the sentence is quite long. 
I had to re-read it to get it.\n>\n> What about:\n>\n> This patch allows building the local relmap cache for a subscribed relation\n> after processing pending invalidation messages and potential relcache\n> updates.\n>\n\nAttached patch with updated commit message based on suggestions. I am\nplanning to commit this tomorrow unless there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 17 Dec 2019 10:09:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Tue, Dec 17, 2019 at 10:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 16, 2019 at 9:16 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> >\n> > On Mon, 16 Dec 2019 13:27:43 +0100\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > > On 2019-12-16 11:11, Amit Kapila wrote:\n> > > > I agree that this is a timing issue. I also don't see a way to write\n> > > > a reproducible test for this. However, I could reproduce it via\n> > > > debugger consistently by following the below steps. I have updated a\n> > > > few comments and commit messages in the attached patch.\n> > > >\n> > > > Peter E., Petr J or anyone else, do you have comments or objections on\n> > > > this patch? If none, then I am planning to commit (and backpatch)\n> > > > this patch in a few days time.\n> > >\n> > > The patch seems fine to me. Writing a test seems hard. Let's skip it.\n> > >\n>\n> Okay.\n>\n> > > The commit message has a duplicate \"building\"/\"built\" in the first sentence.\n> >\n> > I think the sentence is quite long. 
I had to re-read it to get it.\n> >\n> > What about:\n> >\n> > This patch allows building the local relmap cache for a subscribed relation\n> > after processing pending invalidation messages and potential relcache\n> > updates.\n> >\n>\n> Attached patch with updated commit message based on suggestions. I am\n> planning to commit this tomorrow unless there are more comments.\n>\n\nWhile testing the patch on back versions, I found that the patch does\nnot apply on PG 11 & PG 10 branch. Attached patch has the changes for\nPG 11 & PG 10 branch. Only difference in the patch was that table_open\nneeded to be changed to heap_open. I have verified the patch on back\nbranches and found it to be working. For PG 12 & current the previous\npatch that Amit need to be used, I'm not reattaching here.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 17 Dec 2019 18:00:57 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Tue, Dec 17, 2019 at 6:01 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Dec 17, 2019 at 10:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Attached patch with updated commit message based on suggestions. I am\n> > planning to commit this tomorrow unless there are more comments.\n> >\n>\n> While testing the patch on back versions, I found that the patch does\n> not apply on PG 11 & PG 10 branch. Attached patch has the changes for\n> PG 11 & PG 10 branch. Only difference in the patch was that table_open\n> needed to be changed to heap_open. I have verified the patch on back\n> branches and found it to be working. 
For PG 12 & current, the previous\npatch that Amit posted needs to be used, so I'm not reattaching it here.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Dec 2019 08:46:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: segmentation fault when cassert enabled" }, { "msg_contents": "On Wed, 18 Dec 2019 08:46:01 +0530\nAmit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Dec 17, 2019 at 6:01 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Dec 17, 2019 at 10:09 AM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote: \n> > >\n> > > Attached patch with updated commit message based on suggestions. I am\n> > > planning to commit this tomorrow unless there are more comments.\n> > > \n> >\n> > While testing the patch on back versions, I found that the patch does\n> > not apply on PG 11 & PG 10 branch. Attached patch has the changes for\n> > PG 11 & PG 10 branch. Only difference in the patch was that table_open\n> > needed to be changed to heap_open. I have verified the patch on back\n> > branches and found it to be working. For PG 12 & current, the previous\n> > patch that Amit posted needs to be used, so I'm not reattaching it here.\n> > \n> \n> Pushed.\n\nThanks!\n\n\n", "msg_date": "Thu, 19 Dec 2019 14:14:10 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: segmentation fault when cassert enabled" } ]
[ { "msg_contents": "Under MinGW, when compiling the ecpg test files, you get these warnings:\n\nsqlda.pgc: In function 'dump_sqlda':\nsqlda.pgc:44:11: warning: unknown conversion type character 'l' in format [-Wformat=]\n printf(\"name sqlda descriptor: '%s' value %lld\\n\", sqlda->sqlvar[i].sqlname.data, *(long long int *)sqlda->sqlvar[i].sqldata);\nsqlda.pgc:44:11: warning: too many arguments for format [-Wformat-extra-args]\nsqlda.pgc:44:11: warning: unknown conversion type character 'l' in format [-Wformat=]\nsqlda.pgc:44:11: warning: too many arguments for format [-Wformat-extra-args]\n\nThese files don't use our printf replacement or the c.h porting layer,\nso unless we want to start doing that, I propose the attached patch to\ndetermine the appropriate format conversion the hard way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 25 Oct 2019 21:32:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "MinGW compiler warnings in ecpg tests" }, { "msg_contents": "> These files don't use our printf replacement or the c.h porting\n> layer,\n> so unless we want to start doing that, I propose the attached patch\n> to\n> determine the appropriate format conversion the hard way.\n\nI don't think such porting efforts are worth it for a single test case,\nor in other words, if you ask me go ahead with your patch.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! 
Use Debian GNU/Linux, PostgreSQL\n\n\n\n", "msg_date": "Sat, 26 Oct 2019 10:40:54 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: MinGW compiler warnings in ecpg tests" }, { "msg_contents": "On 2019-10-26 10:40, Michael Meskes wrote:\n>> These files don't use our printf replacement or the c.h porting\n>> layer,\n>> so unless we want to start doing that, I propose the attached patch\n>> to\n>> determine the appropriate format conversion the hard way.\n> \n> I don't think such porting efforts are worth it for a single test case,\n> or in other words, if you ask me go ahead with your patch.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 29 Oct 2019 09:40:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: MinGW compiler warnings in ecpg tests" }, { "msg_contents": "Dear Peter, Michael,\r\n\r\nSorry for reviving the old thread. While trying to build postgres on msys2 by meson,\r\nI faced the same warning. The OS is Windows 10.\r\n\r\n```\r\n$ ninja\r\n[2378/2402] Compiling C object src/interfaces/ecpg/test/sql/sqlda.exe.p/meson-generated_.._sqlda.c.obj\r\n../postgres/src/interfaces/ecpg/test/sql/sqlda.pgc: In function 'dump_sqlda':\r\n../postgres/src/interfaces/ecpg/test/sql/sqlda.pgc:45:33: warning: format '%d' expects argument of type 'int', but argument 3 has type 'long long int' [-Wformat=]\r\n 45 | \"name sqlda descriptor: '%s' value %I64d\\n\",\r\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n......\r\n 49 | sqlda->sqlvar[i].sqlname.data, *(long long int *)sqlda->sqlvar[i].sqldata);\r\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n | |\r\n | long long int\r\n```\r\n\r\n\r\nBefore building, I did below steps:\r\n\r\n1. Installed required software listed in [1].\r\n2. 
ran `meson setup -Dcassert=true -Ddebug=true /path/to/builddir`\r\n3. moved to /path/to/builddir\r\n4. ran `ninja`\r\n5. got above warning\r\n\r\nThe attached file summarizes the result of the meson command, which was output at the end of it.\r\nAlso, below are the versions of meson/ninja.\r\n\r\n```\r\n$ ninja --version\r\n1.11.1\r\n$ meson -v\r\n1.2.3\r\n```\r\n\r\nI was not quite sure about the Windows build, but I could see that the gcc compiler was\r\nused here. Does it mean that the compiler might not like the format string \"%I64d\"?\r\nI modified it like below and it could be compiled without warnings.\r\n\r\n```\r\n--- a/src/interfaces/ecpg/test/sql/sqlda.pgc\r\n+++ b/src/interfaces/ecpg/test/sql/sqlda.pgc\r\n@@ -41,7 +41,7 @@ dump_sqlda(sqlda_t *sqlda)\r\n break;\r\n case ECPGt_long_long:\r\n printf(\r\n-#ifdef _WIN32\r\n+#if !defined(__GNUC__)\r\n \"name sqlda descriptor: '%s' value %I64d\\n\",\r\n #else\r\n \"name sqlda descriptor: '%s' value %lld\\n\",\r\n\r\n```\r\n\r\n[1]: https://www.postgresql.org/message-id/9f4f22be-f9f1-b350-bc06-521226b87f7a%40dunslane.net\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Fri, 10 Nov 2023 07:59:49 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: MinGW compiler warnings in ecpg tests" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16079\nLogged by: Yudhveer Kandukuri\nEmail address: k.yudhveer@gmail.com\nPostgreSQL version: 10.10\nOperating system: UBUNTU\nDescription: \n\nAs your team mentioned that LDAP process is not secured compared to the\nGSSAPI authentication.\r\n\r\nCan you clarify me this question, whenever the client provide his\ncredentials to connect to the PostgreSQL server it will authenticated\nagainst the LDAP Server and then LDAP will direct the client connection to\nthe Postgres server. But the user credentials will not be sent to\nPostgresql server to authenticate.\r\n\r\nBecause your team mentioned this statement \" it's much more secure than\nusing LDAP-based auth and avoids the user's password being\r\nsent to the PostgreSQL server (where it could be compromised if the\nPG process is compromised).\"\r\n\r\nI am having user defined in the LDAP server with all the credentials and\nalso same user in the postgres server.", "msg_date": "Fri, 25 Oct 2019 23:16:25 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* PG Bug reporting form (noreply@postgresql.org) wrote:\n> As your team mentioned that LDAP process is not secured compared to the\n> GSSAPI authentication.\n\nNo, it isn't.\n\n> Can you clarify me this question, whenever the client provide his\n> credentials to connect to the PostgreSQL server it will authenticated\n> against the LDAP Server and then LDAP will direct the client connection to\n> the Postgres server. But the user credentials will not be sent to\n> Postgresql server to authenticate.\n\nUh, the user's credentials certainly are sent to the PG server.\n\nHere's a nice short patch that just prints out the user's password after\nthe server gets it when using LDAP auth. 
You'll see the results like\nthis in the log:\n\nusers password is: hello\n\n> Because your team mentioned this statement \" it's much more secure than\n> using LDAP-based auth and avoids the user's password being\n> sent to the PostgreSQL server (where it could be compromised if the\n> PGprocess is compromised).\"\n\nYes, that's correct, if the PG server is compromised then the user's\ncredentials, when using LDAP auth, can be captured.\n\n> I am having user defined in the LDAP server with all the credentails and\n> also same user in the postgres server.\n\nI'm not sure what you're suggesting here, but the way LDAP auth in PG\nworks is that the user's password is sent to the PG server and then the\nPG server turns around and tries to use it to authenticate to the LDAP\nserver and, if successful, the authentication is allowed, and if\nunsuccessful, the authentication is denied. When using LDAP auth, we\ndon't look at the rolpassword column in pg_authid at all.\n\nI do think it'd be a useful improvement to add a way to control who is\nallowed to access a PG server (aka- authorization), perhaps through an\nLDAP query to check it, while using Kerberos/GSSAPI authentication to\nactually do the authentication, but there isn't a way to do that with PG\ntoday.\n\nThanks,\n\nStephen", "msg_date": "Mon, 28 Oct 2019 11:47:54 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "On Tue, Oct 29, 2019 at 4:48 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Uh, the user's credentials certainly are sent to the PG server.\n\nPerhaps we should log a warning when PostgreSQL has received a\npassword over the network without SSL. 
Perhaps we should log another\nwarning when PostgreSQL has sent a password over the network without\nSSL.\n\n> users password is: hello\n\nThe fact that you can steal the password from PostgreSQL's memory\nseems like a next level problem to me, but the fact that it's easy to\nconfigure PostgreSQL in a way that sends cleartext passwords over the\nnetwork a couple of times seems to be a bigger problem to me.\n\nHere's a demonstration. I run make -C src/test/ldap check, just to\nget a working slapd setup, and then I start it like so:\n\n/usr/local/libexec/slapd -f slapd.conf -h ldap://localhost:8888\n\nI put this into my pg_hba.conf:\n\nhost postgres test1 127.0.0.1/32 ldap\nldapurl=\"ldap://localhost:8888/dc=example,dc=net?uid?sub\"\n\nI trace my postmaster + children with truss -p PID -s 1024 -f, and\nthen I try to log in with psql -h localhost -p 8888 postgres test1,\nand give the password \"foobar\". Here is my password, which travelled\nover the network in cleartext twice (into PostgreSQL, and then out to\nslapd):\n\n38412: accept(6,{ AF_INET 127.0.0.1:12891 },0x801d07118) = 9 (0x9)\n...\n38412: fork() = 38459 (0x963b)\n...\n38459: recvfrom(9,\"p\\0\\0\\0\\vfoobar\\0\",8192,0,NULL,0x0) = 12 (0xc)\n...\n38459: connect(4,{ AF_INET 127.0.0.1:8888 },16) = 0 (0x0)\n38459: write(4,\"0-\\^B\\^A\\^A`(\\^B\\^A\\^C\\^D\\^[uid=test1,dc=example,dc=net\\M^@\\^Ffoobar\",47)\n= 47 (0x2f)\n\n\n", "msg_date": "Fri, 15 Nov 2019 17:41:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "On Fri, Nov 15, 2019 at 5:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, Oct 29, 2019 at 4:48 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Uh, the user's credentials certainly are sent to the PG server.\n>\n> Perhaps we should log a warning when PostgreSQL has received a\n> password over the network without SSL. 
Perhaps we should log another\n> warning when PostgreSQL has sent a password over the network without\n> SSL.\n>\n\nFor the old plaintext \"password\" method, we log a warning when we parse the\nconfiguration file.\n\nMaybe we should do the same for LDAP (and RADIUS)? This seems like a better\nplace to put it than to log it at every time it's received?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sat, 16 Nov 2019 14:29:58 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Tue, Oct 29, 2019 at 4:48 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Uh, the user's credentials certainly are sent to the PG server.\n> \n> Perhaps we should log a warning when PostgreSQL has received a\n> password over the network without SSL. 
Perhaps we should log another\n> warning when PostgreSQL has sent a password over the network without\n> SSL.\n\nI like the idea of having these warnings, I don't like the idea of\nlimiting it to when SSL is or isn't being used.\n\n> > users password is: hello\n> \n> The fact that you can steal the password from PostgreSQL's memory\n> seems like a next level problem to me, but the fact that it's easy to\n> configure PostgreSQL in a way that sends cleartext passwords over the\n> network a couple of times seems to be a bigger problem to me.\n\nBoth are issues and clearly users are confused when they use an\nenterprise authentication system (eg: Active Directory) and configure PG\nto use it (\"ldap\") and expect us to do things intelligently like other\nsimilar products do (SQL Server).\n\nIs it our fault that they don't realize that they aren't configuring PG\nproperly in an AD environment when they use the LDAP auth method? Maybe\nnot *technically*, but we sure don't make it very clear that the LDAP\nauth method is *not* the same as what they get with something like a\nSQL Server instance and that it's an poor way of doing authentication\nwhen you're in an Active Directory environment.\n\n> Here's a demonstration. I run make -C src/test/ldap check, just to\n> get a working slapd setup, and then I start it like so:\n> \n> /usr/local/libexec/slapd -f slapd.conf -h ldap://localhost:8888\n> \n> I put this into my pg_hba.conf:\n> \n> host postgres test1 127.0.0.1/32 ldap\n> ldapurl=\"ldap://localhost:8888/dc=example,dc=net?uid?sub\"\n> \n> I trace my postmaster + children with truss -p PID -s 1024 -f, and\n> then I try to log in with psql -h localhost -p 8888 postgres test1,\n> and give the password \"foobar\". 
Here is my password, which travelled\n> over the network in cleartext twice (into PostgreSQL, and then out to\n> slapd):\n> \n> 38412: accept(6,{ AF_INET 127.0.0.1:12891 },0x801d07118) = 9 (0x9)\n> ...\n> 38412: fork() = 38459 (0x963b)\n> ...\n> 38459: recvfrom(9,\"p\\0\\0\\0\\vfoobar\\0\",8192,0,NULL,0x0) = 12 (0xc)\n> ...\n> 38459: connect(4,{ AF_INET 127.0.0.1:8888 },16) = 0 (0x0)\n> 38459: write(4,\"0-\\^B\\^A\\^A`(\\^B\\^A\\^C\\^D\\^[uid=test1,dc=example,dc=net\\M^@\\^Ffoobar\",47)\n> = 47 (0x2f)\n\nYes, this is indeed also terrible, heh.\n\nThanks,\n\nStephen", "msg_date": "Tue, 3 Dec 2019 14:58:12 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Fri, Nov 15, 2019 at 5:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> > On Tue, Oct 29, 2019 at 4:48 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Uh, the user's credentials certainly are sent to the PG server.\n> >\n> > Perhaps we should log a warning when PostgreSQL has received a\n> > password over the network without SSL. Perhaps we should log another\n> > warning when PostgreSQL has sent a password over the network without\n> > SSL.\n> \n> For the old plaintext \"password\" method, we log a warning when we parse the\n> configuration file.\n> \n> Maybe we should do the same for LDAP (and RADIUS)? This seems like a better\n> place to put it than to log it at every time it's received?\n\nSeems like a reasonable approach to me though we should probably also\ninclude details in the documentation around what this warning means,\nexactly, since we probably can't write the full paragraph or more that\nwe'd need to inside the warning itself.\n\nSorry though.. where do we log that warning you're talking about wrt\nthe 'password' method? 
I just started a 13devel with 'password'\nconfigured in pg_hba.conf and didn't see any warnings...\n\n(commit b5273943679d22f58f1e1e269ad75e791172f557)\n\nI'm all for adding a warning when any of these methods is used, maybe\nwith an optional override of \"yes, I know this is bad but I don't care\".\n\nThanks,\n\nStephen", "msg_date": "Tue, 3 Dec 2019 15:10:02 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Fri, Nov 15, 2019 at 5:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Tue, Oct 29, 2019 at 4:48 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Uh, the user's credentials certainly are sent to the PG server.\n> >\n> > Perhaps we should log a warning when PostgreSQL has received a\n> > password over the network without SSL. Perhaps we should log another\n> > warning when PostgreSQL has sent a password over the network without\n> > SSL.\n> \n> For the old plaintext \"password\" method, we log a warning when we parse the\n> configuration file.\n> \n> Maybe we should do the same for LDAP (and RADIUS)? This seems like a better\n> place to put it than to log it at every time it's received?\n\nA dollar short and a year late, but ... +1.\n\nThanks,\n\nStephen", "msg_date": "Sun, 20 Dec 2020 19:58:26 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "On Sun, Dec 20, 2020 at 7:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> * Magnus Hagander (magnus@hagander.net) wrote:\n>\n>\nChanged from bugs to hackers.\n\n\n> > For the old plaintext \"password\" method, we log a warning when we parse\n> the\n> > configuration file.\n>\n\nLike Stephen, I don't see such a warning getting logged.\n\n\n> >\n> > Maybe we should do the same for LDAP (and RADIUS)? 
This seems like a\n> better\n> place to put it than to log it at every time it's received?\n>\n> A dollar short and a year late, but ... +1.\n\n\nI would suggest going further. I would make the change on the client side,\nand have libpq refuse to send unhashed passwords without having an\nenvironment variable set which allows it. (Also, change the client\nbehavior so it defaults to verify-full when PGSSLMODE is not set.)\n\nWhat is the value of logging on the server side? I can change the setting\nfrom password to md5 or from ldap to gss, when I notice the log message.\nBut once compromised or during a MITM attack, the bad guy will just set it\nback to the unsafe form and the client will silently go along with it.\n\nCheers,\n\nJeff", "msg_date": "Mon, 21 Dec 2020 12:26:17 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> On Sun, Dec 20, 2020 at 7:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> * Magnus Hagander (magnus@hagander.net) wrote:\n>>> Maybe we should do the same for LDAP (and RADIUS)? This seems like a\n>>> better place to put it than to log it at every time it's received?\n\n>> A dollar short and a year late, but ... +1.\n\n> I would suggest going further. I would make the change on the client side,\n> and have libpq refuse to send unhashed passwords without having an\n> environment variable set which allows it.\n\nAs noted, that would break LDAP and RADIUS auth methods; likely also PAM.\n\n> What is the value of logging on the server side?\n\nI do agree with this point, but mostly on the grounds of \"nobody reads\nthe server log\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Dec 2020 13:31:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Jeff Janes <jeff.janes@gmail.com> writes:\n> > On Sun, Dec 20, 2020 at 7:58 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >> * Magnus Hagander (magnus@hagander.net) wrote:\n> >>> Maybe we should do the same for LDAP (and RADIUS)? This seems like a\n> >>> better place to put it than to log it at every time it's received?\n> \n> >> A dollar short and a year late, but ... +1.\n> \n> > I would suggest going further. 
I would make the change on the client side,\n> > and have libpq refuse to send unhashed passwords without having an\n> > environment variable set which allows it.\n> \n> As noted, that would break LDAP and RADIUS auth methods; likely also PAM.\n\nWhich would be an altogether good thing as all of those end up exposing\nsensitive information should the server be compromised and a user uses\none of them to log in.\n\nThe point would be to make it clear to the user, while having an escape\nhatch if necessary, that they're sending their password (or pin in the\nRADIUS case) to the server.\n\n> > What is the value of logging on the server side?\n> \n> I do agree with this point, but mostly on the grounds of \"nobody reads\n> the server log\".\n\nI agree that doing this server side really isn't all that helpful.\n\nThanks,\n\nStephen", "msg_date": "Mon, 21 Dec 2020 13:35:11 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Jeff Janes <jeff.janes@gmail.com> writes:\n>>> I would suggest going further. I would make the change on the client side,\n>>> and have libpq refuse to send unhashed passwords without having an\n>>> environment variable set which allows it.\n\n>> As noted, that would break LDAP and RADIUS auth methods; likely also PAM.\n\n> Which would be an altogether good thing as all of those end up exposing\n> sensitive information should the server be compromised and a user uses\n> one of them to log in.\n\nHm. I'm less concerned about that scenario than about somebody snooping\nthe on-the-wire traffic. If we're going to invent a connection setting\nfor this, I'd say that in addition to \"ok to send cleartext password\"\nand \"never ok to send cleartext password\", there should be a setting for\n\"send cleartext password only if connection is encrypted\". 
Possibly\nthat should even be the default.\n\n(I guess Unix-socket connections would be an exception, since we never\nencrypt those.)\n\nBTW, do we have a client-side setting to insist that passwords not be\nsent in MD5 hashing either? A person who is paranoid about this would\nlikely want to disable that code path as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Dec 2020 13:44:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "On Mon, Dec 21, 2020 at 7:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Jeff Janes <jeff.janes@gmail.com> writes:\n> >>> I would suggest going further. I would make the change on the client side,\n> >>> and have libpq refuse to send unhashed passwords without having an\n> >>> environment variable set which allows it.\n>\n> >> As noted, that would break LDAP and RADIUS auth methods; likely also PAM.\n>\n> > Which would be an altogether good thing as all of those end up exposing\n> > sensitive information should the server be compromised and a user uses\n> > one of them to log in.\n>\n> Hm. I'm less concerned about that scenario than about somebody snooping\n> the on-the-wire traffic. If we're going to invent a connection setting\n> for this, I'd say that in addition to \"ok to send cleartext password\"\n> and \"never ok to send cleartext password\", there should be a setting for\n> \"send cleartext password only if connection is encrypted\". Possibly\n> that should even be the default.\n>\n> (I guess Unix-socket connections would be an exception, since we never\n> encrypt those.)\n\n\"send cleartext password only if connection is secure\", and define\nsecure as being tls encrypted, gss encrypted, or unix socket.\n\n\n> BTW, do we have a client-side setting to insist that passwords not be\n> sent in MD5 hashing either? 
A person who is paranoid about this would\n> likely want to disable that code path as well.\n\nI don't think we do, and we possibly should. You can require channel\nbinding which will require scram which solves the problem, but it does\nso only for scram.\n\nIIRC we've discussed having a parameter that says \"allowed\nauthentication methods\" on the client as well, but I don't believe it\nhas been built. But it wouldn't be bad to be able to for example force\nthe client to only attempt gssapi auth, regardless of what the server\nasks for, and just fail if it's not there.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 21 Dec 2020 19:52:42 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Jeff Janes <jeff.janes@gmail.com> writes:\n> >>> I would suggest going further. I would make the change on the client side,\n> >>> and have libpq refuse to send unhashed passwords without having an\n> >>> environment variable set which allows it.\n> \n> >> As noted, that would break LDAP and RADIUS auth methods; likely also PAM.\n> \n> > Which would be an altogether good thing as all of those end up exposing\n> > sensitive information should the server be compromised and a user uses\n> > one of them to log in.\n> \n> Hm. I'm less concerned about that scenario than about somebody snooping\n> the on-the-wire traffic. If we're going to invent a connection setting\n> for this, I'd say that in addition to \"ok to send cleartext password\"\n> and \"never ok to send cleartext password\", there should be a setting for\n> \"send cleartext password only if connection is encrypted\". 
Possibly\n> that should even be the default.\n\nI'd still strongly advocate for \"never ok to send cleartext password\" to\nbe the default, otherwise we put this out and then everyone ends up\nhaving to include \"set this on all your clients to never allow it\" in\ntheir hardening guidelines. That's really not ideal.\n\nThat said, having such an option would certainly be better than not\nhaving any reasonable way on the client side to make sure that the\nuser's password isn't being sent to the server.\n\n> (I guess Unix-socket connections would be an exception, since we never\n> encrypt those.)\n\nFor the middle-ground \"I don't care if the server sees my password, but\ndon't want someone on the network seeing it\" it would seem unix sockets\nwould be alright.\n\n> BTW, do we have a client-side setting to insist that passwords not be\n> sent in MD5 hashing either? A person who is paranoid about this would\n> likely want to disable that code path as well.\n\nNo, but it would surely be good if we did... or we could just rip out\nthe md5 support entirely.\n\n(Yes, I appreciate that the position I'm taking here isn't likely to be\npopular and I'm not going to completely say no to compromises, but every\nkind of compromise like these invites users to end up doing the insecure\nthing; the more difficult we make it to do the insecure thing the better\noverall for security.)\n\nThanks,\n\nStephen", "msg_date": "Mon, 21 Dec 2020 13:53:19 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Dec 21, 2020 at 7:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > BTW, do we have a client-side setting to insist that passwords not be\n> > sent in MD5 hashing either? 
A person who is paranoid about this would\n> > likely want to disable that code path as well.\n> \n> I don't think we do, and we possibly should. You can require channel\n> binding which will require scram which solves the problem, but it does\n> so only for scram.\n> \n> IIRC we've discussed having a parameter that says \"allowed\n> authentication methods\" on the client as well, but I don't believe it\n> has been built. But it wouldn't be bad to be able to for example force\n> the client to only attempt gssapi auth, regardless of what the server\n> asks for, and just fail if it's not there.\n\nThe client is able to require a GSS encrypted connection, and a savy\nuser will realize that they should 'kinit' (or equivilant) locally and\nnever provide their password explicitly to the psql (or equivilant)\ncommand, but that's certainly less than ideal.\n\nHaving a way to explicitly tell libpq what auth methods are acceptable\nwas discussed previously and does generally seem like a good idea, as\notherwise there's a lot of risk of what are essentially downgrade\nattacks.\n\nThanks,\n\nStephen", "msg_date": "Mon, 21 Dec 2020 14:06:08 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "On Mon, Dec 21, 2020 at 8:06 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Magnus Hagander (magnus@hagander.net) wrote:\n> > On Mon, Dec 21, 2020 at 7:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > BTW, do we have a client-side setting to insist that passwords not be\n> > > sent in MD5 hashing either? A person who is paranoid about this would\n> > > likely want to disable that code path as well.\n> >\n> > I don't think we do, and we possibly should. 
You can require channel\n> > binding which will require scram which solves the problem, but it does\n> > so only for scram.\n> >\n> > IIRC we've discussed having a parameter that says \"allowed\n> > authentication methods\" on the client as well, but I don't believe it\n> > has been built. But it wouldn't be bad to be able to for example force\n> > the client to only attempt gssapi auth, regardless of what the server\n> > asks for, and just fail if it's not there.\n>\n> The client is able to require a GSS encrypted connection, and a savy\n> user will realize that they should 'kinit' (or equivilant) locally and\n> never provide their password explicitly to the psql (or equivilant)\n> command, but that's certainly less than ideal.\n\nSure, but even if you do, then if you connect to a server that has gss\nsupport but is configured for password auth, it will perform password\nauth.\n\n\n> Having a way to explicitly tell libpq what auth methods are acceptable\n> was discussed previously and does generally seem like a good idea, as\n> otherwise there's a lot of risk of what are essentially downgrade\n> attacks.\n\nThat was my point exactly..\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 21 Dec 2020 20:11:32 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Dec 21, 2020 at 8:06 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Magnus Hagander (magnus@hagander.net) wrote:\n> > > On Mon, Dec 21, 2020 at 7:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > BTW, do we have a client-side setting to insist that passwords not be\n> > > > sent in MD5 hashing either? 
A person who is paranoid about this would\n> > > > likely want to disable that code path as well.\n> > >\n> > > I don't think we do, and we possibly should. You can require channel\n> > > binding which will require scram which solves the problem, but it does\n> > > so only for scram.\n> > >\n> > > IIRC we've discussed having a parameter that says \"allowed\n> > > authentication methods\" on the client as well, but I don't believe it\n> > > has been built. But it wouldn't be bad to be able to for example force\n> > > the client to only attempt gssapi auth, regardless of what the server\n> > > asks for, and just fail if it's not there.\n> >\n> > The client is able to require a GSS encrypted connection, and a savy\n> > user will realize that they should 'kinit' (or equivilant) locally and\n> > never provide their password explicitly to the psql (or equivilant)\n> > command, but that's certainly less than ideal.\n> \n> Sure, but even if you do, then if you connect to a server that has gss\n> support but is configured for password auth, it will perform password\n> auth.\n\nRight, and that's bad. Think we agree on that. I was just saying that\nsomeone who understanding how GSS works wouldn't actually provide their\npassword at that point. Trusting to that is definitely not sufficient\nthough.\n\n> > Having a way to explicitly tell libpq what auth methods are acceptable\n> > was discussed previously and does generally seem like a good idea, as\n> > otherwise there's a lot of risk of what are essentially downgrade\n> > attacks.\n> \n> That was my point exactly..\n\nYes, it was my intention to agree with you on this. :)\n\nThanks,\n\nStephen", "msg_date": "Mon, 21 Dec 2020 14:13:39 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "On Mon, 2020-12-21 at 13:44 -0500, Tom Lane wrote:\n> Hm. 
I'm less concerned about that scenario than about somebody\n> snooping\n> the on-the-wire traffic. If we're going to invent a connection\n> setting\n> for this, I'd say that in addition to \"ok to send cleartext password\"\n> and \"never ok to send cleartext password\", there should be a setting\n> for\n> \"send cleartext password only if connection is encrypted\". Possibly\n> that should even be the default.\n\nThere was a fair amount of related discussion here:\n\n\nhttps://www.postgresql.org/message-id/227015d8417f2b4fef03f8966dbfa5cbcc4f44da.camel%40j-davis.com\n\nMy feeling after all of that discussion is that the next step would be\nto move to some kind of negotiation between client and server about\nwhich methods are mutually acceptable. Right now, the protocol is\nstructured around the server driving the authentication process, and\nthe most the client can do is abort.\n\n> BTW, do we have a client-side setting to insist that passwords not be\n> sent in MD5 hashing either? A person who is paranoid about this\n> would\n> likely want to disable that code path as well.\n\nchannel_binding=require is one way to do it, but it also requires ssl.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 03 Jun 2021 11:02:56 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" }, { "msg_contents": "On Thu, Jun 03, 2021 at 11:02:56AM -0700, Jeff Davis wrote:\n> My feeling after all of that discussion is that the next step would be\n> to move to some kind of negotiation between client and server about\n> which methods are mutually acceptable. 
Right now, the protocol is\n> structured around the server driving the authentication process, and\n> the most the client can do is abort.\n\nFWIW, this sounds very similar to what SASL solves when we try to\nselect a mechanism name, plus some filtering applied in the backend\nwith some HBA rule or some filtering in the frontend with a connection\nparameter doing the restriction, like channel_binding here.\n\nIntroducing a new libpq parameter that allows the user to select which\nauthentication methods are allowed has been discussed in the past, I\nremember vaguely writing/reviewing a patch doing that actually..\n--\nMichael", "msg_date": "Fri, 4 Jun 2021 10:09:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BUG #16079: Question Regarding the BUG #16064" } ]
[ { "msg_contents": "Hi, \n\nI found a missing column value in the pg_stat_progress_cluster view document.\nI read the src/backend/catalog/system_views.sql file, and there seems to be a possibility that 'writing new heap' is output in the 'phase' column.\nThe attached patch adds a description of the 'writing new heap' value output in the 'phase' column.\n\nRegards,\nNoriyoshi Shinoda", "msg_date": "Sat, 26 Oct 2019 05:13:49 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "[DOC] Fix for the missing pg_stat_progress_cluster view phase column\n value" }, { "msg_contents": "On Sat, Oct 26, 2019 at 05:13:49AM +0000, Shinoda, Noriyoshi (PN Japan A&PS Delivery) wrote:\n> The attached patch adds a description of the 'writing new heap'\n> value output in the 'phase' column. \n\nIndeed, fixed. Thanks for the patch.\n--\nMichael", "msg_date": "Mon, 28 Oct 2019 14:25:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [DOC] Fix for the missing pg_stat_progress_cluster view phase\n column value" }, { "msg_contents": "At Sat, 26 Oct 2019 05:13:49 +0000, \"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\" <noriyoshi.shinoda@hpe.com> wrote in \n> I found a missing column value in the pg_stat_progress_cluster view document.\n> I read the src/backend/catalog/system_views.sql file, and there seems to be a possibility that 'writing new heap' is output in the 'phase' column.\n> The attached patch adds a description of the 'writing new heap' value output in the 'phase' column.\n\nGood catch!\n\nBy the way the table mentions the phases common to CLUSTER and VACUUM FULL. 
I wonder why some of them are described as \"CLUSTER is\" and others are \"The command is\"..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Oct 2019 14:26:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [DOC] Fix for the missing pg_stat_progress_cluster view phase\n column value" }, { "msg_contents": "On Mon, Oct 28, 2019 at 02:26:39PM +0900, Kyotaro Horiguchi wrote:\n> By the way the table mentions the phases common to CLUSTER and\n> VACUUM FULL. I wonder why some of them are described as \"CLUSTER is\"\n> and others are \"The command is\".. \n\nBecause VACUUM FULL does not use the sort-and-scan mode, no?\n--\nMichael", "msg_date": "Mon, 28 Oct 2019 15:22:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [DOC] Fix for the missing pg_stat_progress_cluster view phase\n column value" }, { "msg_contents": "Thank you for your response.\n\n> By the way the table mentions the phases common to CLUSTER and VACUUM FULL. I wonder why some of them are described as \"CLUSTER is\" and others are \"The command is\"..\n\nThe 'writing new heap' phase seems to appear only when the CLUSTER statement is executed. 
When I read the table_relation_copy_for_cluster function, it seems to be a phase that is executed only during sorting.\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Kyotaro Horiguchi [mailto:horikyota.ntt@gmail.com] \nSent: Monday, October 28, 2019 2:27 PM\nTo: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>\nCc: pgsql-hackers@lists.postgresql.org\nSubject: Re: [DOC] Fix for the missing pg_stat_progress_cluster view phase column value\n\nAt Sat, 26 Oct 2019 05:13:49 +0000, \"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\" <noriyoshi.shinoda@hpe.com> wrote in \n> I found a missing column value in the pg_stat_progress_cluster view document.\n> I read the src/backend/catalog/system_views.sql file, there seems to be a possibility that 'writing new heap' is output in the 'phase' column.\n> The attached patch adds a description of the 'writing new heap' value output in the 'phase' column.\n\nGood catch!\n\nBy the way the table mentions the phases common to CLUSTER and VACUUM FULL. I wonder why some of them are described as \"CLUSTER is\" and others are \"The command is\"..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Oct 2019 08:20:27 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": true, "msg_subject": "RE: [DOC] Fix for the missing pg_stat_progress_cluster view phase\n column value" } ]
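The phase names discussed in the thread above come from a CASE mapping over the raw progress parameter in src/backend/catalog/system_views.sql. A hedged sketch of that mapping as a lookup table — the numeric values are assumed from PG 12-era sources and may differ by release, so treat them as illustrative:

```python
# Approximate mapping of the CLUSTER/VACUUM FULL progress parameter to
# the phase names reported by pg_stat_progress_cluster (ordering assumed).

CLUSTER_PHASES = {
    0: "initializing",
    1: "seq scanning heap",
    2: "index scanning heap",
    3: "sorting tuples",
    4: "writing new heap",        # the value the doc patch above adds
    5: "swapping relation files",
    6: "rebuilding index",
    7: "performing final cleanup",
}

def phase_name(param: int) -> str:
    """Translate a raw progress parameter into a display name."""
    return CLUSTER_PHASES.get(param, "unknown")
```

Consistent with the discussion above, "sorting tuples" and "writing new heap" only occur on the sort-and-scan path used by CLUSTER, not in a plain VACUUM FULL.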
[ { "msg_contents": "It seems to me that using IDENT_USERNAME_MAX for peer authentication is\nsome kind of historical leftover and not really appropriate or useful,\nso I propose the attached cleanup.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 26 Oct 2019 08:55:03 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove one use of IDENT_USERNAME_MAX" }, { "msg_contents": "Hello.\n\nAt Sat, 26 Oct 2019 08:55:03 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> IDENT_USERNAME_MAX is the maximum length of the information returned\n> by an ident server, per RFC 1413. Using it as the buffer size in peer\n> authentication is inappropriate. It was done here because of the\n> historical relationship between peer and ident authentication. But\n> since it's also completely useless code-wise, remove it.\n\nI think one of the reasons for the coding is the fact that *pw is\ndescribed to be placed in the static area, which can be overwritten by\nsucceeding calls to getpw*() functions. I think we can believe\ncheck_usermap() never calls them but I suppose that some comments\nare needed..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Oct 2019 14:10:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove one use of IDENT_USERNAME_MAX" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Sat, 26 Oct 2019 08:55:03 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n>> IDENT_USERNAME_MAX is the maximum length of the information returned\n>> by an ident server, per RFC 1413. Using it as the buffer size in peer\n>> authentication is inappropriate. 
It was done here because of the\n>> historical relationship between peer and ident authentication. But\n>> since it's also completely useless code-wise, remove it.\n\n> I think one of the reasons for the coding is the fact that *pw is\n> described to be placed in the static area, which can be overwritten by\n> succeeding calls to getpw*() functions.\n\nGood point ... so maybe pstrdup instead of using a fixed-size buffer?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 09:45:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove one use of IDENT_USERNAME_MAX" }, { "msg_contents": "On 2019-10-28 14:45, Tom Lane wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> At Sat, 26 Oct 2019 08:55:03 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in\n>>> IDENT_USERNAME_MAX is the maximum length of the information returned\n>>> by an ident server, per RFC 1413. Using it as the buffer size in peer\n>>> authentication is inappropriate. It was done here because of the\n>>> historical relationship between peer and ident authentication. But\n>>> since it's also completely useless code-wise, remove it.\n\n>> I think one of the reasons for the coding is the fact that *pw is\n>> described to be placed in the static area, which can be overwritten by\n>> succeeding calls to getpw*() functions.\n\n> Good point ... so maybe pstrdup instead of using a fixed-size buffer?\n\nMaybe. Or we just decide that check_usermap() is not allowed to call \ngetpw*(). 
It's just a string-matching routine, so it doesn't have any \nsuch business anyway.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 29 Oct 2019 08:10:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove one use of IDENT_USERNAME_MAX" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-10-28 14:45, Tom Lane wrote:\n>> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>>> I think one of the reasons for the coding is the fact that *pw is\n>>> described to be placed in the static area, which can be overwritten by\n>>> succeeding calls to getpw*() functions.\n\n>> Good point ... so maybe pstrdup instead of using a fixed-size buffer?\n\n> Maybe. Or we just decide that check_usermap() is not allowed to call \n> getpw*(). It's just a string-matching routine, so it doesn't have any \n> such business anyway.\n\nI'm okay with that as long as you add a comment describing this\nassumption.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Oct 2019 10:34:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove one use of IDENT_USERNAME_MAX" }, { "msg_contents": "On 2019-10-29 15:34, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2019-10-28 14:45, Tom Lane wrote:\n>>> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>>>> I think one of the reasons for the coding is the fact that *pw is\n>>>> described to be placed in the static area, which can be overwritten by\n>>>> succeeding calls to getpw*() functions.\n> \n>>> Good point ... so maybe pstrdup instead of using a fixed-size buffer?\n> \n>> Maybe. Or we just decide that check_usermap() is not allowed to call\n>> getpw*(). 
It's just a string-matching routine, so it doesn't have any\n>> such business anyway.\n> \n> I'm okay with that as long as you add a comment describing this\n> assumption.\n\nCommitted with a pstrdup(). That seemed more consistent with other code \nin that file.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 30 Oct 2019 11:19:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove one use of IDENT_USERNAME_MAX" } ]
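The hazard the thread above fixes — libc's getpw*() functions return a pointer into static storage that the next call overwrites, so the caller must copy the string (here via pstrdup()) before anything else can call getpw*() again — can be illustrated with a small Python stand-in. This simulation is hypothetical and is not PostgreSQL code; the names `fake_getpwuid` and `_static_buf` are invented for the sketch:

```python
# Python stand-in for the C hazard: a lookup function that reuses one
# shared "static" record, so holding a reference into it is unsafe.

_static_buf = {}  # stands in for libc's static struct passwd

def fake_getpwuid(uid):
    """Return the shared 'static' record, overwriting its contents."""
    _static_buf.clear()
    _static_buf["pw_name"] = f"user{uid}"
    return _static_buf

# Buggy pattern: hold a reference into the static buffer.
name_ref = fake_getpwuid(1000)

# Fixed pattern: copy the field immediately (the pstrdup() equivalent).
name_copy = fake_getpwuid(1000)["pw_name"]

# A later lookup clobbers the shared buffer, so name_ref now shows the
# wrong user, while the copied string is intact.
fake_getpwuid(1001)
```

The same reasoning is why the alternative fix discussed upthread — simply forbidding check_usermap() from calling getpw*() — would also have been safe: if nothing can clobber the static buffer between the lookup and its last use, no copy is needed.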
[ { "msg_contents": "Hi,\n\nOne of the functions, apply_typmod, in the numeric.c file is present within #if 0.\nIt has been like this for many years.\nI felt it could be removed.\nThe attached patch contains the changes to handle removal of apply_typmod\npresent in #if 0.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 26 Oct 2019 14:06:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Cleanup - Removal of apply_typmod function in #if 0" }, { "msg_contents": "On 2019-10-26 10:36, vignesh C wrote:\n> One of the functions, apply_typmod, in the numeric.c file is present within #if 0.\n> It has been like this for many years.\n> I felt it could be removed.\n> The attached patch contains the changes to handle removal of apply_typmod\n> present in #if 0.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 08:57:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - Removal of apply_typmod function in #if 0" }, { "msg_contents": "On Mon, Mar 2, 2020 at 1:27 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-10-26 10:36, vignesh C wrote:\n> > One of the functions, apply_typmod, in the numeric.c file is present within #if 0.\n> > It has been like this for many years.\n> > I felt it could be removed.\n> > The attached patch contains the changes to handle removal of apply_typmod\n> > present in #if 0.\n>\n> committed\n>\n\nThanks Peter for committing.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Mar 2020 14:48:38 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Cleanup - Removal of apply_typmod function in #if 0" } ]
[ { "msg_contents": "When the user modifies the REPLICA IDENTITY field type, the logical \nreplication settings are lost.\n\nFor example:\n\npostgres=# \\\\d+ t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats \ntarget | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n col1 | integer | | | | plain | \n |\n col2 | integer | | not null | | plain | \n |\nIndexes:\n \"t1_col2_key\" UNIQUE CONSTRAINT, btree (col2) REPLICA IDENTITY\n\n\npostgres=# alter table t1 alter col2 type smallint;\nALTER TABLE\npostgres=# \\\\d+ t1\n Table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage | Stats \ntarget | Description\n--------+----------+-----------+----------+---------+---------+--------------+-------------\n col1 | integer | | | | plain | \n |\n col2 | smallint | | not null | | plain | \n |\nIndexes:\n \"t1_col2_key\" UNIQUE CONSTRAINT, btree (col2)\n\nIn fact, the replication property of the table has not been modified, \nand it is still 'i'(REPLICA_IDENTITY_INDEX). But the previously \nspecified index property 'indisreplident' is set to false because of the \nrebuild.\n\nSo I developed a patch: if the user modifies the field type and the \nassociated index is the REPLICA IDENTITY index, rebuild it and restore the \nreplication settings.\n\nRegards,\nQuan Zongliang", "msg_date": "Sat, 26 Oct 2019 16:50:48 +0800", "msg_from": "Quan Zongliang <quanzongliang@gmail.com>", "msg_from_op": true, "msg_subject": "Restore replication settings when modifying a field type" }, { "msg_contents": "Hello.\n\n# The patch no longer applies on the current master. Needs a rebasing.\n\nAt Sat, 26 Oct 2019 16:50:48 +0800, Quan Zongliang <quanzongliang@gmail.com> wrote in \n> In fact, the replication property of the table has not been modified,\n> and it is still 'i'(REPLICA_IDENTITY_INDEX). 
But the previously\n> specified index property 'indisreplident' is set to false because of\n> the rebuild.\n\nI suppose that the behavior is intended. Change of column types on the\npublisher side can break the agreement on replica identity with\nsubscribers. Thus replica identity setting cannot be restored\nunconditionally. For (somewhat artificial :p) example:\n\nP=# create table t (c1 integer, c2 text unique not null);\nP=# alter table t replica identity using index t_c2_key;\nP=# create publication p1 for table t;\nP=# insert into t values (0, '00'), (1, '01');\nS=# create table t (c1 integer, c2 text unique not null);\nS=# alter table t replica identity using index t_c2_key;\nS=# create subscription s1 connection '...' publication p1;\n\nYour patch allows change of the type of c2 into integer.\n\nP=# alter table t alter column c2 type integer using c2::integer;\nP=# update t set c1 = c1 + 1 where c2 = '01';\n\nThis change perhaps doesn't take effect as expected.\n\nS=# select * from t;\n c1 | c2 \n----+----\n 0 | 00\n 1 | 01\n(2 rows)\n\n\n> So I developed a patch: if the user modifies the field type and the\n> associated index is the REPLICA IDENTITY index, rebuild it and restore\n> the replication settings.\n\nExplicit setting of replica identity presumes that they are sure that\nthe setting works correctly. Implicit rebuilding after a type change\ncan silently break it.\n\nAt least we need to guarantee that the restored replica identity\nsetting is truly compatible with all existing subscribers. I'm not\nsure about potential subscribers..\n\nAnyway I think it is a problem that replica identity setting is\ndropped silently. 
Perhaps a message something like \"REPLICA IDENTITY\nsetting is lost, please redefine after confirmation of compatibility\nwith subscribers.\" is needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Oct 2019 13:39:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On 2019/10/28 12:39, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> # The patch no longer applies on the current master. Needs a rebasing.\n> \n> At Sat, 26 Oct 2019 16:50:48 +0800, Quan Zongliang <quanzongliang@gmail.com> wrote in\n>> In fact, the replication property of the table has not been modified,\n>> and it is still 'i'(REPLICA_IDENTITY_INDEX). But the previously\n>> specified index property 'indisreplident' is set to false because of\n>> the rebuild.\n> \n> I suppose that the behavior is intended. Change of column types on the\n> publisher side can break the agreement on replica identity with\n> subscribers. Thus replica identity setting cannot be restored\n> unconditionally. For (somewhat artifitial :p) example:\n> \n> P=# create table t (c1 integer, c2 text unique not null);\n> P=# alter table t replica identity using index t_c2_key;\n> P=# create publication p1 for table t;\n> P=# insert into t values (0, '00'), (1, '01');\n> S=# create table t (c1 integer, c2 text unique not null);\n> S=# alter table t replica identity using index t_c2_key;\n> S=# create subscription s1 connection '...' publication p1;\n> \n> Your patch allows change of the type of c2 into integer.\n> \n> P=# alter table t alter column c2 type integer using c2::integer;\n> P=# update t set c1 = c1 + 1 where c2 = '01';\n> \n> This change doesn't affect perhaps as expected.\n> \n> S=# select * from t;\n> c1 | c2\n> ----+----\n> 0 | 00\n> 1 | 01\n> (2 rows)\n> \n> \n>> So I developed a patch. If the user modifies the field type. 
The\n>> associated index is REPLICA IDENTITY. Rebuild and restore replication\n>> settings.\n> \n> Explicit setting of replica identity premises that they are sure that\n> the setting works correctly. Implicit rebuilding after a type change\n> can silently break it.\n> \n> At least we need to guarantee that the restored replica identity\n> setting is truly compatible with all existing subscribers. I'm not\n> sure about potential subscribers..\n> \n> Anyway I think it is a problem that replica identity setting is\n> dropped silently. Perhaps a message something like \"REPLICA IDENTITY\n> setting is lost, please redefine after confirmation of compatibility\n> with subscribers.\" is needed.\n> \nIn fact, the scene we encountered is like this. The field of a user's \ntable is of type \"smallint\", and it turns out that this range is not \nsufficient. So change it to \"int\". At this point, the REPLICA IDENTITY \nis lost and the user does not realize it. When they found out, the \nlogical replication for this period of time did not output normally. \nUsers have to find other ways to get the data back.\nThe logical replication of this user is to issue standard SQL statements \nto other relational databases using the plugin developed by himself. And \nthey have thousands of tables to replicate.\nSo I think this patch is appropriate in this scenario. As for the \nmatching problem between publishers and subscribers, I'm afraid it's \nhard to solve here. If this is not a suitable modification, I can \nwithdraw it. And see if there's a better way.\n\nIf necessary, I'll modify it again. 
Rebase to the master branch.\n\n> regards.\n> \n\n\n\n", "msg_date": "Fri, 1 Nov 2019 09:41:57 +0800", "msg_from": "Quan Zongliang <quanzongliang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "Em seg, 28 de out de 2019 às 01:41, Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> escreveu:\n>\n> At Sat, 26 Oct 2019 16:50:48 +0800, Quan Zongliang <quanzongliang@gmail.com> wrote in\n> > In fact, the replication property of the table has not been modified,\n> > and it is still 'i'(REPLICA_IDENTITY_INDEX). But the previously\n> > specified index property 'indisreplident' is set to false because of\n> > the rebuild.\n>\n> I suppose that the behavior is intended. Change of column types on the\n> publisher side can break the agreement on replica identity with\n> subscribers. Thus replica identity setting cannot be restored\n> unconditionally. For (somewhat artifitial :p) example:\n>\nI don't think so. The actual logical replication behavior is that DDL\nwill always break replication. If you add a new column or drop a\ncolumn, you will stop replication for that table while you don't\nexecute the same DDL in the subscriber. What happens in the OP case is\nthat a DDL is *silently* breaking the logical replication. IMHO it is\na bug. If the behavior was intended it should clean\npg_class.relreplident but it is not.\n\n> Explicit setting of replica identity premises that they are sure that\n> the setting works correctly. Implicit rebuilding after a type change\n> can silently break it.\n>\nThe current behavior forces the OP to execute 2 DDLs in the same\ntransaction to ensure that it won't \"loose\" transactions for logical\nreplication.\n\n> At least we need to guarantee that the restored replica identity\n> setting is truly compatible with all existing subscribers. I'm not\n> sure about potential subscribers..\n>\nWhy? 
Replication will break and to fix it you should apply the same\nDDL you apply in publisher. It is the right thing to do.\n\n[poking the code...]\n\nATExecAlterColumnType records everything that depends on the column\nand for indexes it saves the definition (via pg_get_indexdef_string).\nDefinition is not sufficient for reconstructing the replica identity\ninformation because there is not such keyword for replica identity in\nCREATE INDEX. The new index should call relation_mark_replica_identity\nto fix pg_index.indisreplident.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Fri, 1 Nov 2019 00:39:03 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On 2019/10/28 12:39, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> # The patch no longer applies on the current master. Needs a rebasing.\n> \nNew patch, rebased on master branch.\n\n> At Sat, 26 Oct 2019 16:50:48 +0800, Quan Zongliang <quanzongliang@gmail.com> wrote in\n>> In fact, the replication property of the table has not been modified,\n>> and it is still 'i'(REPLICA_IDENTITY_INDEX). But the previously\n>> specified index property 'indisreplident' is set to false because of\n>> the rebuild.\n> \n> I suppose that the behavior is intended. Change of column types on the\n> publisher side can break the agreement on replica identity with\n> subscribers. Thus replica identity setting cannot be restored\n> unconditionally. 
For (somewhat artifitial :p) example:\n> \n> P=# create table t (c1 integer, c2 text unique not null);\n> P=# alter table t replica identity using index t_c2_key;\n> P=# create publication p1 for table t;\n> P=# insert into t values (0, '00'), (1, '01');\n> S=# create table t (c1 integer, c2 text unique not null);\n> S=# alter table t replica identity using index t_c2_key;\n> S=# create subscription s1 connection '...' publication p1;\n> \n> Your patch allows change of the type of c2 into integer.\n> \n> P=# alter table t alter column c2 type integer using c2::integer;\n> P=# update t set c1 = c1 + 1 where c2 = '01';\n> \n> This change doesn't affect perhaps as expected.\n> \n> S=# select * from t;\n> c1 | c2\n> ----+----\n> 0 | 00\n> 1 | 01\n> (2 rows)\n> \n> \n>> So I developed a patch. If the user modifies the field type. The\n>> associated index is REPLICA IDENTITY. Rebuild and restore replication\n>> settings.\n> \n> Explicit setting of replica identity premises that they are sure that\n> the setting works correctly. Implicit rebuilding after a type change\n> can silently break it.\n> \n> At least we need to guarantee that the restored replica identity\n> setting is truly compatible with all existing subscribers. I'm not\n> sure about potential subscribers..\n> \n> Anyway I think it is a problem that replica identity setting is\n> dropped silently. 
Perhaps a message something like \"REPLICA IDENTITY\n> setting is lost, please redefine after confirmation of compatibility\n> with subscribers.\" is needed.\n> \n> regards.\n>", "msg_date": "Tue, 5 Nov 2019 08:37:17 +0800", "msg_from": "Quan Zongliang <quanzongliang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On 2019-11-01 04:39, Euler Taveira wrote:\n> ATExecAlterColumnType records everything that depends on the column\n> and for indexes it saves the definition (via pg_get_indexdef_string).\n> Definition is not sufficient for reconstructing the replica identity\n> information because there is not such keyword for replica identity in\n> CREATE INDEX. The new index should call relation_mark_replica_identity\n> to fix pg_index.indisreplident.\n\nYeah, I don't think we need to do the full dance of reverse compiling \nthe SQL command and reexecuting it, as the patch currently does. That's \nonly necessary for rebuilding the index itself. For re-setting the \nreplica identity, we can just use the internal API as you say.\n\nAlso, a few test cases would be nice for this patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jan 2020 10:14:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On 2020/1/3 17:14, Peter Eisentraut wrote:\n> On 2019-11-01 04:39, Euler Taveira wrote:\n>> ATExecAlterColumnType records everything that depends on the column\n>> and for indexes it saves the definition (via pg_get_indexdef_string).\n>> Definition is not sufficient for reconstructing the replica identity\n>> information because there is not such keyword for replica identity in\n>> CREATE INDEX. 
The new index should call relation_mark_replica_identity\n>> to fix pg_index.indisreplident.\n> \n> Yeah, I don't think we need to do the full dance of reverse compiling \n> the SQL command and reexecuting it, as the patch currently does.  That's \n> only necessary for rebuilding the index itself.  For re-setting the \n> replica identity, we can just use the internal API as you say.\n> \n> Also, a few test cases would be nice for this patch.\n> \n\nI'm a little busy. I'll write a new patch in a few days.\n\n\n", "msg_date": "Wed, 15 Jan 2020 08:30:42 +0800", "msg_from": "Quan Zongliang <quanzongliang@foxmail.com>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On 2020/1/15 08:30, Quan Zongliang wrote:\n> On 2020/1/3 17:14, Peter Eisentraut wrote:\n>> On 2019-11-01 04:39, Euler Taveira wrote:\n>>> ATExecAlterColumnType records everything that depends on the column\n>>> and for indexes it saves the definition (via pg_get_indexdef_string).\n>>> Definition is not sufficient for reconstructing the replica identity\n>>> information because there is not such keyword for replica identity in\n>>> CREATE INDEX. The new index should call relation_mark_replica_identity\n>>> to fix pg_index.indisreplident.\n>>\n>> Yeah, I don't think we need to do the full dance of reverse compiling \n>> the SQL command and reexecuting it, as the patch currently does. \n>> That's only necessary for rebuilding the index itself.  For re-setting \n>> the replica identity, we can just use the internal API as you say.\n>>\n>> Also, a few test cases would be nice for this patch.\n>>\n> \n> I'm a little busy. 
I'll write a new patch in a few days.\n\nnew patch attached.\n\n\ntest case 1:\ncreate table t1 (col1 int,col2 int not null,\n col3 int not null,unique(col2,col3));\nalter table t1 replica identity using index t1_col2_col3_key;\nalter table t1 alter col3 type text;\nalter table t1 alter col3 type smallint using col3::int;\n\ntest case 2:\ncreate table t2 (col1 varchar(10), col2 text not null,\n col3 timestamp not null,unique(col2,col3),\n col4 int not null unique);\nalter table t2 replica identity using index t2_col2_col3_key;\nalter table t2 alter col3 type text;\nalter table t2 replica identity using index t2_col4_key;\nalter table t2 alter col4 type timestamp using '2020-02-11'::timestamp;", "msg_date": "Tue, 11 Feb 2020 07:38:48 +0800", "msg_from": "Quan Zongliang <quanzongliang@foxmail.com>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On 2020-02-11 00:38, Quan Zongliang wrote:\n> new patch attached.\n\nI didn't like so much how the updating of the replica identity was \nhacked into the middle of ATRewriteCatalogs(). I have an alternative \nproposal in the attached patch that queues up an ALTER TABLE ... REPLICA \nIDENTITY command into the normal ALTER TABLE processing. I have also \nadded tests to the test suite.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 5 Mar 2020 13:45:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On Thu, 5 Mar 2020 at 09:45, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-02-11 00:38, Quan Zongliang wrote:\n> > new patch attached.\n>\n> I didn't like so much how the updating of the replica identity was\n> hacked into the middle of ATRewriteCatalogs(). 
I have an alternative\n> proposal in the attached patch that queues up an ALTER TABLE ... REPLICA\n> IDENTITY command into the normal ALTER TABLE processing. I have also\n> added tests to the test suite.\n>\n> LGTM. Tests are ok. I've rebased it (because\n61d7c7bce3686ec02bd64abac742dd35ed9b9b01). Are you planning to backpatch\nit? IMHO you should because it is a bug (since REPLICA IDENTITY was\nintroduced in 9.4). This patch can be applied as-is in 12 but not to other\nolder branches. I attached new patches.\n\nRegards,\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 10 Mar 2020 10:16:22 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" }, { "msg_contents": "On 2020-03-10 14:16, Euler Taveira wrote:\n> On Thu, 5 Mar 2020 at 09:45, Peter Eisentraut \n> <peter.eisentraut@2ndquadrant.com \n> <mailto:peter.eisentraut@2ndquadrant.com>> wrote:\n> \n> On 2020-02-11 00:38, Quan Zongliang wrote:\n> > new patch attached.\n> \n> I didn't like so much how the updating of the replica identity was\n> hacked into the middle of ATRewriteCatalogs().  I have an alternative\n> proposal in the attached patch that queues up an ALTER TABLE ...\n> REPLICA\n> IDENTITY command into the normal ALTER TABLE processing.  I have also\n> added tests to the test suite.\n> \n> LGTM. Tests are ok. I've rebased it (because \n> 61d7c7bce3686ec02bd64abac742dd35ed9b9b01). Are you planning to backpatch \n> it? IMHO you should because it is a bug (since REPLICA IDENTITY was \n> introduced in 9.4). This patch can be applied as-is in 12 but not to \n> other older branches. I attached new patches.\n\nThanks. 
This has been committed and backpatched to 9.5.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 13:32:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Restore replication settings when modifying a field type" } ]
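A toy model of the bug and the committed fix from the thread above: rebuilding the replica-identity index during ALTER COLUMN TYPE recreates it from its SQL definition, which carries no replica-identity marking, so pg_index.indisreplident is lost while pg_class.relreplident stays 'i' — unless an ALTER TABLE ... REPLICA IDENTITY step is queued after the rebuild. The classes and fields below are illustrative stand-ins, not PostgreSQL internals:

```python
class Table:
    """Toy stand-in for the relevant catalog state of one table."""

    def __init__(self):
        self.relreplident = "d"   # pg_class.relreplident: 'd'efault / 'i'ndex
        self.indexes = {}          # index name -> {"indisreplident": bool}

    def set_replica_identity_index(self, idx):
        self.relreplident = "i"
        for meta in self.indexes.values():
            meta["indisreplident"] = False
        self.indexes[idx]["indisreplident"] = True

    def alter_column_type(self, restore_identity):
        # The dependent index is dropped and recreated from its saved
        # definition, which does not mark it as the replica identity.
        replident_idx = next(
            (n for n, m in self.indexes.items() if m["indisreplident"]), None)
        for n in list(self.indexes):
            self.indexes[n] = {"indisreplident": False}  # rebuilt index
        # The fix: queue a REPLICA IDENTITY step that re-marks the index.
        if restore_identity and self.relreplident == "i" and replident_idx:
            self.set_replica_identity_index(replident_idx)

def make_table():
    t = Table()
    t.indexes["t1_col2_key"] = {"indisreplident": False}
    t.set_replica_identity_index("t1_col2_key")
    return t

t_bug = make_table()
t_bug.alter_column_type(restore_identity=False)   # pre-fix behavior

t_fix = make_table()
t_fix.alter_column_type(restore_identity=True)    # committed behavior
```

In the buggy path the table still claims an index-based replica identity while no index carries it — exactly the silently broken state reported at the top of the thread.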
[ { "msg_contents": "Hi.\n\nI have noticed that it would be cool to use '==' in place of 'IS NOT\nDISTINCT FROM'\n\nWhat do you think about this crazy idea?\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Sat, 26 Oct 2019 18:41:10 +0300", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On Sat, Oct 26, 2019 at 06:41:10PM +0300, Eugen Konkov wrote:\n> Hi.\n> \n> I have noticed that it would be cool to use '==' in place of 'IS NOT\n> DISTICT FROM'\n> \n> What do you think about this crazy idea?\n\nTurning \"IS NOT DISTINCT FROM\" into an operator sounds like a great\nidea. Let the name bike-shedding begin!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 26 Oct 2019 18:30:56 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "David Fetter <david@fetter.org> writes:\n> On Sat, Oct 26, 2019 at 06:41:10PM +0300, Eugen Konkov wrote:\n>> I have noticed that it would be cool to use '==' in place of 'IS NOT\n>> DISTICT FROM'\n>> What do you think about this crazy idea?\n\n> Turning \"IS NOT DISTINCT FROM\" into an operator sounds like a great\n> idea.\n\nNo it isn't. For starters, somebody very possibly has used that\noperator name in an extension. 
For another, it'd be really\ninconsistent to have an abbreviation for 'IS NOT DISTINCT FROM'\nbut not 'IS DISTINCT FROM', so you'd need another reserved operator\nname for that, making the risk of breakage worse.\n\nThere's an independent set of arguments around why we'd invent a\nproprietary replacement for perfectly good standard SQL.\n\nWe do have some unresolved issues around how to let dump/restore\ncontrol the interpretation of IS [NOT] DISTINCT FROM, cf\n\nhttps://www.postgresql.org/message-id/flat/ffefc172-a487-aa87-a0e7-472bf29735c8%40gmail.com\n\nbut I don't think this idea is helping with that at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Oct 2019 12:48:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On Sat, Oct 26, 2019 at 12:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Fetter <david@fetter.org> writes:\n> > On Sat, Oct 26, 2019 at 06:41:10PM +0300, Eugen Konkov wrote:\n> >> I have noticed that it would be cool to use '==' in place of 'IS NOT\n> >> DISTICT FROM'\n> >> What do you think about this crazy idea?\n>\n> > Turning \"IS NOT DISTINCT FROM\" into an operator sounds like a great\n> > idea.\n>\n> No it isn't.\n\n\n+1\n\n\n-- \nJonah H. Harris\n", "msg_date": "Sat, 26 Oct 2019 13:01:55 -0400", "msg_from": "\"Jonah H. 
Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "I wrote:\n> We do have some unresolved issues around how to let dump/restore\n> control the interpretation of IS [NOT] DISTINCT FROM, cf\n> https://www.postgresql.org/message-id/flat/ffefc172-a487-aa87-a0e7-472bf29735c8%40gmail.com\n> but I don't think this idea is helping with that at all.\n\nBTW, taking a step back and viewing this suggestion as \"it'd be nice\nto have *some* shorter notation than IS [NOT] DISTINCT FROM\", maybe\nthere's a way to unify that desire with the dump/restore fix. What\nwe'd really need to fix the dump/restore problem, AFAICS, is to name\nthe underlying equality operator --- potentially with a schema\nqualification --- but then have some notation that says \"handle NULLs\nlike IS [NOT] DISTINCT FROM does\". So instead of\n\n\tx IS NOT DISTINCT FROM y\n\nI'm vaguely imagining\n\n\tx = {magic} y\n\nwhere unlike Eugen's suggestion, \"=\" is the real name of the underlying\ncomparison operator. For dump/restore this could be spelled verbosely\nas\n\n\tx OPERATOR(someplace.=) {magic} y\n\nThe hard part is to figure out some {magic} annotation that is both\nshort and unambiguous. 
We have to cover the IS DISTINCT variant, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Oct 2019 14:23:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Hi,\n\nOn 2019-10-26 14:23:49 -0400, Tom Lane wrote:\n> I wrote:\n> > We do have some unresolved issues around how to let dump/restore\n> > control the interpretation of IS [NOT] DISTINCT FROM, cf\n> > https://www.postgresql.org/message-id/flat/ffefc172-a487-aa87-a0e7-472bf29735c8%40gmail.com\n> > but I don't think this idea is helping with that at all.\n> \n> BTW, taking a step back and viewing this suggestion as \"it'd be nice\n> to have *some* shorter notation than IS [NOT] DISTINCT FROM\", maybe\n> there's a way to unify that desire with the dump/restore fix. What\n> we'd really need to fix the dump/restore problem, AFAICS, is to name\n> the underlying equality operator --- potentially with a schema\n> qualification --- but then have some notation that says \"handle NULLs\n> like IS [NOT] DISTINCT FROM does\". So instead of\n> \n> \tx IS NOT DISTINCT FROM y\n> \n> I'm vaguely imagining\n> \n> \tx = {magic} y\n> \n> where unlike Eugen's suggestion, \"=\" is the real name of the underlying\n> comparison operator. For dump/restore this could be spelled verbosely\n> as\n> \n> \tx OPERATOR(someplace.=) {magic} y\n> \n> The hard part is to figure out some {magic} annotation that is both\n> short and unambiguous. We have to cover the IS DISTINCT variant, too.\n\nLeaving the exact choice of how {magic} would look like, are you\nthinking of somehow making it work for every operator, or just for some\nsubset? It's intriguing to have something generic, but I'm not quite\nclear how that'd would work? It's not clear to me how we'd\nautomatically infer a sensible meaning for e.g. 
< etc.\n\nAnd even if we just restrict it to = (and presumably <> and !=), in\nwhich cases is this magic going to work? Would we tie it to the textual\n'=', '<>' operators? btree opclass members?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 26 Oct 2019 13:51:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-26 14:23:49 -0400, Tom Lane wrote:\n>> ... instead of\n>> \tx IS NOT DISTINCT FROM y\n>> I'm vaguely imagining\n>> \tx = {magic} y\n>> where unlike Eugen's suggestion, \"=\" is the real name of the underlying\n>> comparison operator. For dump/restore this could be spelled verbosely\n>> as\n>> \tx OPERATOR(someplace.=) {magic} y\n>> The hard part is to figure out some {magic} annotation that is both\n>> short and unambiguous. We have to cover the IS DISTINCT variant, too.\n\nTo clarify, what I have in mind here doesn't have any effect whatever\non the parse tree or the execution semantics, it's just about offering\nan alternative SQL textual representation.\n\n> Leaving the exact choice of how {magic} would look like, are you\n> thinking of somehow making it work for every operator, or just for some\n> subset? It's intriguing to have something generic, but I'm not quite\n> clear how that'd would work? It's not clear to me how we'd\n> automatically infer a sensible meaning for e.g. < etc.\n\nYeah, I think it could only be made to work sanely for underlying\noperators that have the semantics of equality. The NOT DISTINCT\nwrapper has the semantics\n\n\tNULL vs NULL\t\t-> true\n\tNULL vs not-NULL\t-> false\n\tnot-NULL vs NULL\t-> false\n\tnot-NULL vs not-NULL\t-> apply operator\n\nand while theoretically the operator needn't be equality, those\nNULL behaviors don't make much sense otherwise. 
(IS DISTINCT\njust inverts all the results, of course.)\n\nI suppose that we could also imagine generalizing DistinctExpr\ninto something that could work with other operator semantics,\nbut as you say, it's a bit hard to wrap ones head around what\nthat would look like.\n\n> And even if we just restrict it to = (and presumably <> and !=), in\n> which cases is this magic going to work? Would we tie it to the textual\n> '=', '<>' operators? btree opclass members?\n\nSee the other thread I cited --- right now, the underlying operator is\nalways \"=\" and it's looked up by name. Whether that ought to change\nseems like a separate can o' worms.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Oct 2019 17:16:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On 26/10/2019 17:41, Eugen Konkov wrote:\n> Hi.\n>\n> I have noticed that it would be cool to use '==' in place of 'IS NOT\n> DISTICT FROM'\n>\n> What do you think about this crazy idea?\n\n\nI think this is a terrible idea.  The only reason to do this would be to\nindex it, but indexes (btree at least) expect STRICT operators, which\nthis would not be.\n\n\n\n", "msg_date": "Sun, 27 Oct 2019 01:09:29 +0200", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Hi, \n\nOn October 26, 2019 4:09:29 PM PDT, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>On 26/10/2019 17:41, Eugen Konkov wrote:\n>> Hi.\n>>\n>> I have noticed that it would be cool to use '==' in place of 'IS\n>NOT\n>> DISTICT FROM'\n>>\n>> What do you think about this crazy idea?\n>\n>\n>I think this is a terrible idea.  
The only reason to do this would be\n>to\n>index it, but indexes (btree at least) expect STRICT operators, which\n>this would not be.\n\nIt sounds like what's being suggested is just some abbreviated formulation of IS NOT DISTINCT. If implement that way, rather than manually adding non strict operators, I don't think there would be an indexing issue.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sat, 26 Oct 2019 16:21:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "I wrote:\n> To clarify, what I have in mind here doesn't have any effect whatever\n> on the parse tree or the execution semantics, it's just about offering\n> an alternative SQL textual representation.\n\nContinuing this thread ... if we were just trying to fix the\ndump/restore issue without regard for verbosity, I think I'd propose\nthat we implement syntaxes like\n\n\tx IS DISTINCT FROM y\n\tx IS DISTINCT (=) FROM y\n\tx IS DISTINCT (schema.=) FROM y\n\tx IS NOT DISTINCT FROM y\n\tx IS NOT DISTINCT (=) FROM y\n\tx IS NOT DISTINCT (schema.=) FROM y\n\nwith the understanding that the parenthesized operator name is what\nto use for the underlying equality comparison, and that in the absence\nof any name, the parser looks up \"=\" (which is what it does today).\nThus the first two alternatives are precisely equivalent, as are the\nfourth and fifth. 
Also, to support row-wise comparisons, we could\nallow cases like\n\n\tROW(a,b) IS NOT DISTINCT (schema1.=, schema2.=) FROM ROW(x,y)\n\nruleutils.c could proceed by looking up the operator(s) normally,\nand skipping the verbose syntax when they would print as just \"=\",\nso that we don't need to emit nonstandard SQL for common cases.\n\nI haven't actually checked to ensure that Bison can handle this,\nbut since DISTINCT and FROM are both fully reserved words, it seems\nvirtually certain that this would work without syntactic ambiguity.\nIt also seems relatively understandable as to what it means.\n\nBut of course, this is the exact opposite of addressing Eugen's\nconcern about verbosity :-(. Can we pack the same functionality\ninto fewer characters?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 27 Oct 2019 12:17:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "> x IS NOT DISTINCT FROM y\n\n> I'm vaguely imagining\n\n> x = {magic} y\n\n> where unlike Eugen's suggestion, \"=\" is the real name of the underlying\n> comparison operator. For dump/restore this could be spelled verbosely\n> as\n\n> x OPERATOR(someplace.=) {magic} y\n\n> The hard part is to figure out some {magic} annotation that is both\n> short and unambiguous. We have to cover the IS DISTINCT variant, too.\n\nI am from Perl world. There are == and != operators.\nHere short snippet of code:\n\nmy $x = undef;\nmy $y = 'some value';\nmy $z = undef;\n$x == $y; # FALSE\n$x == $z; # TRUE\n$x != $y ; # TRUE\n$x != $z; # FALSE\n\n\n> x OPERATOR(someplace.=) {magic} y\nIf we should follow this form, then IS DISTINCT should be written as:\nx =! y\nThis looks unusual, because JavaScript also follow != form. 
so I hope\nit will be easy to detect/implement != form, which I used to read as:\nnegate the result of comparison\n\n\n\nCan we supply additional parameters to OPERATOR via double\nparentheses( double parentheses is another crazy idea)?\nx =(( 'NULL' )) y\n\nor\n\nx OPERATOR(someplace.=, magic ) y\nwhich will be internally converted( I suppose ) to OPERATOR(\nsomeplace.=, x, y, magic )\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Mon, 28 Oct 2019 13:39:38 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On Mon, 28 Oct 2019 at 07:39, Eugen Konkov <kes-kes@yandex.ru> wrote:\n\nIf we should follow this form, then IS DISTINCT should be written as:\n> x =! y\n> This looks unusual, because JavaScript also follow != form. so I hope\n> it will be easy to detect/implement != form, which I used to read as:\n> negate the result of comparison\n>\n\nPostgres already allows != as a synonym for <>. I think having =! mean\nsomething subtly but significantly different is a terrible idea. At a\nminimum we would have to remove the synonym, which would be a backwards\ncompatibility break.", "msg_date": "Mon, 28 Oct 2019 07:54:32 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On Mon, Oct 28, 2019 at 7:54 AM Isaac Morland <isaac.morland@gmail.com> wrote:\n> Postgres already allows != as a synonym for <>. I think having =! mean something subtly but significantly different is a terrible idea. At a minimum we would have to remove the synonym, which would be a backwards compatibility break.\n\nI certainly agree with that. I do think, though, that IS DISTINCT FROM\nis a terribly verbose thing to have to write all the time. It's not\nthat bad when you write a query that contains one instance of it, but\nI've both seen and written queries where you need to use it a bunch of\ntimes, and that can get really annoying. So I don't think adding an\noperator that means the same thing is a bad idea. I don't think ==\nand !== would be crazy, for instance; Tom's statement that someone\nmight already be using == in an extension doesn't persuade me, because\n(1) even if it's true it's likely to inconvenience only a very small\npercentage of users and (2) the same argument can be applied to any\noperator name and is more likely to apply to operator names that don't\nlook like line noise, and I refuse to accept the idea that we should\ncommit either to never adding new operators ever again, or the\ncompeting idea that any we do add should look like line noise.\n\nAFAICS, Tom's got the right idea about how to fix the pg_dump\nschema-qualification issue, and the idea of creating an operator\nnotation is a separate and possibly harder problem. Whatever we need\n
Whatever we need\nto add to the IS [NOT] DISTINCT FROM syntax for pg_dump can just be\nhard-coded, but I guess if we want new operators we'd have to run\naround and update all of our built-in data types and extensions, after\nthe (not so easy) preliminary step of reaching agreement on how it\nshould all work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 08:37:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "po 28. 10. 2019 v 12:39 odesílatel Eugen Konkov <kes-kes@yandex.ru> napsal:\n\n> > x IS NOT DISTINCT FROM y\n>\n> > I'm vaguely imagining\n>\n> > x = {magic} y\n>\n> > where unlike Eugen's suggestion, \"=\" is the real name of the underlying\n> > comparison operator. For dump/restore this could be spelled verbosely\n> > as\n>\n> > x OPERATOR(someplace.=) {magic} y\n>\n> > The hard part is to figure out some {magic} annotation that is both\n> > short and unambiguous. We have to cover the IS DISTINCT variant, too.\n>\n> I am from Perl world. There are == and != operators.\n> Here short snippet of code:\n>\n> my $x = undef;\n> my $y = 'some value';\n> my $z = undef;\n> $x == $y; # FALSE\n> $x == $z; # TRUE\n> $x != $y ; # TRUE\n> $x != $z; # FALSE\n>\n>\n> > x OPERATOR(someplace.=) {magic} y\n> If we should follow this form, then IS DISTINCT should be written as:\n> x =! y\n> This looks unusual, because JavaScript also follow != form. 
so I hope\n> it will be easy to detect/implement != form, which I used to read as:\n> negate the result of comparison\n>\n>\n>\n> Can we supply additional parameters to OPERATOR via double\n> parentheses( double parentheses is another crazy idea)?\n> x =(( 'NULL' )) y\n>\n\nIt's looks much more terrible than original IS DISTINCT FROM\n\n\n> or\n>\n> x OPERATOR(someplace.=, magic ) y\n> which will be internally converted( I suppose ) to OPERATOR(\n> someplace.=, x, y, magic )\n>\n\nI don't think so benefit of this is too valuable against possible problems.\n\nMySQL has special operator <=>, so if we implement some, then we should to\nimplement this. But better do nothing. I don't see significant benefit of\nthis against costs.\n\nPavel\n\n>\n> --\n> Best regards,\n> Eugen Konkov\n>\n>\n>\n>\n", "msg_date": "Mon, 28 Oct 2019 13:48:24 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Would it be possible to just use `IS`, `IS NOT` instead of `IS [NOT]\nDISTINCT FROM`? It's always surprised me that you can write `IS NULL`, `IS\nTRUE`, etc. but they're all special-cased. I could see it introducing a\nparsing ambiguity, but it doesn't seem impossible to resolve?\n\nOn Mon, Oct 28, 2019 at 12:49 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> po 28. 10. 2019 v 12:39 odesílatel Eugen Konkov <kes-kes@yandex.ru>\n> napsal:\n>\n>> > x IS NOT DISTINCT FROM y\n>>\n>> > I'm vaguely imagining\n>>\n>> > x = {magic} y\n>>\n>> > where unlike Eugen's suggestion, \"=\" is the real name of the underlying\n>> > comparison operator. For dump/restore this could be spelled verbosely\n>> > as\n>>\n>> > x OPERATOR(someplace.=) {magic} y\n>>\n>> > The hard part is to figure out some {magic} annotation that is both\n>> > short and unambiguous. We have to cover the IS DISTINCT variant, too.\n>>\n>> I am from Perl world. 
There are == and != operators.\n>> Here short snippet of code:\n>>\n>> my $x = undef;\n>> my $y = 'some value';\n>> my $z = undef;\n>> $x == $y; # FALSE\n>> $x == $z; # TRUE\n>> $x != $y ; # TRUE\n>> $x != $z; # FALSE\n>>\n>>\n>> > x OPERATOR(someplace.=) {magic} y\n>> If we should follow this form, then IS DISTINCT should be written as:\n>> x =! y\n>> This looks unusual, because JavaScript also follow != form. so I hope\n>> it will be easy to detect/implement != form, which I used to read as:\n>> negate the result of comparison\n>>\n>>\n>>\n>> Can we supply additional parameters to OPERATOR via double\n>> parentheses( double parentheses is another crazy idea)?\n>> x =(( 'NULL' )) y\n>>\n>\n> It's looks much more terrible than original IS DISTINCT FROM\n>\n>\n>> or\n>>\n>> x OPERATOR(someplace.=, magic ) y\n>> which will be internally converted( I suppose ) to OPERATOR(\n>> someplace.=, x, y, magic )\n>>\n>\n> I don't think so benefit of this is too valuable against possible problems.\n>\n> MySQL has special operator <=>, so if we implement some, then we should to\n> implement this. But better do nothing. I don't see significant benefit of\n> this against costs.\n>\n> Pavel\n>\n>>\n>> --\n>> Best regards,\n>> Eugen Konkov\n>>\n>>\n>>\n>>\n", "msg_date": "Mon, 28 Oct 2019 13:08:15 +0000", "msg_from": "Diggory Blake <diggsey@googlemail.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "\nOn 10/28/19 8:37 AM, Robert Haas wrote:\n> On Mon, Oct 28, 2019 at 7:54 AM Isaac Morland <isaac.morland@gmail.com> wrote:\n>> Postgres already allows != as a synonym for <>. I think having =! mean something subtly but significantly different is a terrible idea. 
At a minimum we would have to remove the synonym, which would be a backwards compatibility break.\n> I certainly agree with that. I do think, though, that IS DISTINCT FROM\n> is a terribly verbose thing to have to write all the time. It's not\n> that bad when you write a query that contains one instance of it, but\n> I've both seen and written queries where you need to use it a bunch of\n> times, and that can get really annoying. \n\n\n\nHow about instead of new operators we just provide a nice shorthand way\nof saying these? e.g. ARE and AINT :-)\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 28 Oct 2019 09:31:59 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On Mon, 28 Oct 2019 at 13:31, Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> How about instead of new operators we just provide a nice shorthand way\n> of saying these? e.g. ARE and AINT :-)\n\nSeems to me like this is something that those users who want it can\nimplement for themselves with little to no effort without forcing the\nchange on everyone else.\n\nCREATE OR REPLACE FUNCTION fnnotdistinctfrom(anyelement, anyelement)\nRETURNS boolean LANGUAGE SQL AS $_$\n SELECT CASE WHEN $1 IS NOT DISTINCT FROM $2 THEN true ELSE false END;\n$_$;\nCREATE OR REPLACE FUNCTION fndistinctfrom(anyelement, anyelement)\nRETURNS boolean LANGUAGE SQL AS $_$\n SELECT CASE WHEN $1 IS DISTINCT FROM $2 THEN true ELSE false END;\n$_$;\nCREATE OPERATOR == (\n PROCEDURE = fnnotdistinctfrom,\n LEFTARG=anyelement,\n RIGHTARG=anyelement,\n NEGATOR = =!\n);\nCREATE OPERATOR =! 
(\n PROCEDURE = fndistinctfrom,\n LEFTARG = anyelement,\n RIGHTARG = anyelement,\n NEGATOR = ==\n);\n\nI'm at a loss to understand why anyone would want to implement what is\nbasically a personal preference for syntactic sugar at the system\nlevel. There's not even the advantage of other-system-compatibility.\n\nGeoff\n\n\n", "msg_date": "Mon, 28 Oct 2019 13:57:32 +0000", "msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 10/28/19 8:37 AM, Robert Haas wrote:\n>> I certainly agree with that. I do think, though, that IS DISTINCT FROM\n>> is a terribly verbose thing to have to write all the time. It's not\n>> that bad when you write a query that contains one instance of it, but\n>> I've both seen and written queries where you need to use it a bunch of\n>> times, and that can get really annoying. \n\n> How about instead of new operators we just provide a nice shorthand way\n> of saying these? e.g. ARE and AINT :-)\n\nThe thing about providing a shorthand that looks like an operator is\nthat then people will try to use it as an operator, and we'll be having\nto explain why constructs like \"ORDER BY ==\" or \"x == ANY (SELECT ...)\"\ndon't work. Or else make them work, but I think you'll find that\nthat moves this task well outside the easy-finger-exercise category.\n\nI kind of like AINT ;-) ... 
although adding two new short,\nfully-reserved words is likely to cause push-back from people\nwhose schemas get broken by that.\n\nA more practical answer might be to allow these to be abbreviated\nalong the lines of\n\n\tx DIST y\n\tx NOT DIST y\n\nif we're willing to make DIST a fully reserved word.\nIt's possible that we could make\n\n\tx IS DIST y\n\tx IS NOT DIST y\n\nwork without fully reserving DIST, but I've not tried it.\n\nOf course neither of those ideas is as short as \"==\", but\nI think we should put some weight on not breaking things.\nI do not believe Robert's position that nobody will complain\nif we break extensions' use of \"==\" just to save some typing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 10:07:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On Mon, Oct 28, 2019 at 10:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I kind of like AINT ;-) ... although adding two new short,\n> fully-reserved words is likely to cause push-back from people\n> whose schemas get broken by that.\n>\n> A more practical answer might be to allow these to be abbreviated\n> along the lines of\n>\n> x DIST y\n> x NOT DIST y\n>\n> if we're willing to make DIST a fully reserved word.\n> It's possible that we could make\n>\n> x IS DIST y\n> x IS NOT DIST y\n>\n> work without fully reserving DIST, but I've not tried it.\n\nI don't like either of these proposals much. I think DIST is not very\nclear: I think a variety of things other than DISTINCT might come to\nmind (distribution?) and we have no precedent for chopping off the\ntail end of an English word just to save keystrokes. 
And I think\nadding fully-reserved keywords would do far more damage than we can\njustify on account of this annoyance.\n\n> Of course neither of those ideas is as short as \"==\", but\n> I think we should put some weight on not breaking things.\n> I do not believe Robert's position that nobody will complain\n> if we break extensions' use of \"==\" just to save some typing.\n\nI mean, do we have to break the extensions? If we just added ==\noperators that behaved like IS NOT DISTINCT FROM to each datatype, why\nwould anything get broken? I mean, if someone out there has a\n==(int4,int4) operator, that would get broken, but what's the evidence\nthat any such thing exists, or that its semantics are any different\nfrom what we're talking about?\n\nIf we added == as a magic parser shortcut for IS NOT DISTINCT FROM,\nthat would be more likely to break things, because it would affect\nevery conceivable data type. I don't think that's a great idea, but\nI'd also be curious to see what evidence you have that there are\nenough extensions out there of sufficient popularity that this would\nbe a big problem. 
For instance, if PostGIS uses this operator name,\nthat'd be good evidence that it's a real problem, but if the only\nexamples we can find are things that are relatively obscure, then, at\nleast to me, that would be different.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 10:41:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 28, 2019 at 10:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Of course neither of those ideas is as short as \"==\", but\n>> I think we should put some weight on not breaking things.\n>> I do not believe Robert's position that nobody will complain\n>> if we break extensions' use of \"==\" just to save some typing.\n\n> I mean, do we have to break the extensions? If we just added ==\n> operators that behaved like IS NOT DISTINCT FROM to each datatype, why\n> would anything get broken?\n\nIs that the proposal? I certainly assumed that Eugen had in mind a\nparser-level hack, because adding dozens of new operators and their\nunderlying functions would be a Lot Of Tedious Work. But I agree\nthat if we did it like that, it (probably) wouldn't break anything.\n\nI'd be somewhat inclined to adopt \"===\" and \"!===\" as the standard\nnames, trading off one more keystroke to get to a point where we\nalmost certainly aren't conflicting with anybody's existing usage.\n\nOne objection to proceeding like that is that there'd be no\nvisible connection between a datatype's \"=\" and \"===\" operators,\nremoving any hope of someday optimizing, for example, \"x IS NOT\nDISTINCT FROM 42\" into an indexscan on x. We're certainly not\nvery bright about these constructs today, but at least there\nexists the possibility of doing better in future. 
I suppose\nwe could think about extending btree opclasses to allow for\nan === entry, but that'd be another pile of work ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:20:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "On Mon, Oct 28, 2019 at 11:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I mean, do we have to break the extensions? If we just added ==\n> > operators that behaved like IS NOT DISTINCT FROM to each datatype, why\n> > would anything get broken?\n>\n> Is that the proposal? I certainly assumed that Eugen had in mind a\n> parser-level hack, because adding dozens of new operators and their\n> underlying functions would be a Lot Of Tedious Work. But I agree\n> that if we did it like that, it (probably) wouldn't break anything.\n\nI'm not sure we've yet converged on a single proposal yet. This seems\nto be at the spitballing stage.\n\n> I'd be somewhat inclined to adopt \"===\" and \"!===\" as the standard\n> names, trading off one more keystroke to get to a point where we\n> almost certainly aren't conflicting with anybody's existing usage.\n\nMaybe. It's an open question in my mind which of those is more likely\nto be taken already. Javascript uses === and !== for a certain kind\nof equality comparison, so I'd guess that the chance of someone having\nused === is better-than-average for that reason. Also, if we decide\nthat the opposite of === is !=== rather than !==, someone may hate us.\n\n> One objection to proceeding like that is that there'd be no\n> visible connection between a datatype's \"=\" and \"===\" operators,\n> removing any hope of someday optimizing, for example, \"x IS NOT\n> DISTINCT FROM 42\" into an indexscan on x. We're certainly not\n> very bright about these constructs today, but at least there\n> exists the possibility of doing better in future. 
I suppose\n> we could think about extending btree opclasses to allow for\n> an === entry, but that'd be another pile of work ...\n\nYeah. If we went this route, I think we'd probably have to do that\nextension of the btree operator class machinery first. Virtually\nnobody is gonna want a new spelling of IS NOT DISTINCT FROM that is\nshorter but performs terribly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:35:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Hi,\n\nOn 2019-10-28 10:41:31 -0400, Robert Haas wrote:\n> I mean, do we have to break the extensions? If we just added ==\n> operators that behaved like IS NOT DISTINCT FROM to each datatype, why\n> would anything get broken? I mean, if someone out there has a\n> ==(int4,int4) operator, that would get broken, but what's the evidence\n> that any such thing exists, or that its semantics are any different\n> from what we're talking about?\n> \n> If we added == as a magic parser shortcut for IS NOT DISTINCT FROM,\n> that would be more likely to break things, because it would affect\n> every conceivable data type. 
I don't think that's a great idea, but\n\nWithout some magic, the amount of repetitive changes, the likelihood of\ninconsistencies, and the reduced information about semantic meaning to\nthe planner (it'd not be a btree op anymore!), all seem to argue against\nadding such an operator.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 28 Oct 2019 08:38:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" }, { "msg_contents": "Diggory Blake <diggsey@googlemail.com> writes:\n> Would it be possible to just use `IS`, `IS NOT` instead of `IS [NOT]\n> DISTINCT FROM`? It's always surprised me that you can write `IS NULL`, `IS\n> TRUE`, etc. but they're all special-cased. I could see it introducing a\n> parsing ambiguity, but it doesn't seem impossible to resolve?\n\nCute idea, but I'm afraid it breaks down when you come to\n\"x IS DOCUMENT\". We'd have to make DOCUMENT fully reserved\n(or at least more reserved --- maybe type_func_name_keyword\nwould be enough?) or it'd be unclear whether that meant a\nnot-distinct comparison to a column named \"document\".\nAnd I'd bet a lot that there are people out there with\ncolumns named \"document\", so even type_func_name_keyword\nreserved-ness would be enough to break their applications.\n\nIn the bigger picture, even if we were okay with that, I'm\nafraid that we'd constantly be in danger of the SQL committee\nadding some new \"x IS KEYWORD(s)\" test, causing new problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:41:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'" } ]
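For reference, the behavior under discussion in the thread above, IS NOT DISTINCT FROM, is equality that treats two NULLs as equal and a NULL against any non-NULL as unequal. A minimal sketch of that truth table in Python (with None standing in for SQL NULL; illustrative only, not part of any patch in the thread):

```python
def is_not_distinct_from(a, b):
    """NULL-safe equality: two NULLs (None) compare as equal,
    and a NULL against a non-NULL compares as not equal.
    Ordinary SQL '=' would instead yield NULL for any NULL operand;
    IS NOT DISTINCT FROM always yields true or false."""
    if a is None and b is None:
        return True
    if a is None or b is None:
        return False
    return a == b

assert is_not_distinct_from(None, None) is True
assert is_not_distinct_from(None, 42) is False
assert is_not_distinct_from(42, 42) is True
```

This is why the thread treats it as a distinct operation from a datatype's "=" operator rather than a mere spelling of it.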
[ { "msg_contents": "At pgconf.eu, someone whose name I've forgotten pointed out to me\nthat this doesn't work:\n\nregression=# select (row(1, 2.0)).f1;\nERROR: could not identify column \"f1\" in record data type\nLINE 1: select (row(1, 2.0)).f1;\n ^\n\nThe fields of an anonymous rowtype are certainly named f1, f2, etc,\nso it seems like this *should* work. A related case is\n\nregression=# select (row(1, 2.0)).*;\nERROR: record type has not been registered\n\nAdmittedly, these probably aren't terribly useful cases in practice,\nbut it's unfortunate that they don't work as one would expect.\nSo I propose the attached patch to make them work.\n\nThe underlying reason for both of these failures is that RowExpr\ndoesn't carry a typmod, so if it's of type RECORD then\nget_expr_result_type doesn't know how to find a tupdesc for it.\nThe minimum-code solution is to teach get_expr_result_type to build\na tupdesc directly from the RowExpr, and that seems to be necessary\nfor complicated cases like\n\nselect (r).f1 from (select row(1, 2.0) as r) ss;\n\nIn an earlier version of the patch I chose to add in some fast-path\nlogic in ParseComplexProjection and ExpandRowReference, so as to\nmake the really simple cases shown above a bit less inefficient.\nBut on second thought, these are such corner cases that it doesn't\nseem worth carrying extra code for them. The cases that are more\nlikely to arise in practice are like that last example, and we\ncan't optimize that in the parser. (The planner will optimize\nFieldSelect-from-RowExpr after flattening subqueries, which is\nprobably as much as we really need to do here.)\n\nI don't feel a need to back-patch this, but I would like to push\nit into HEAD.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 27 Oct 2019 14:46:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Selecting fields from a RowExpr" }, { "msg_contents": "Hi\n\nne 27. 10. 
2019 v 19:47 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> At pgconf.eu, someone whose name I've forgotten pointed out to me\n> that this doesn't work:\n>\n> regression=# select (row(1, 2.0)).f1;\n> ERROR: could not identify column \"f1\" in record data type\n> LINE 1: select (row(1, 2.0)).f1;\n> ^\n>\n> The fields of an anonymous rowtype are certainly named f1, f2, etc,\n> so it seems like this *should* work. A related case is\n>\n> regression=# select (row(1, 2.0)).*;\n> ERROR: record type has not been registered\n>\n> Admittedly, these probably aren't terribly useful cases in practice,\n> but it's unfortunate that they don't work as one would expect.\n> So I propose the attached patch to make them work.\n>\n> The underlying reason for both of these failures is that RowExpr\n> doesn't carry a typmod, so if it's of type RECORD then\n> get_expr_result_type doesn't know how to find a tupdesc for it.\n> The minimum-code solution is to teach get_expr_result_type to build\n> a tupdesc directly from the RowExpr, and that seems to be necessary\n> for complicated cases like\n>\n> select (r).f1 from (select row(1, 2.0) as r) ss;\n>\n> In an earlier version of the patch I chose to add in some fast-path\n> logic in ParseComplexProjection and ExpandRowReference, so as to\n> make the really simple cases shown above a bit less inefficient.\n> But on second thought, these are such corner cases that it doesn't\n> seem worth carrying extra code for them. The cases that are more\n> likely to arise in practice are like that last example, and we\n> can't optimize that in the parser. 
(The planner will optimize\n> FieldSelect-from-RowExpr after flattening subqueries, which is\n> probably as much as we really need to do here.)\n>\n> I don't feel a need to back-patch this, but I would like to push\n> it into HEAD.\n>\n\nsome times I hit this limit, an can be nice more consistent behave of\ncomposite types.\n\nIt's new feature - and there is not a reason for back-patching\n\nRegards\n\nPavel\n\n>\n> Thoughts?\n>\n> regards, tom lane\n>\n>\n", "msg_date": "Sun, 27 Oct 2019 19:59:07 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Selecting fields from a RowExpr" } ]
[ { "msg_contents": "commit #898e5e32 (Allow ATTACH PARTITION with only ShareUpdateExclusiveLock)\nupdates ddl.sgml but not alter_table.sgml, which only says:\n\nhttps://www.postgresql.org/docs/12/release-12.html\n|An ACCESS EXCLUSIVE lock is held unless explicitly noted.\n\nFind attached patch, which also improve language in several related places.\n\n\"Without such a constraint\": SUCH could refer to either of the constraints..\n\n\"because it is no longer necessary.\": In our use case, we prefer to keep the\nredundant constraint, to avoid having to add it back if we detach/reattach\nagain in the future..", "msg_date": "Sun, 27 Oct 2019 19:12:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "update ALTER TABLE with ATTACH PARTITION lock mode" }, { "msg_contents": "On Sun, Oct 27, 2019 at 07:12:07PM -0500, Justin Pryzby wrote:\n> commit #898e5e32 (Allow ATTACH PARTITION with only ShareUpdateExclusiveLock)\n> updates ddl.sgml but not alter_table.sgml, which only says:\n> \n> https://www.postgresql.org/docs/12/release-12.html\n> |An ACCESS EXCLUSIVE lock is held unless explicitly noted.\n\n+ <para>\n+ Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal>\n+ lock on the partitioned table, in addition to an\n+ <literal>ACCESS EXCLUSIVE</literal> lock on the partition.\n+ </para>\nUpdating the docs of ALTER TABLE sounds like a good idea. This\nsentence looks fine to me. 
Perhaps others have suggestions?\n\n> Find attached patch, which also improve language in several related places.\n\nNot sure that these are actually improvements.\n--\nMichael", "msg_date": "Mon, 28 Oct 2019 16:55:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode" }, { "msg_contents": "On 2019-Oct-28, Michael Paquier wrote:\n\n> On Sun, Oct 27, 2019 at 07:12:07PM -0500, Justin Pryzby wrote:\n> > commit #898e5e32 (Allow ATTACH PARTITION with only ShareUpdateExclusiveLock)\n> > updates ddl.sgml but not alter_table.sgml, which only says:\n> > \n> > https://www.postgresql.org/docs/12/release-12.html\n> > |An ACCESS EXCLUSIVE lock is held unless explicitly noted.\n> \n> + <para>\n> + Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal>\n> + lock on the partitioned table, in addition to an\n> + <literal>ACCESS EXCLUSIVE</literal> lock on the partition.\n> + </para>\n> Updating the docs of ALTER TABLE sounds like a good idea. This\n> sentence looks fine to me. Perhaps others have suggestions?\n\nDoesn't the command also acquire a lock on the default partition if\nthere is one? It sounds worth noting.\n\n> > Find attached patch, which also improve language in several related places.\n> \n> Not sure that these are actually improvements.\n\nI think some of them (most?) 
are clear improvements.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 28 Oct 2019 12:06:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode" }, { "msg_contents": "On Mon, Oct 28, 2019 at 12:06:44PM -0300, Alvaro Herrera wrote:\n> On 2019-Oct-28, Michael Paquier wrote:\n> \n> > On Sun, Oct 27, 2019 at 07:12:07PM -0500, Justin Pryzby wrote:\n> > > commit #898e5e32 (Allow ATTACH PARTITION with only ShareUpdateExclusiveLock)\n> > > updates ddl.sgml but not alter_table.sgml, which only says:\n> > > \n> > > https://www.postgresql.org/docs/12/release-12.html\n> > > |An ACCESS EXCLUSIVE lock is held unless explicitly noted.\n> > \n> > + <para>\n> > + Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal>\n> > + lock on the partitioned table, in addition to an\n> > + <literal>ACCESS EXCLUSIVE</literal> lock on the partition.\n> > + </para>\n> > Updating the docs of ALTER TABLE sounds like a good idea. This\n> > sentence looks fine to me. Perhaps others have suggestions?\n> \n> Doesn't the command also acquire a lock on the default partition if\n> there is one? 
It sounds worth noting.\n\nI suppose it should something other than partition(ed), since partitions can be\npartitioned, too...\n\n Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal>\n lock on the parent table, in addition to\n <literal>ACCESS EXCLUSIVE</literal> locks on the child table and the\n <literal>DEFAULT</literal> partition (if any).\n\nThanks,\nJustin\n\n\n", "msg_date": "Mon, 28 Oct 2019 22:56:33 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode (docs)" }, { "msg_contents": "Hello,\n\nOn Tue, Oct 29, 2019 at 12:13 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Oct-28, Michael Paquier wrote:\n> > On Sun, Oct 27, 2019 at 07:12:07PM -0500, Justin Pryzby wrote:\n> > > commit #898e5e32 (Allow ATTACH PARTITION with only ShareUpdateExclusiveLock)\n> > > updates ddl.sgml but not alter_table.sgml, which only says:\n> > >\n> > > https://www.postgresql.org/docs/12/release-12.html\n> > > |An ACCESS EXCLUSIVE lock is held unless explicitly noted.\n> >\n> > + <para>\n> > + Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal>\n> > + lock on the partitioned table, in addition to an\n> > + <literal>ACCESS EXCLUSIVE</literal> lock on the partition.\n> > + </para>\n> > Updating the docs of ALTER TABLE sounds like a good idea. This\n> > sentence looks fine to me. Perhaps others have suggestions?\n>\n> Doesn't the command also acquire a lock on the default partition if\n> there is one? It sounds worth noting.\n>\n> > > Find attached patch, which also improve language in several related places.\n> >\n> > Not sure that these are actually improvements.\n>\n> I think some of them (most?) 
are clear improvements.\n\nAs someone who has written some of those lines, I agree that Justin's\ntweaks make them more readable, so +1 to apply 0002 patch too.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 31 Oct 2019 17:00:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode" }, { "msg_contents": "On Mon, Oct 28, 2019 at 10:56:33PM -0500, Justin Pryzby wrote:\n> I suppose it should something other than partition(ed), since partitions can be\n> partitioned, too...\n> \n> Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal>\n> lock on the parent table, in addition to\n> <literal>ACCESS EXCLUSIVE</literal> locks on the child table and the\n> <literal>DEFAULT</literal> partition (if any).\n\nIn this context, \"on the child table\" sounds a bit confusing? Would\nit make more sense to say the \"on the table to be attached\" instead?\n--\nMichael", "msg_date": "Thu, 31 Oct 2019 18:07:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode (docs)" }, { "msg_contents": "On Thu, Oct 31, 2019 at 06:07:34PM +0900, Michael Paquier wrote:\n> On Mon, Oct 28, 2019 at 10:56:33PM -0500, Justin Pryzby wrote:\n> > I suppose it should something other than partition(ed), since partitions can be\n> > partitioned, too...\n> > \n> > Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal>\n> > lock on the parent table, in addition to\n> > <literal>ACCESS EXCLUSIVE</literal> locks on the child table and the\n> > <literal>DEFAULT</literal> partition (if any).\n> \n> In this context, \"on the child table\" sounds a bit confusing? Would\n> it make more sense to say the \"on the table to be attached\" instead?\n\nI guess you mean because it's not a child until after the ALTER. 
Yes, that\nmakes sense.\n\nThanks,\nJustin\n\n\n", "msg_date": "Fri, 1 Nov 2019 08:59:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode (docs)" }, { "msg_contents": "On Fri, Nov 01, 2019 at 08:59:48AM -0500, Justin Pryzby wrote:\n> I guess you mean because it's not a child until after the ALTER. Yes, that\n> makes sense.\n\nYes, perhaps you have another idea than mine on how to shape this\nsentence?\n--\nMichael", "msg_date": "Fri, 1 Nov 2019 23:01:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode (docs)" }, { "msg_contents": "On Fri, Nov 01, 2019 at 11:01:22PM +0900, Michael Paquier wrote:\n> On Fri, Nov 01, 2019 at 08:59:48AM -0500, Justin Pryzby wrote:\n> > I guess you mean because it's not a child until after the ALTER. Yes, that\n> > makes sense.\n> \n> Yes, perhaps you have another idea than mine on how to shape this\n> sentence?\n\nI can't think of anything better.\n\nAttaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal> lock\non the parent table, in addition to <literal>ACCESS EXCLUSIVE</literal> locks\non the table to be attached and the <literal>DEFAULT</literal> partition (if\nany). \n\nJustin\n\n\n", "msg_date": "Fri, 1 Nov 2019 11:58:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode (docs)" }, { "msg_contents": "On Fri, Nov 01, 2019 at 11:58:43AM -0500, Justin Pryzby wrote:\n> Attaching a partition acquires a <literal>SHARE UPDATE EXCLUSIVE</literal> lock\n> on the parent table, in addition to <literal>ACCESS EXCLUSIVE</literal> locks\n> on the table to be attached and the <literal>DEFAULT</literal> partition (if\n> any). \n\nSounds fine. So gathering everything I get the attached. 
Any\nthoughts from others?\n--\nMichael", "msg_date": "Sat, 2 Nov 2019 17:19:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode (docs)" }, { "msg_contents": "On Sat, Nov 02, 2019 at 05:19:11PM +0900, Michael Paquier wrote:\n> Sounds fine. So gathering everything I get the attached. Any\n> thoughts from others?\n\nCommitted after splitting the changes in two as originally proposed.\n--\nMichael", "msg_date": "Tue, 5 Nov 2019 10:35:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: update ALTER TABLE with ATTACH PARTITION lock mode (docs)" } ]
[ { "msg_contents": "Hi folks\n\nI was recently surprised to notice that log_line_prefix doesn't support a\ncluster_name placeholder. I suggest adding one. If I don't hear objections\nI'll send a patch.\n\nBefore anyone asks \"but why?!\":\n\n* A constant (short) string in log_line_prefix is immensely useful when\nworking with logs from multi-node systems. Whether that's physical\nstreaming replication, logical replication, Citus, whatever, it doesn't\nmatter. It's worth paying the small storage price for sanity when looking\nat logs.\n\n* Yes you can embed it directly into log_line_prefix. But then it gets\ncopied by pg_basebackup or whatever you're using to clone standbys etc, so\nyou can easily land up with multiple instances reporting the same name.\nThis rather defeats the purpose.\n\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Mon, 28 Oct 2019 12:33:00 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Allow cluster_name in log_line_prefix" }, { "msg_contents": "On Mon, Oct 28, 2019 at 3:33 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n> I was recently surprised to notice that log_line_prefix doesn't support a cluster_name placeholder. I suggest adding one. If I don't hear objections I'll send a patch.\n\n+1\n\n\n", "msg_date": "Thu, 31 Oct 2019 13:54:17 +1100", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow cluster_name in log_line_prefix" }, { "msg_contents": "> Hi folks\n> \n> I was recently surprised to notice that log_line_prefix doesn't support a\n> cluster_name placeholder. I suggest adding one. If I don't hear objections\n> I'll send a patch.\n\nI think it'd be a good thing for users.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 31 Oct 2019 13:41:02 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Allow cluster_name in log_line_prefix" }, { "msg_contents": "On Mon, Oct 28, 2019 at 1:33 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n> Hi folks\n>\n> I was recently surprised to notice that log_line_prefix doesn't support a cluster_name placeholder. I suggest adding one. 
If I don't hear objections I'll send a patch.\n\nIf we do this, cluster_name should be included in csvlog?\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Thu, 31 Oct 2019 16:47:55 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow cluster_name in log_line_prefix" }, { "msg_contents": "Hi,\n\nOn 2019-10-28 12:33:00 +0800, Craig Ringer wrote:\n> I was recently surprised to notice that log_line_prefix doesn't support a\n> cluster_name placeholder. I suggest adding one. If I don't hear objections\n> I'll send a patch.\n> \n> Before anyone asks \"but why?!\":\n> \n> * A constant (short) string in log_line_prefix is immensely useful when\n> working with logs from multi-node systems. Whether that's physical\n> streaming replication, logical replication, Citus, whatever, it doesn't\n> matter. It's worth paying the small storage price for sanity when looking\n> at logs.\n> \n> * Yes you can embed it directly into log_line_prefix. But then it gets\n> copied by pg_basebackup or whatever you're using to clone standbys etc, so\n> you can easily land up with multiple instances reporting the same name.\n> This rather defeats the purpose.\n\n+1. For a while this was part of the patch that added cluster_name\n(possibly worthwhile digging it up from that thread), but some people\nthought it was unnecessary, so it was excised from the patch to get the\nbasic feature...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Oct 2019 09:36:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow cluster_name in log_line_prefix" }, { "msg_contents": "On 31/10/2019 08:47, Fujii Masao wrote:\n> On Mon, Oct 28, 2019 at 1:33 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n>> Hi folks\n>>\n>> I was recently surprised to notice that log_line_prefix doesn't support a cluster_name placeholder. I suggest adding one. 
If I don't hear objections I'll send a patch.\n> > If we do this, cluster_name should be included in csvlog?\n>\n>\n> Yes, absolutely.\n>\n\nOk, I can put that together soon then.\n\nI don't think it's too likely that people will shout about it being added\nto csvlog. People using csvlog tend to be ingesting and postprocessing\ntheir logs anyway. Plus gzip is really, really good at dealing with\nredundancy.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Sun, 10 Nov 2019 17:51:17 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow cluster_name in log_line_prefix" } ]
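For readers unfamiliar with log_line_prefix, it is a printf-style format string expanded once per log line. A rough sketch of that kind of expansion in Python follows; note that the "%k" escape used for cluster_name here is hypothetical, standing in for whatever escape letter the proposed patch would actually choose, and this is not PostgreSQL's implementation:

```python
# Minimal sketch of %-escape expansion in the style of log_line_prefix.
# "%k" below is a hypothetical escape for cluster_name, illustrating the
# proposal in this thread; it is not an actual PostgreSQL escape.
def expand_prefix(fmt, values):
    out = []
    i = 0
    while i < len(fmt):
        ch = fmt[i]
        if ch == "%" and i + 1 < len(fmt):
            esc = fmt[i + 1]
            if esc == "%":
                out.append("%")          # literal percent, as "%%"
            else:
                out.append(values.get(esc, ""))  # unknown escapes expand empty
            i += 2
        else:
            out.append(ch)
            i += 1
    return "".join(out)

# A prefix carrying a per-instance name makes multi-node logs distinguishable:
line = expand_prefix("%k %p: ", {"k": "node-a", "p": "12345"})
assert line == "node-a 12345: "
```

The point of the thread is that "k" (cluster_name) would come from the server's own configuration rather than being pasted literally into the format string, so cloned standbys do not inherit the same name by accident.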
[ { "msg_contents": "Hi,\n\nCurrently, we need to scan the WHOLE shared buffers when VACUUM\ntruncated off any empty pages at end of transaction or when relation\nis TRUNCATEd.\nAs for our customer case, we periodically truncate thousands of tables,\nand it's possible to TRUNCATE single table per transaction. This can be\nproblematic later on during recovery which could take longer, especially\nwhen a sudden failover happens after those TRUNCATEs and when we\nhave to scan a large-sized shared buffer. In the performance test below,\nit took almost 12.5 minutes for recovery to complete for 100GB shared\nbuffers. But we want to keep failover very short (within 10 seconds).\n\nPreviously, I made an improvement in speeding the truncates of relation\nforks from 3 scans to one scan.[1] This time, the aim of this patch is\nto further speedup the invalidation of pages, by linking the cached pages\nof the target relation in a doubly-linked list and just traversing it\ninstead of scanning the whole shared buffers. In DropRelFileNodeBuffers,\nwe just get the number of target buffers to invalidate for the relation.\nThere is a significant win in this patch, because we were able to\ncomplete failover and recover in 3 seconds more or less.\n\nI performed similar tests to what I did in the speedup truncates of\nrelations forks.[1][2] However, this time using 100GB shared_buffers.\n\n[Machine spec used in testing]\nIntel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz\nCPU: 16, Number of cores per socket: 8\nRHEL6.5, Memory: 256GB++\n\n[Test]\n1. (Master) Create table (ex. 10,000 tables). Insert data to tables.\n2. (Master) DELETE FROM TABLE (ex. all rows of 10,000 tables)\n(Standby) To test with failover, pause the WAL replay on standby server.\n(SELECT pg_wal_replay_pause();)\n3. (M) psql -c \"\\timing on\" (measures total execution of SQL queries)\n4. (M) VACUUM (whole db)\n5. (M) Stop primary server. pg_ctl stop -D $PGDATA -w\n6. 
(S) Resume wal replay and promote standby.[2]\n\n[Results]\n\nA. HEAD (origin/master branch)\nA1. Vacuum execution on Primary server\n Time: 730932.408 ms (12:10.932) ~12min 11s\nA2. Vacuum + Failover (WAL Recovery on Standby)\n waiting for server to promote...........................\n .................................... stopped waiting\n pg_ctl: server did not promote in time\n 2019/10/25_12:13:09.692─┐\n 2019/10/25_12:25:43.576─┘\n -->Total: 12min34s\n\nB. PATCH\nB1. Vacuum execution on Primary/Master\n Time: 6.518333s = 6518.333 ms\nB2. Vacuum + Failover (WAL Recovery on Standby)\n 2019/10/25_14:17:21.822\n waiting for server to promote...... done\n server promoted\n 2019/10/25_14:17:24.827\n 2019/10/25_14:17:24.833\n -->Total: 3.011s\n\n[Other Notes]\nMaybe one disadvantage is that we can have a variable number of\nrelations, and allocated the same number of relation structures as\nthe size of shared buffers. I tried to reduce the use of memory when\ndoing hash table lookup operation by having a fixed size array (100)\nor threshold of target buffers to invalidate.\nWhen doing CachedBufLookup() to scan the count of each buffer in the\ndlist, I made sure to reduce the number of scans (2x at most).\nFirst, we scan the dlist of cached buffers of relations.\nThen store the target buffers in buf_id_array. Non-target buffers\nwould be removed from dlist but added to temporary dlist.\nAfter reaching end of main dlist, we append the temporary dlist to\ntail of main dlist.\nI also performed pgbench buffer test, and this patch did not cause\noverhead to normal DB access performance.\n\nAnother one that I'd need feedback of is the use of new dlist operations\nfor this cached buffer list. I did not use in this patch the existing\nPostgres dlist architecture (ilist.h) because I want to save memory space\nas much as possible especially when NBuffers become large. Both dlist_node\n& dlist_head are 16 bytes. 
OTOH, two int pointers for this patch is 8 bytes.\n\nHope to hear your feedback and comments.\n\nThanks in advance,\nKirk Jamison\n\n[1] https://www.postgresql.org/message-id/flat/D09B13F772D2274BB348A310EE3027C64E2067%40g01jpexmbkw24\n[2] https://www.postgresql.org/message-id/D09B13F772D2274BB348A310EE3027C6502672%40g01jpexmbkw24", "msg_date": "Mon, 28 Oct 2019 08:13:19 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "[Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\n\n\n> Another one that I'd need feedback of is the use of new dlist operations\n\n> for this cached buffer list. I did not use in this patch the existing\n\n> Postgres dlist architecture (ilist.h) because I want to save memory space\n\n> as much as possible especially when NBuffers become large. Both dlist_node\n\n> & dlist_head are 16 bytes. OTOH, two int pointers for this patch is 8 bytes.\n\nIn cb_dlist_combine(), the code block below can impact performance\nespecially for cases when the doubly linked list is long (IOW, many cached buffers).\n /* Point to the tail of main dlist */\n while (curr_main->next != CACHEDBLOCK_END_OF_LIST)\n curr_main = cb_dlist_next(curr_main);\n\nAttached is an improved version of the previous patch, which adds a pointer\ninformation of the TAIL field in order to speed up the abovementioned operation.\nI stored the tail field in the prev pointer of the head entry (maybe not a typical\napproach). 
A more typical one is by adding a tail field (int tail) to CachedBufferEnt,\nbut I didn’t do that because as I mentioned in previous email I want to avoid\nusing more memory as much as possible.\nThe patch worked as intended and passed the tests.\n\nAny thoughts?\n\n\nRegards,\nKirk Jamison", "msg_date": "Tue, 5 Nov 2019 09:58:22 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Kirk,\n\nOn Tue, Nov 05, 2019 at 09:58:22AM +0000, k.jamison@fujitsu.com wrote:\n>Hi,\n>\n>\n>> Another one that I'd need feedback of is the use of new dlist operations\n>\n>> for this cached buffer list. I did not use in this patch the existing\n>\n>> Postgres dlist architecture (ilist.h) because I want to save memory space\n>\n>> as much as possible especially when NBuffers become large. Both dlist_node\n>\n>> & dlist_head are 16 bytes. OTOH, two int pointers for this patch is 8 bytes.\n>\n>In cb_dlist_combine(), the code block below can impact performance\n>especially for cases when the doubly linked list is long (IOW, many cached buffers).\n> /* Point to the tail of main dlist */\n> while (curr_main->next != CACHEDBLOCK_END_OF_LIST)\n> curr_main = cb_dlist_next(curr_main);\n>\n>Attached is an improved version of the previous patch, which adds a pointer\n>information of the TAIL field in order to speed up the abovementioned operation.\n>I stored the tail field in the prev pointer of the head entry (maybe not a typical\n>approach). 
A more typical one is by adding a tail field (int tail) to CachedBufferEnt,\n>but I didn’t do that because as I mentioned in previous email I want to avoid\n>using more memory as much as possible.\n>The patch worked as intended and passed the tests.\n>\n>Any thoughts?\n>\n\nA couple of comments based on briefly looking at the patch.\n\n1) I don't think you should / need to expose most of the new stuff in\n   buf_internals.h. It's only used from buf_internals.c and having all\n   the various cb_dlist_* functions in .h seems strange.\n\n2) This adds another hashtable maintenance to BufferAlloc etc. but\n   you've only done tests / benchmark for the case this optimizes. I\n   think we need to see a benchmark for workload that allocates and\n   invalidates lot of buffers. A pgbench with a workload that fits into\n   RAM but not into shared buffers would be interesting.\n\n3) I see this triggered a failure on cputube, in the commit_ts TAP test.\n   That's a bit strange, someone should investigate I guess.\n   \n   https://travis-ci.org/postgresql-cfbot/postgresql/builds/607563900\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 5 Nov 2019 16:34:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Nov 5, 2019 at 10:34 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> 2) This adds another hashtable maintenance to BufferAlloc etc. but\n> you've only done tests / benchmark for the case this optimizes. I\n> think we need to see a benchmark for workload that allocates and\n> invalidates lot of buffers. A pgbench with a workload that fits into\n> RAM but not into shared buffers would be interesting.\n\nYeah, it seems pretty hard to believe that this won't be bad for some\nworkloads. 
Not only do you have the overhead of the hash table\noperations, but you also have locking overhead around that. A whole\nnew set of LWLocks where you have to take and release one of them\nevery time you allocate or invalidate a buffer seems likely to cause a\npretty substantial contention problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 6 Nov 2019 11:27:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thurs, November 7, 2019 1:27 AM (GMT+9), Robert Haas wrote:\r\n> On Tue, Nov 5, 2019 at 10:34 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\r\n> wrote:\r\n> > 2) This adds another hashtable maintenance to BufferAlloc etc. but\r\n> > you've only done tests / benchmark for the case this optimizes. I\r\n> > think we need to see a benchmark for workload that allocates and\r\n> > invalidates lot of buffers. A pgbench with a workload that fits into\r\n> > RAM but not into shared buffers would be interesting.\r\n> \r\n> Yeah, it seems pretty hard to believe that this won't be bad for some workloads.\r\n> Not only do you have the overhead of the hash table operations, but you also\r\n> have locking overhead around that. A whole new set of LWLocks where you have\r\n> to take and release one of them every time you allocate or invalidate a buffer\r\n> seems likely to cause a pretty substantial contention problem.\r\n\r\nI'm sorry for the late reply. Thank you Tomas and Robert for checking this patch.\r\nAttached is the v3 of the patch.\r\n- I moved the unnecessary items from buf_internals.h to cached_buf.c since most of\r\n of those items are only used in that file.\r\n- Fixed the bug of v2. Seems to pass both RT and TAP test now\r\n\r\nThanks for the advice on benchmark test. 
Please refer below for test and results.\r\n\r\n[Machine spec]\r\nCPU: 16, Number of cores per socket: 8\r\nRHEL6.5, Memory: 240GB\r\n\r\nscale: 3125 (about 46GB DB size)\r\nshared_buffers = 8GB\r\n\r\n[workload that fits into RAM but not into shared buffers]\r\npgbench -i -s 3125 cachetest\r\npgbench -c 16 -j 8 -T 600 cachetest\r\n\r\n[Patched]\r\nscaling factor: 3125\r\nquery mode: simple\r\nnumber of clients: 16\r\nnumber of threads: 8\r\nduration: 600 s\r\nnumber of transactions actually processed: 8815123\r\nlatency average = 1.089 ms\r\ntps = 14691.436343 (including connections establishing)\r\ntps = 14691.482714 (excluding connections establishing)\r\n\r\n[Master/Unpatched]\r\n...\r\nnumber of transactions actually processed: 8852327\r\nlatency average = 1.084 ms\r\ntps = 14753.814648 (including connections establishing)\r\ntps = 14753.861589 (excluding connections establishing)\r\n\r\n\r\nMy patch caused a little overhead of about 0.42-0.46%, which I think is small.\r\nKindly let me know your opinions/comments about the patch or tests, etc.\r\n\r\nThanks,\r\nKirk Jamison", "msg_date": "Tue, 12 Nov 2019 10:49:49 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Nov 12, 2019 at 10:49:49AM +0000, k.jamison@fujitsu.com wrote:\n>On Thurs, November 7, 2019 1:27 AM (GMT+9), Robert Haas wrote:\n>> On Tue, Nov 5, 2019 at 10:34 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> wrote:\n>> > 2) This adds another hashtable maintenance to BufferAlloc etc. but\n>> > you've only done tests / benchmark for the case this optimizes. I\n>> > think we need to see a benchmark for workload that allocates and\n>> > invalidates lot of buffers. 
A pgbench with a workload that fits into\n>> > RAM but not into shared buffers would be interesting.\n>>\n>> Yeah, it seems pretty hard to believe that this won't be bad for some workloads.\n>> Not only do you have the overhead of the hash table operations, but you also\n>> have locking overhead around that. A whole new set of LWLocks where you have\n>> to take and release one of them every time you allocate or invalidate a buffer\n>> seems likely to cause a pretty substantial contention problem.\n>\n>I'm sorry for the late reply. Thank you Tomas and Robert for checking this patch.\n>Attached is the v3 of the patch.\n>- I moved the unnecessary items from buf_internals.h to cached_buf.c since most of\n> of those items are only used in that file.\n>- Fixed the bug of v2. Seems to pass both RT and TAP test now\n>\n>Thanks for the advice on benchmark test. Please refer below for test and results.\n>\n>[Machine spec]\n>CPU: 16, Number of cores per socket: 8\n>RHEL6.5, Memory: 240GB\n>\n>scale: 3125 (about 46GB DB size)\n>shared_buffers = 8GB\n>\n>[workload that fits into RAM but not into shared buffers]\n>pgbench -i -s 3125 cachetest\n>pgbench -c 16 -j 8 -T 600 cachetest\n>\n>[Patched]\n>scaling factor: 3125\n>query mode: simple\n>number of clients: 16\n>number of threads: 8\n>duration: 600 s\n>number of transactions actually processed: 8815123\n>latency average = 1.089 ms\n>tps = 14691.436343 (including connections establishing)\n>tps = 14691.482714 (excluding connections establishing)\n>\n>[Master/Unpatched]\n>...\n>number of transactions actually processed: 8852327\n>latency average = 1.084 ms\n>tps = 14753.814648 (including connections establishing)\n>tps = 14753.861589 (excluding connections establishing)\n>\n>\n>My patch caused a little overhead of about 0.42-0.46%, which I think is small.\n>Kindly let me know your opinions/comments about the patch or tests, etc.\n>\n\nNow try measuring that with a read-only workload, with prepared\nstatements. 
I've tried that on a machine with 16 cores, doing\n\n # 16 clients\n pgbench -n -S -j 16 -c 16 -M prepared -T 60 test\n\n # 1 client\n pgbench -n -S -c 1 -M prepared -T 60 test\n\nand average from 30 runs of each looks like this:\n\n # clients master patched %\n ---------------------------------------------------------\n 1 29690 27833 93.7%\n 16 300935 283383 94.1%\n\nThat's quite a significant regression, considering it's optimizing an\noperation that is expected to be pretty rare (people are generally not\ndropping objects as often as they query them).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 Nov 2019 20:19:33 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Nov 13, 2019 4:20AM (GMT +9), Tomas Vondra wrote:\r\n> On Tue, Nov 12, 2019 at 10:49:49AM +0000, k.jamison@fujitsu.com wrote:\r\n> >On Thurs, November 7, 2019 1:27 AM (GMT+9), Robert Haas wrote:\r\n> >> On Tue, Nov 5, 2019 at 10:34 AM Tomas Vondra\r\n> >> <tomas.vondra@2ndquadrant.com>\r\n> >> wrote:\r\n> >> > 2) This adds another hashtable maintenance to BufferAlloc etc. but\r\n> >> > you've only done tests / benchmark for the case this optimizes. I\r\n> >> > think we need to see a benchmark for workload that allocates and\r\n> >> > invalidates lot of buffers. A pgbench with a workload that fits into\r\n> >> > RAM but not into shared buffers would be interesting.\r\n> >>\r\n> >> Yeah, it seems pretty hard to believe that this won't be bad for some\r\n> workloads.\r\n> >> Not only do you have the overhead of the hash table operations, but\r\n> >> you also have locking overhead around that. 
A whole new set of\r\n> >> LWLocks where you have to take and release one of them every time you\r\n> >> allocate or invalidate a buffer seems likely to cause a pretty substantial\r\n> contention problem.\r\n> >\r\n> >I'm sorry for the late reply. Thank you Tomas and Robert for checking this\r\n> patch.\r\n> >Attached is the v3 of the patch.\r\n> >- I moved the unnecessary items from buf_internals.h to cached_buf.c\r\n> >since most of\r\n> > of those items are only used in that file.\r\n> >- Fixed the bug of v2. Seems to pass both RT and TAP test now\r\n> >\r\n> >Thanks for the advice on benchmark test. Please refer below for test and\r\n> results.\r\n> >\r\n> >[Machine spec]\r\n> >CPU: 16, Number of cores per socket: 8\r\n> >RHEL6.5, Memory: 240GB\r\n> >\r\n> >scale: 3125 (about 46GB DB size)\r\n> >shared_buffers = 8GB\r\n> >\r\n> >[workload that fits into RAM but not into shared buffers] pgbench -i -s\r\n> >3125 cachetest pgbench -c 16 -j 8 -T 600 cachetest\r\n> >\r\n> >[Patched]\r\n> >scaling factor: 3125\r\n> >query mode: simple\r\n> >number of clients: 16\r\n> >number of threads: 8\r\n> >duration: 600 s\r\n> >number of transactions actually processed: 8815123 latency average =\r\n> >1.089 ms tps = 14691.436343 (including connections establishing) tps =\r\n> >14691.482714 (excluding connections establishing)\r\n> >\r\n> >[Master/Unpatched]\r\n> >...\r\n> >number of transactions actually processed: 8852327 latency average =\r\n> >1.084 ms tps = 14753.814648 (including connections establishing) tps =\r\n> >14753.861589 (excluding connections establishing)\r\n> >\r\n> >\r\n> >My patch caused a little overhead of about 0.42-0.46%, which I think is small.\r\n> >Kindly let me know your opinions/comments about the patch or tests, etc.\r\n> >\r\n> \r\n> Now try measuring that with a read-only workload, with prepared statements.\r\n> I've tried that on a machine with 16 cores, doing\r\n> \r\n> # 16 clients\r\n> pgbench -n -S -j 16 -c 16 -M prepared -T 60 test\r\n> \r\n> 
# 1 client\r\n> pgbench -n -S -c 1 -M prepared -T 60 test\r\n> \r\n> and average from 30 runs of each looks like this:\r\n> \r\n> # clients master patched %\r\n> ---------------------------------------------------------\r\n> 1 29690 27833 93.7%\r\n> 16 300935 283383 94.1%\r\n> \r\n> That's quite a significant regression, considering it's optimizing an\r\n> operation that is expected to be pretty rare (people are generally not\r\n> dropping objects as often as they query them).\r\n\r\nI updated the patch and reduced the lock contention of the new LWLock,\r\nwith tunable definitions in the code, and instead of using rnode as the hash key,\r\nI also added the modulo of the block number.\r\n#define NUM_MAP_PARTITIONS_FOR_REL\t128\t/* relation-level */\r\n#define NUM_MAP_PARTITIONS_IN_REL\t4\t/* block-level */\r\n#define NUM_MAP_PARTITIONS \\\r\n\t(NUM_MAP_PARTITIONS_FOR_REL * NUM_MAP_PARTITIONS_IN_REL) \r\n\r\nI executed again a benchmark for read-only workload,\r\nbut regression currently sits at 3.10% (reduced from v3's 6%).\r\n\r\nAverage of 10 runs, 16 clients\r\nread-only, prepared query mode\r\n\r\n[Master]\r\nnum of txn processed: 11,950,983.67\r\nlatency average = 0.080 ms\r\ntps = 199,182.24\r\ntps = 199,189.54\r\n\r\n[V4 Patch]\r\nnum of txn processed: 11,580,256.36 \r\nlatency average = 0.083 ms\r\ntps = 193,003.52\r\ntps = 193,010.76\r\n\r\n\r\nI checked the wait event statistics (non-impactful events omitted)\r\nand got the following below.\r\nI reset the stats before running the pgbench script,\r\nthen showed the stats right after the run.\r\n\r\n[Master]\r\n wait_event_type | wait_event | calls | microsec\r\n-----------------+-----------------------+----------+----------\r\n Client | ClientRead | 25116 | 49552452\r\n IO | DataFileRead | 14467109 | 92113056\r\n LWLock | buffer_mapping | 204618 | 1364779\r\n\r\n[Patch V4]\r\n wait_event_type | wait_event | calls | microsec\r\n-----------------+-----------------------+----------+----------\r\n Client | 
ClientRead | 111393 | 68773946\r\n IO | DataFileRead | 14186773 | 90399833\r\n LWLock | buffer_mapping | 463844 | 4025198\r\n LWLock | cached_buf_tranche_id | 83390 | 336080\r\n\r\nIt seems the buffer_mapping LWLock wait is 4x slower.\r\nHowever, I'd like to continue working on this patch to next commitfest,\r\nand further reduce its impact to read-only workloads.\r\n\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Thu, 28 Nov 2019 03:18:59 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\r\n\r\nI have updated the patch (v5).\r\nI tried to reduce the lock waiting times by using spinlock\r\nwhen inserting/deleting buffers in the new hash table, and\r\nexclusive lock when doing lookup for buffers to be dropped.\r\nIn summary, instead of scanning the whole buffer pool in \r\nshared buffers, we just traverse the doubly-linked list of linked\r\nbuffers for the target relation and block.\r\n\r\nIn order to understand how this patch affects performance,\r\nI also measured the cache hit rates in addition to\r\nbenchmarking db with various shared buffer size settings.\r\n\r\nUsing the same machine specs, I used the default script\r\nof pgbench for read-only workload with prepared statement,\r\nand executed about 15 runs for varying shared buffer sizes.\r\n pgbench -i -s 3200 test //(about 48GB db size)\r\n pgbench -S -n -M prepared -c 16 -j 16 -T 60 test\r\n\r\n[TPS Regression]\r\n shbuf | tps(master) | tps(patch) | %reg \r\n---------+-----------------+-----------------+-------\r\n 5GB | 195,737.23 | 191,422.23 | 2.23\r\n 10GB | 197,067.93 | 194,011.66 | 1.55\r\n 20GB | 200,241.18 | 200,425.29 | -0.09\r\n 40GB | 208,772.81 | 209,807.38 | -0.50\r\n 50GB | 215,684.33 | 218,955.43 | -1.52\r\n\r\n[CACHE HIT RATE]\r\n Shbuf | master | patch\r\n----------+--------------+----------\r\n 10GB | 0.141536 | 0.141485\r\n 20GB | 0.330088 | 
0.329894\r\n 30GB | 0.573383 | 0.573377\r\n 40GB | 0.819499 | 0.819264\r\n 50GB | 0.999237 | 0.999577\r\n\r\nFor this workload, the regression increases for below 20GB\r\nshared_buffers size. However, the cache hit rate both for\r\nmaster and patch is 32% (20 GB shbuf). Therefore, I think we\r\ncan consider this kind of workload with low shared buffers\r\nsize as a “special case”, because in terms of db performance\r\ntuning we want as much as possible for the db to have a higher\r\ncache hit rate (99.9%, or maybe let's say 80% is acceptable).\r\nAnd in this workload, the ideal shared_buffers size would be\r\naround 40GB more or less to hit that acceptable cache hit rate.\r\nLooking at this patch's performance result, if it's within the acceptable\r\ncache hit rate, there would be at least no regression and the results also\r\nshow almost similar tps compared to master.\r\n\r\nYour feedback about the patch and tests is welcome.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Fri, 13 Dec 2019 10:18:46 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\r\n\r\nI have rebased the patch to keep the CFbot happy.\r\nApparently, in the previous patch there was a possibility of an infinite loop\r\nwhen allocating buffers, so I fixed that part and also removed some whitespace.\r\n\r\nKindly check the attached V6 patch.\r\nAny thoughts on this?\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Tue, 4 Feb 2020 09:57:26 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Feb 4, 2020 at 4:57 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n> Kindly check the attached V6 patch.\n> Any thoughts on this?\n\nUnfortunately, I don't have time for detailed review of this. 
I am\nsuspicious that there are substantial performance regressions that you\njust haven't found yet. I would not take the position that this is a\ncompletely hopeless approach, or anything like that, but neither would\nI conclude that the tests shown so far are anywhere near enough to be\nconfident that there are no problems.\n\nAlso, systems with very large shared_buffers settings are becoming\nmore common, and probably will continue to become more common, so I\ndon't think we can dismiss that as an edge case any more. People don't\nwant to run with an 8GB cache on a 1TB server.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Feb 2020 10:12:52 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\r\n\r\nI know this might already be late at end of CommitFest, but attached\r\nis the latest version of the patch. The previous version only includes buffer\r\ninvalidation improvement for VACUUM. The new patch adds the same\r\nroutine for TRUNCATE WAL replay.\r\n\r\nIn summary, this patch aims to improve the buffer invalidation process\r\nof VACUUM and TRUNCATE. Although it may not be a common use\r\ncase, our customer uses these commands a lot. Recovery and WAL\r\nreplay of these commands can take time depending on the size of\r\ndatabase buffers. 
So this patch optimizes that using the newly-added\r\nauxiliary cache and doubly-linked list on the shared memory, so that\r\nwe don't need to scan the shared buffers anymore.\r\n\r\nAs for the performance and how it affects the read-only workloads.\r\nUsing pgbench, I've kept the overload to a minimum, less than 1%.\r\nI'll post follow-up results.\r\n\r\nAlthough the additional hash table utilizes shared memory, there's\r\na significant performance gain for both TRUNCATE and VACUUM\r\nfrom execution to recovery.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Wed, 25 Mar 2020 06:24:32 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, March 25, 2020 3:25 PM, Kirk Jamison wrote:\r\n> As for the performance and how it affects the read-only workloads.\r\n> Using pgbench, I've kept the overload to a minimum, less than 1%.\r\n> I'll post follow-up results.\r\n\r\nHere's the follow-up results.\r\nI executed the similar tests from top of the thread.\r\nI hope the performance test results shown below would suffice.\r\nIf not, I'd appreciate any feedback with regards to test or the patch itself.\r\n\r\nA. VACUUM execution + Failover test\r\n- 100GB shared_buffers\r\n\r\n1. 1000 tables (18MB)\r\n1.1. Execution Time\r\n- [MASTER] 77755.218 ms (01:17.755)\r\n- [PATCH] Execution Time: 2147.914 ms (00:02.148)\r\n1.2. Failover Time (Recovery WAL Replay):\r\n- [MASTER] 01:37.084 (1 min 37.884 s)\r\n- [PATCH] 1627 ms (1.627 s)\r\n\r\n2. 10000 tables (110MB)\r\n2.1. Execution Time\r\n- [MASTER] 844174.572 ms (14:04.175) ~14 min 4.175 s\r\n- [PATCH] 75678.559 ms (01:15.679) ~1 min 15.679 s\r\n\r\n2.2. Failover Time:\r\n- [MASTER] est. 
14 min++\r\n (I didn't measure anymore because recovery takes\r\n as much as the execution time.)\r\n- [PATCH] 01:25.559 (1 min 25.559 s)\r\n\r\nSignificant performance results for VACUUM.\r\n\r\n\r\nB. TPS Regression for READ-ONLY workload\r\n(PREPARED QUERY MODE, NO VACUUM)\r\n\r\n# [16 Clients]\r\n- pgbench -n -S -j 16 -c 16 -M prepared -T 60 cachetest\r\n\r\n|shbuf |Master |Patch |% reg |\r\n|----------|--------------|---------------|----------|\r\n|128MB| 77,416.76 | 77,162.78 |0.33% |\r\n|1GB | 81,941.30 | 81,812.05 |0.16% |\r\n|2GB | 84,273.69 | 84,356.38 |-0.10%|\r\n|100GB| 83,807.30 | 83,924.68 |-0.14%|\r\n\r\n# [1 Client]\r\n- pgbench -n -S -c 1 -M prepared -T 60 cachetest\r\n\r\n|shbuf |Master |Patch |% reg |\r\n|----------|--------------|---------------|----------|\r\n|128MB| 12,044.54 | 12,037.13 |0.06% |\r\n|1GB | 12,736.57 | 12,774.77 |-0.30%|\r\n|2GB | 12,948.98 | 13,159.90 |-1.63%|\r\n|100GB| 12,982.98 | 13,064.04 |-0.62%|\r\n\r\nBoth were run for 10 times and average tps and % regression are\r\nshown above. At some point only minimal overload was caused by\r\nthe patch. As for other cases, it has higher tps compared to master.\r\n\r\nIf it does not make it this CF, I hope to receive feedback in the future\r\non how to proceed. 
Thanks in advance!\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Mon, 30 Mar 2020 11:59:08 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\r\n\r\nSince the last posted version of the patch fails, attached is a rebased version.\r\nWritten upthread were performance results and some benefits and challenges.\r\nI'd appreciate your feedback/comments.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Wed, 17 Jun 2020 06:14:35 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "\n\nOn 17.06.2020 09:14, k.jamison@fujitsu.com wrote:\n> Hi,\n>\n> Since the last posted version of the patch fails, attached is a rebased version.\n> Written upthread were performance results and some benefits and challenges.\n> I'd appreciate your feedback/comments.\n>\n> Regards,\n> Kirk Jamison\nAs far as I understand this patch can provide significant improvement of \nperformance only in case of\nrecovery of truncates of a large number of tables. You have added a shared \nhash of relation buffers and certainly it adds some\nextra overhead. According to your latest results this overhead is quite \nsmall. 
But it will be hard to prove that there will be no noticeable \nregression\nat some workloads.\n\nI wonder if you have considered the case of a local hash (maintained only \nduring recovery)?\nIf there is after-crash recovery, then there will be no concurrent \naccess to shared buffers and this hash will be up-to-date.\nIn case of a hot-standby replica we can use some simple invalidation (just \none flag or counter which indicates that the buffer cache was updated).\nThis hash also can be constructed on demand when DropRelFileNodeBuffers \nis called the first time (so we have to scan all buffers once, but subsequent \ndrop operations will be fast).\n\nI have not thought much about it, but it seems to me that as far as this \nproblem only affects recovery, we do not need a shared hash for it.\n\n\n\n", "msg_date": "Wed, 29 Jul 2020 10:54:45 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, July 29, 2020 4:55 PM, Konstantin Knizhnik wrote:\r\n> On 17.06.2020 09:14, k.jamison@fujitsu.com wrote:\r\n> > Hi,\r\n> >\r\n> > Since the last posted version of the patch fails, attached is a rebased version.\r\n> > Written upthread were performance results and some benefits and challenges.\r\n> > I'd appreciate your feedback/comments.\r\n> >\r\n> > Regards,\r\n> > Kirk Jamison\r\n\r\n> As far as I understand this patch can provide significant improvement of\r\n> performance only in case of recovery of truncates of a large number of tables. You\r\n> have added a shared hash of relation buffers and certainly it adds some extra\r\n> overhead. According to your latest results this overhead is quite small. 
But it will\r\n> be hard to prove that there will be no noticeable regression at some workloads.\r\n\r\nThank you for taking a look at this.\r\n\r\nYes, one of the aims is to speed up recovery of truncations, but at the same time the\r\npatch also improves autovacuum, vacuum and relation truncate index executions. \r\nI showed pgbench results above for different types of workloads,\r\nbut I am not sure if those are convincing enough...\r\n\r\n> I wonder if you have considered the case of a local hash (maintained only during\r\n> recovery)?\r\n> If there is after-crash recovery, then there will be no concurrent access to shared\r\n> buffers and this hash will be up-to-date.\r\n> In case of a hot-standby replica we can use some simple invalidation (just one flag\r\n> or counter which indicates that the buffer cache was updated).\r\n> This hash also can be constructed on demand when DropRelFileNodeBuffers is\r\n> called the first time (so we have to scan all buffers once, but subsequent drop\r\n> operations will be fast).\r\n> \r\n> I have not thought much about it, but it seems to me that as far as this problem\r\n> only affects recovery, we do not need a shared hash for it.\r\n> \r\n\r\nThe idea of the patch is to mark the relation buffers to be dropped after scanning\r\nthe whole shared buffers, and store them into shared memory maintained in a dlist,\r\nand traverse the dlist on the next scan.\r\nBut I understand the point that it is expensive and may cause overhead, that is why\r\nI tried to define a macro to limit the number of pages that we can cache for cases\r\nwhere lookup cost can be problematic (i.e. 
too many pages of relation).\r\n\r\n#define BUF_ID_ARRAY_SIZE 100\r\nint buf_id_array[BUF_ID_ARRAY_SIZE];\r\nint forknum_indexes[BUF_ID_ARRAY_SIZE];\r\n\r\nIn DropRelFileNodeBuffers\r\ndo\r\n{\r\n nbufs = CachedBlockLookup(..., forknum_indexes, buf_id_array, lengthof(buf_id_array));\r\n for (i = 0; i < nbufs; i++)\r\n {\r\n ...\r\n }\r\n} while (nbufs == lengthof(buf_id_array));\r\n\r\n\r\nPerhaps the patch adds complexity, so we may want to keep it simpler or commit it piece by piece?\r\nI will look further into your suggestion of maintaining a local hash only during recovery.\r\nThank you for the suggestion.\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Thu, 30 Jul 2020 07:57:40 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI have tested this patch at various workloads and hardware (including a Power2 server with 384 virtual cores)\r\nand didn't find performance regression.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 30 Jul 2020 17:37:10 +0000", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, July 31, 2020 2:37 AM, Konstantin Knizhnik wrote:\r\n\r\n> The following review has been posted through the commitfest application:\r\n> make installcheck-world: tested, passed\r\n> Implements feature: tested, passed\r\n> Spec compliant: not tested\r\n> Documentation: not tested\r\n> \r\n> I have tested this patch at various workloads and hardware (including a Power2\r\n> server with 384 virtual cores) and didn't find performance regression.\r\n> \r\n> 
The new status of this patch is: Ready for Committer\r\n\r\nThank you very much, Konstantin, for testing the patch for different workloads.\r\nI wonder if I need to modify some documentation.\r\nI'll leave the final review to the committer(s) as well.\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Fri, 31 Jul 2020 05:12:13 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Unfortunately, I don't have time for detailed review of this. I am\n> suspicious that there are substantial performance regressions that you\n> just haven't found yet. I would not take the position that this is a\n> completely hopeless approach, or anything like that, but neither would\n> I conclude that the tests shown so far are anywhere near enough to be\n> confident that there are no problems.\n\nI took a quick look through the v8 patch, since it's marked RFC, and\nmy feeling is about the same as Robert's: it is just about impossible\nto believe that doubling (or more) the amount of hashtable manipulation\ninvolved in allocating a buffer won't hurt common workloads. The\noffered pgbench results don't reassure me; we've so often found that\npgbench fails to expose performance problems, except maybe when it's\nused just so.\n\nBut aside from that, I noted a number of things I didn't like a bit:\n\n* The amount of new shared memory this needs seems several orders\nof magnitude higher than what I'd call acceptable: according to my\nmeasurements it's over 10KB per shared buffer! Most of that is going\ninto the CachedBufTableLock data structure, which seems fundamentally\nmisdesigned --- how could we be needing a lock per map partition *per\nbuffer*? 
For comparison, the space used by buf_table.c is about 56\nbytes per shared buffer; I think this needs to stay at least within\nhailing distance of there.\n\n* It is fairly suspicious that the new data structure is manipulated\nwhile holding per-partition locks for the existing buffer hashtable.\nAt best that seems bad for concurrency, and at worst it could result\nin deadlocks, because I doubt we can assume that the new hash table\nhas partition boundaries identical to the old one.\n\n* More generally, it seems like really poor design that this has been\nwritten completely independently of the existing buffer hash table.\nCan't we get any benefit by merging them somehow?\n\n* I do not like much of anything in the code details. \"CachedBuf\"\nis as unhelpful as could be as a data structure identifier --- what\nexactly is not \"cached\" about shared buffers already? \"CombinedLock\"\nis not too helpful either, nor could I find any documentation explaining\nwhy you need to invent new locking technology in the first place.\nAt best, CombinedLockAcquireSpinLock seems like a brute-force approach\nto an undocumented problem.\n\n* The commentary overall is far too sparse to be of any value ---\nbasically, any reader will have to reverse-engineer your entire design.\nThat's not how we do things around here. There should be either a README,\nor a long file header comment, explaining what's going on, how the data\nstructure is organized, and what the locking requirements are.\nSee src/backend/storage/buffer/README for the sort of documentation\nthat I think this needs.\n\nEven if I were convinced that there's no performance gotchas,\nI wouldn't commit this in anything like its current form.\n\nRobert again:\n> Also, systems with very large shared_buffers settings are becoming\n> more common, and probably will continue to become more common, so I\n> don't think we can dismiss that as an edge case any more. 
People don't\n> want to run with an 8GB cache on a 1TB server.\n\nI do agree that it'd be great to improve this area. Just not convinced\nthat this is how.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 31 Jul 2020 13:39:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\n\nOn 2020-07-31 13:39:37 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Unfortunately, I don't have time for detailed review of this. I am\n> > suspicious that there are substantial performance regressions that you\n> > just haven't found yet. I would not take the position that this is a\n> > completely hopeless approach, or anything like that, but neither would\n> > I conclude that the tests shown so far are anywhere near enough to be\n> > confident that there are no problems.\n> \n> I took a quick look through the v8 patch, since it's marked RFC, and\n> my feeling is about the same as Robert's: it is just about impossible\n> to believe that doubling (or more) the amount of hashtable manipulation\n> involved in allocating a buffer won't hurt common workloads. The\n> offered pgbench results don't reassure me; we've so often found that\n> pgbench fails to expose performance problems, except maybe when it's\n> used just so.\n\nIndeed. The buffer mapping hashtable already is visible as a major\nbottleneck in a number of workloads. Even in readonly pgbench if s_b is\nlarge enough (so the hashtable is larger than the cache). Not to speak\nof things like a cached sequential scan with a cheap qual and wide rows.\n\n\n> Robert again:\n> > Also, systems with very large shared_buffers settings are becoming\n> > more common, and probably will continue to become more common, so I\n> > don't think we can dismiss that as an edge case any more. 
People don't\n> > want to run with an 8GB cache on a 1TB server.\n> \n> I do agree that it'd be great to improve this area. Just not convinced\n> that this is how.\n\nWonder if the temporary fix is just to do explicit hashtable probes for\nall pages iff the size of the relation is < s_b / 500 or so. That'll\naddress the case where small tables are frequently dropped - and\ndropping large relations is more expensive from the OS and data loading\nperspective, so it's not gonna happen as often.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 31 Jul 2020 12:17:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Indeed. The buffer mapping hashtable already is visible as a major\n> bottleneck in a number of workloads. Even in readonly pgbench if s_b is\n> large enough (so the hashtable is larger than the cache). Not to speak\n> of things like a cached sequential scan with a cheap qual and wide rows.\n\nTo be fair, the added overhead is in buffer allocation not buffer lookup.\nSo it shouldn't add cost to fully-cached cases. As Tomas noted upthread,\nthe potential trouble spot is where the working set is bigger than shared\nbuffers but still fits in RAM (so there's no actual I/O needed, but we do\nstill have to shuffle buffers a lot).\n\n> Wonder if the temporary fix is just to do explicit hashtable probes for\n> all pages iff the size of the relation is < s_b / 500 or so. That'll\n> address the case where small tables are frequently dropped - and\n> dropping large relations is more expensive from the OS and data loading\n> perspective, so it's not gonna happen as often.\n\nOooh, interesting idea. 
We'd need a reliable idea of how long the\nrelation had been (preferably without adding an lseek call), but maybe\nthat's do-able.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 31 Jul 2020 15:50:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\n\nOn 2020-07-31 15:50:04 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Indeed. The buffer mapping hashtable already is visible as a major\n> > bottleneck in a number of workloads. Even in readonly pgbench if s_b is\n> > large enough (so the hashtable is larger than the cache). Not to speak\n> > of things like a cached sequential scan with a cheap qual and wide rows.\n> \n> To be fair, the added overhead is in buffer allocation not buffer lookup.\n> So it shouldn't add cost to fully-cached cases. As Tomas noted upthread,\n> the potential trouble spot is where the working set is bigger than shared\n> buffers but still fits in RAM (so there's no actual I/O needed, but we do\n> still have to shuffle buffers a lot).\n\nOh, right, not sure what I was thinking.\n\n\n> > Wonder if the temporary fix is just to do explicit hashtable probes for\n> > all pages iff the size of the relation is < s_b / 500 or so. That'll\n> > address the case where small tables are frequently dropped - and\n> > dropping large relations is more expensive from the OS and data loading\n> > perspective, so it's not gonna happen as often.\n> \n> Oooh, interesting idea. We'd need a reliable idea of how long the\n> relation had been (preferably without adding an lseek call), but maybe\n> that's do-able.\n\nIIRC we already do smgrnblocks nearby, when doing the truncation (to\nfigure out which segments we need to remove). Perhaps we can arrange to\ncombine the two? The layering probably makes that somewhat ugly :(\n\nWe could also just use pg_class.relpages. 
It'll probably mostly be\naccurate enough?\n\nOr we could just cache the result of the last smgrnblocks call...\n\n\nOne of the cases where this type of strategy is most interesting to me is\nthe partial truncations that autovacuum does... There we even know the range of tables\nahead of time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 31 Jul 2020 13:23:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Saturday, August 1, 2020 5:24 AM, Andres Freund wrote:\n\nHi,\nThank you for your constructive review and comments.\nSorry for the late reply.\n\n> Hi,\n> \n> On 2020-07-31 15:50:04 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > Indeed. The buffer mapping hashtable already is visible as a major\n> > > bottleneck in a number of workloads. Even in readonly pgbench if s_b\n> > > is large enough (so the hashtable is larger than the cache). Not to\n> > > speak of things like a cached sequential scan with a cheap qual and wide\n> rows.\n> >\n> > To be fair, the added overhead is in buffer allocation not buffer lookup.\n> > So it shouldn't add cost to fully-cached cases. As Tomas noted\n> > upthread, the potential trouble spot is where the working set is\n> > bigger than shared buffers but still fits in RAM (so there's no actual\n> > I/O needed, but we do still have to shuffle buffers a lot).\n> \n> Oh, right, not sure what I was thinking.\n> \n> \n> > > Wonder if the temporary fix is just to do explicit hashtable probes\n> > > for all pages iff the size of the relation is < s_b / 500 or so.\n> > > That'll address the case where small tables are frequently dropped -\n> > > and dropping large relations is more expensive from the OS and data\n> > > loading perspective, so it's not gonna happen as often.\n> >\n> > Oooh, interesting idea. 
We'd need a reliable idea of how long the\n> > relation had been (preferably without adding an lseek call), but maybe\n> > that's do-able.\n> \n> IIRC we already do smgrnblocks nearby, when doing the truncation (to figure out\n> which segments we need to remove). Perhaps we can arrange to combine the\n> two? The layering probably makes that somewhat ugly :(\n> \n> We could also just use pg_class.relpages. It'll probably mostly be accurate\n> enough?\n> \n> Or we could just cache the result of the last smgrnblocks call...\n> \n> \n> One of the cases where this type of strategy is most intersting to me is the partial\n> truncations that autovacuum does... There we even know the range of tables\n> ahead of time.\n\nKonstantin tested it on various workloads and saw no regression.\nBut I understand the sentiment on the added overhead on BufferAlloc.\nRegarding the case where the patch would potentially affect workloads that fit into\nRAM but not into shared buffers, could one of Andres' suggested idea/s above address\nthat, in addition to this patch's possible shared invalidation fix? Could that settle\nthe added overhead in BufferAlloc() as temporary fix?\nThomas Munro is also working on caching relation sizes [1], maybe that way we\ncould get the latest known relation size. Currently, it's possible only during\nrecovery in smgrnblocks.\n\nTom Lane wrote:\n> But aside from that, I noted a number of things I didn't like a bit:\n> \n> * The amount of new shared memory this needs seems several orders of\n> magnitude higher than what I'd call acceptable: according to my measurements\n> it's over 10KB per shared buffer! Most of that is going into the\n> CachedBufTableLock data structure, which seems fundamentally misdesigned ---\n> how could we be needing a lock per map partition *per buffer*? 
For comparison,\n> the space used by buf_table.c is about 56 bytes per shared buffer; I think this\n> needs to stay at least within hailing distance of there.\n> \n> * It is fairly suspicious that the new data structure is manipulated while holding\n> per-partition locks for the existing buffer hashtable.\n> At best that seems bad for concurrency, and at worst it could result in deadlocks,\n> because I doubt we can assume that the new hash table has partition boundaries\n> identical to the old one.\n> \n> * More generally, it seems like really poor design that this has been written\n> completely independently of the existing buffer hash table.\n> Can't we get any benefit by merging them somehow?\n\nThe original aim is to just shorten the recovery process, and eventually the speedup\non both the vacuum and truncate processes is just an added bonus.\nGiven that we don't have a shared invalidation mechanism in place yet, like the radix tree\nbuffer mapping which is complex, I thought a patch like mine could be an alternative\napproach to that. So I want to improve the patch further. \nI hope you can help me clarify the direction, so that I can avoid going farther away\nfrom what the community wants.\n 1. Both normal operations and recovery process\n 2. Improve recovery process only\n\nFor 1, the current patch aims to touch on that, but further design improvement is needed.\nIt would be ideal to modify the BufferDesc, but that cannot be expanded anymore because\nit would exceed the CPU cache line size. So I added new data structures (hash table,\ndlist, lock) instead of modifying the existing ones.\nThe new hash table ensures that it's identical to the old one with the use of the same\nRelfilenode in the key and a lock when inserting and deleting buffers from the buffer table,\nas well as during lookups. As for the partition locking, I added it to reduce lock contention.\nTomas Vondra reported regression, and mainly it's due to buffer mapping locks in V4 and\nprevious patch versions. 
So from V5, I used a spinlock when inserting/deleting buffers,\nto prevent modification when a concurrent lookup is happening. An LWLock is acquired when\nwe're doing a lookup operation.\nIf we want this direction, I hope to address Tom's comments in the next patch version.\nI admit that this patch needs reworking on shmem resource consumption and clarifying\nthe design/approach more, i.e. how it affects the existing buffer allocation and\ninvalidation process, lock mechanism, etc.\n\nIf we're going for 2, Konstantin suggested an idea in the previous email:\n\n> I wonder if you have considered case of local hash (maintained only during recovery)?\n> If there is after-crash recovery, then there will be no concurrent \n> access to shared buffers and this hash will be up-to-date.\n> in case of hot-standby replica we can use some simple invalidation (just \n> one flag or counter which indicates that buffer cache was updated).\n> This hash also can be constructed on demand when DropRelFileNodeBuffers \n> is called first time (so w have to scan all buffers once, but subsequent \n> drop operation will be fast.\n\nI'm examining this, but I am not sure if I got the correct understanding. Please correct\nme if I'm wrong.\nI think the above is a suggestion wherein the postgres startup process uses a local hash table\nto keep track of the buffers of relations. Since there may be other read-only sessions which\nread from disk, evict cached blocks, and modify the shared_buffers, the flag would be updated.\nWe could do it during recovery, then release it as recovery completes.\n\nI haven't looked deeply yet into the source code but maybe we can modify the REDO\n(main redo do-while loop) in StartupXLOG() once the read-only connections are consistent.\nIt would also be beneficial to construct this local hash when DropRelFileNodeBuffers()\nis called for the first time, so the whole of shared buffers is scanned initially, then as\nyou mentioned subsequent dropping will be fast. 
(similar behavior to what the patch does)\n\nDo you think this is feasible to be implemented? Or should we explore another approach?\n\nI'd really appreciate your ideas, feedback, suggestions, and advice.\nThank you again for the review.\n\nRegards\nKirk Jamison\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGKEW7-9pq%2Bs2_4Q-Fcpr9cc7_0b3pkno5qzPKC3y2nOPA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 6 Aug 2020 01:23:31 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Aug 06, 2020 at 01:23:31AM +0000, k.jamison@fujitsu.com wrote:\n>On Saturday, August 1, 2020 5:24 AM, Andres Freund wrote:\n>\n>Hi,\n>Thank you for your constructive review and comments.\n>Sorry for the late reply.\n>\n>> Hi,\n>>\n>> On 2020-07-31 15:50:04 -0400, Tom Lane wrote:\n>> > Andres Freund <andres@anarazel.de> writes:\n>> > > Indeed. The buffer mapping hashtable already is visible as a major\n>> > > bottleneck in a number of workloads. Even in readonly pgbench if s_b\n>> > > is large enough (so the hashtable is larger than the cache). Not to\n>> > > speak of things like a cached sequential scan with a cheap qual and wide\n>> rows.\n>> >\n>> > To be fair, the added overhead is in buffer allocation not buffer lookup.\n>> > So it shouldn't add cost to fully-cached cases. 
As Tomas noted\n>> > upthread, the potential trouble spot is where the working set is\n>> > bigger than shared buffers but still fits in RAM (so there's no actual\n>> > I/O needed, but we do still have to shuffle buffers a lot).\n>>\n>> Oh, right, not sure what I was thinking.\n>>\n>>\n>> > > Wonder if the temporary fix is just to do explicit hashtable probes\n>> > > for all pages iff the size of the relation is < s_b / 500 or so.\n>> > > That'll address the case where small tables are frequently dropped -\n>> > > and dropping large relations is more expensive from the OS and data\n>> > > loading perspective, so it's not gonna happen as often.\n>> >\n>> > Oooh, interesting idea. We'd need a reliable idea of how long the\n>> > relation had been (preferably without adding an lseek call), but maybe\n>> > that's do-able.\n>>\n>> IIRC we already do smgrnblocks nearby, when doing the truncation (to figure out\n>> which segments we need to remove). Perhaps we can arrange to combine the\n>> two? The layering probably makes that somewhat ugly :(\n>>\n>> We could also just use pg_class.relpages. It'll probably mostly be accurate\n>> enough?\n>>\n>> Or we could just cache the result of the last smgrnblocks call...\n>>\n>>\n>> One of the cases where this type of strategy is most intersting to me is the partial\n>> truncations that autovacuum does... There we even know the range of tables\n>> ahead of time.\n>\n>Konstantin tested it on various workloads and saw no regression.\n\nUnfortunately Konstantin did not share any details about what workloads\nhe tested, what config etc. 
But I find the \"no regression\" hypothesis\nrather hard to believe, because we're adding non-trivial amount of code\nto a place that can be quite hot.\n\nAnd I can trivially reproduce measurable (and significant) regression\nusing a very simple pgbench read-only test, with amount of data that\nexceeds shared buffers but fits into RAM.\n\nThe following numbers are from a x86_64 machine with 16 cores (32 w HT),\n64GB of RAM, and 8GB shared buffers, using pgbench scale 1000 (so 16GB,\ni.e. twice the SB size).\n\nWith simple \"pgbench -S\" tests (warmup and then 15 x 1-minute runs with\n1, 8 and 16 clients - see the attached script for details) I see this:\n\n 1 client 8 clients 16 clients\n ----------------------------------------------\n master 38249 236336 368591\n patched 35853 217259 349248\n -6% -8% -5%\n\nThis is average of the runs, but the conclusions for medians are almost\nexactly te same.\n\n>But I understand the sentiment on the added overhead on BufferAlloc.\n>Regarding the case where the patch would potentially affect workloads\n>that fit into RAM but not into shared buffers, could one of Andres'\n>suggested idea/s above address that, in addition to this patch's\n>possible shared invalidation fix? Could that settle the added overhead\n>in BufferAlloc() as temporary fix?\n\nNot sure.\n\n>Thomas Munro is also working on caching relation sizes [1], maybe that\n>way we could get the latest known relation size. Currently, it's\n>possible only during recovery in smgrnblocks.\n\nIt's not clear to me how would knowing the relation size help reducing\nthe overhead of this patch?\n\nCan't we somehow identify cases when this optimization might help and\nonly actually enable it in those cases? 
Like in a recovery, with a lot\nof truncates, or something like that.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 6 Aug 2020 23:33:34 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Aug 6, 2020 at 6:53 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Saturday, August 1, 2020 5:24 AM, Andres Freund wrote:\n>\n> Hi,\n> Thank you for your constructive review and comments.\n> Sorry for the late reply.\n>\n> > Hi,\n> >\n> > On 2020-07-31 15:50:04 -0400, Tom Lane wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > Indeed. The buffer mapping hashtable already is visible as a major\n> > > > bottleneck in a number of workloads. Even in readonly pgbench if s_b\n> > > > is large enough (so the hashtable is larger than the cache). Not to\n> > > > speak of things like a cached sequential scan with a cheap qual and wide\n> > rows.\n> > >\n> > > To be fair, the added overhead is in buffer allocation not buffer lookup.\n> > > So it shouldn't add cost to fully-cached cases. As Tomas noted\n> > > upthread, the potential trouble spot is where the working set is\n> > > bigger than shared buffers but still fits in RAM (so there's no actual\n> > > I/O needed, but we do still have to shuffle buffers a lot).\n> >\n> > Oh, right, not sure what I was thinking.\n> >\n> >\n> > > > Wonder if the temporary fix is just to do explicit hashtable probes\n> > > > for all pages iff the size of the relation is < s_b / 500 or so.\n> > > > That'll address the case where small tables are frequently dropped -\n> > > > and dropping large relations is more expensive from the OS and data\n> > > > loading perspective, so it's not gonna happen as often.\n> > >\n> > > Oooh, interesting idea. 
We'd need a reliable idea of how long the\n> > > relation had been (preferably without adding an lseek call), but maybe\n> > > that's do-able.\n> >\n> > IIRC we already do smgrnblocks nearby, when doing the truncation (to figure out\n> > which segments we need to remove). Perhaps we can arrange to combine the\n> > two? The layering probably makes that somewhat ugly :(\n> >\n> > We could also just use pg_class.relpages. It'll probably mostly be accurate\n> > enough?\n> >\n> > Or we could just cache the result of the last smgrnblocks call...\n> >\n> >\n> > One of the cases where this type of strategy is most intersting to me is the partial\n> > truncations that autovacuum does... There we even know the range of tables\n> > ahead of time.\n>\n> Konstantin tested it on various workloads and saw no regression.\n> But I understand the sentiment on the added overhead on BufferAlloc.\n> Regarding the case where the patch would potentially affect workloads that fit into\n> RAM but not into shared buffers, could one of Andres' suggested idea/s above address\n> that, in addition to this patch's possible shared invalidation fix? Could that settle\n> the added overhead in BufferAlloc() as temporary fix?\n>\n\nYes, I think so. Because as far as I can understand he is suggesting\nto do changes only in the Truncate/Vacuum code path. Basically, I\nthink you need to change DropRelFileNodeBuffers or similar functions.\nThere shouldn't be any change in the BufferAlloc or the common code\npath, so there is no question of regression in such cases. I am not\nsure what you have in mind for this but feel free to clarify your\nunderstanding before implementation.\n\n> Thomas Munro is also working on caching relation sizes [1], maybe that way we\n> could get the latest known relation size. 
Currently, it's possible only during\n> recovery in smgrnblocks.\n>\n> Tom Lane wrote:\n> > But aside from that, I noted a number of things I didn't like a bit:\n> >\n> > * The amount of new shared memory this needs seems several orders of\n> > magnitude higher than what I'd call acceptable: according to my measurements\n> > it's over 10KB per shared buffer! Most of that is going into the\n> > CachedBufTableLock data structure, which seems fundamentally misdesigned ---\n> > how could we be needing a lock per map partition *per buffer*? For comparison,\n> > the space used by buf_table.c is about 56 bytes per shared buffer; I think this\n> > needs to stay at least within hailing distance of there.\n> >\n> > * It is fairly suspicious that the new data structure is manipulated while holding\n> > per-partition locks for the existing buffer hashtable.\n> > At best that seems bad for concurrency, and at worst it could result in deadlocks,\n> > because I doubt we can assume that the new hash table has partition boundaries\n> > identical to the old one.\n> >\n> > * More generally, it seems like really poor design that this has been written\n> > completely independently of the existing buffer hash table.\n> > Can't we get any benefit by merging them somehow?\n>\n> The original aim is to just shorten the recovery process, and eventually the speedup\n> on both vacuum and truncate process are just added bonus.\n> Given that we don't have a shared invalidation mechanism in place yet like radix tree\n> buffer mapping which is complex, I thought a patch like mine could be an alternative\n> approach to that. So I want to improve the patch further.\n> I hope you can help me clarify the direction, so that I can avoid going farther away\n> from what the community wants.\n> 1. Both normal operations and recovery process\n> 2. 
Improve recovery process only\n>\n\nI feel Andres's suggestion will help in both cases.\n\n> > I wonder if you have considered case of local hash (maintained only during recovery)?\n> > If there is after-crash recovery, then there will be no concurrent\n> > access to shared buffers and this hash will be up-to-date.\n> > in case of hot-standby replica we can use some simple invalidation (just\n> > one flag or counter which indicates that buffer cache was updated).\n> > This hash also can be constructed on demand when DropRelFileNodeBuffers\n> > is called first time (so w have to scan all buffers once, but subsequent\n> > drop operation will be fast.\n>\n> I'm examining this, but I am not sure if I got the correct understanding. Please correct\n> me if I'm wrong.\n> I think above is a suggestion wherein the postgres startup process uses local hash table\n> to keep track of the buffers of relations. Since there may be other read-only sessions which\n> read from disk, evict cached blocks, and modify the shared_buffers, the flag would be updated.\n> We could do it during recovery, then release it as recovery completes.\n>\n> I haven't looked deeply yet into the source code but we maybe can modify the REDO\n> (main redo do-while loop) in StartupXLOG() once the read-only connections are consistent.\n> It would also be beneficial to construct this local hash when DropRefFileNodeBuffers()\n> is called for the first time, so the whole share buffers is scanned initially, then as\n> you mentioned subsequent dropping will be fast. (similar behavior to what the patch does)\n>\n> Do you think this is feasible to be implemented? 
Or should we explore another approach?\n>\n\nI think we should try what Andres is suggesting as that seems like a\npromising idea and can address most of the common problems in this\narea but if you feel otherwise, then do let us know.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Aug 2020 09:07:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 7, 2020 at 3:03 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> >But I understand the sentiment on the added overhead on BufferAlloc.\n> >Regarding the case where the patch would potentially affect workloads\n> >that fit into RAM but not into shared buffers, could one of Andres'\n> >suggested idea/s above address that, in addition to this patch's\n> >possible shared invalidation fix? Could that settle the added overhead\n> >in BufferAlloc() as temporary fix?\n>\n> Not sure.\n>\n> >Thomas Munro is also working on caching relation sizes [1], maybe that\n> >way we could get the latest known relation size. Currently, it's\n> >possible only during recovery in smgrnblocks.\n>\n> It's not clear to me how would knowing the relation size help reducing\n> the overhead of this patch?\n>\n\nAFAICU the idea is to directly call BufTableLookup (similar to how we\ndo in BufferAlloc) to find the buf_id in function\nDropRelFileNodeBuffers and then invalidate the required buffers. And,\nwe need to do this when the size of the relation is less than some\nthreshold. So, I think the crux would be to reliably get the number of\nblocks information. 
So, probably relation size cache stuff might help.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Aug 2020 09:19:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-07-31 15:50:04 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n>\n> > > Wonder if the temporary fix is just to do explicit hashtable probes for\n> > > all pages iff the size of the relation is < s_b / 500 or so. That'll\n> > > address the case where small tables are frequently dropped - and\n> > > dropping large relations is more expensive from the OS and data loading\n> > > perspective, so it's not gonna happen as often.\n> >\n> > Oooh, interesting idea. We'd need a reliable idea of how long the\n> > relation had been (preferably without adding an lseek call), but maybe\n> > that's do-able.\n>\n> IIRC we already do smgrnblocks nearby, when doing the truncation (to\n> figure out which segments we need to remove). Perhaps we can arrange to\n> combine the two? The layering probably makes that somewhat ugly :(\n>\n> We could also just use pg_class.relpages. It'll probably mostly be\n> accurate enough?\n>\n\nDon't we need the accurate 'number of blocks' if we want to invalidate\nall the buffers? 
Basically, I think we need to perform BufTableLookup\nfor all the blocks in the relation and then Invalidate all buffers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Aug 2020 09:22:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <andres@anarazel.de> wrote:\n>> We could also just use pg_class.relpages. It'll probably mostly be\n>> accurate enough?\n\n> Don't we need the accurate 'number of blocks' if we want to invalidate\n> all the buffers? Basically, I think we need to perform BufTableLookup\n> for all the blocks in the relation and then Invalidate all buffers.\n\nYeah, there is no room for \"good enough\" here. If a dirty buffer remains\nin the system, the checkpointer will eventually try to flush it, and fail\n(because there's no file to write it to), and then checkpointing will be\nstuck. So we cannot afford to risk missing any buffers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Aug 2020 00:03:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "\n\nOn 07.08.2020 00:33, Tomas Vondra wrote:\n>\n> Unfortunately Konstantin did not share any details about what workloads\n> he tested, what config etc. 
But I find the \"no regression\" hypothesis\n> rather hard to believe, because we're adding non-trivial amount of code\n> to a place that can be quite hot.\n\nSorry that I have not explained my test scenarios.\nSince Postgres is a pgbench-oriented database :) I have also used pgbench:\nthe read-only case and the skip-some-updates case.\nFor this patch the most critical factor is the number of buffer allocations,\nso I used a small enough database (scale=100), but shared buffers were set\nto 1Gb.\nAs a result, all data is cached in memory (in the file system cache), but\nthere is intensive swapping at the Postgres buffer manager level.\nI have tested it both with a relatively small (100) and a large (1000)\nnumber of clients.\nI repeated these tests on my notebook (quadcore, 16Gb RAM, SSD) and on an IBM\nPower2 server with about 380 virtual cores and about 1Tb of memory.\nIn the last case the results vary very much (I think because of the NUMA\narchitecture), but I failed to find any noticeable regression in the patched\nversion.\n\n\nBut I have to agree that adding a parallel hash (in addition to the existing\nbuffer manager hash) is not so good an idea.\nThis cache really quite frequently becomes a bottleneck.\nMy explanation of why I have not observed any noticeable regression was\nthat this patch uses almost the same lock partitioning schema\nas the existing one, so it adds not so many new conflicts. Maybe in the case of the\nPower2 server, the overhead of NUMA is much higher than other factors\n(although a shared hash is one of the main things suffering from the NUMA\narchitecture).\nBut in principle I agree that having two independent caches may decrease\nspeed up to two times (or even more).\n\nI hope that everybody will agree that this problem is really critical.\nIt is certainly not the most common case that there are hundreds of\nrelations which are frequently truncated. 
But having quadratic complexity\nin the drop function is not acceptable from my point of view.\nAnd it is not only a recovery-specific problem; this is why the solution with\na local cache is not enough.\n\nI do not know a good solution to the problem. Just some thoughts:\n- We can somehow combine the locking used for the main buffer manager cache (by\nrelid/blockno) and the cache for relid. It will eliminate the double locking\noverhead.\n- We can use something like a sorted tree (like std::map) instead of a hash -\nit will allow us to locate blocks both by relid/blockno and by relid only.\n\n\n", "msg_date": "Fri, 7 Aug 2020 10:08:23 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, August 7, 2020 12:38 PM, Amit Kapila wrote:\r\nHi,\r\n> On Thu, Aug 6, 2020 at 6:53 AM k.jamison@fujitsu.com <k.jamison@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Saturday, August 1, 2020 5:24 AM, Andres Freund wrote:\r\n> >\r\n> > Hi,\r\n> > Thank you for your constructive review and comments.\r\n> > Sorry for the late reply.\r\n> >\r\n> > > Hi,\r\n> > >\r\n> > > On 2020-07-31 15:50:04 -0400, Tom Lane wrote:\r\n> > > > Andres Freund <andres@anarazel.de> writes:\r\n> > > > > Indeed. The buffer mapping hashtable already is visible as a\r\n> > > > > major bottleneck in a number of workloads. Even in readonly\r\n> > > > > pgbench if s_b is large enough (so the hashtable is larger than\r\n> > > > > the cache). Not to speak of things like a cached sequential scan\r\n> > > > > with a cheap qual and wide\r\n> > > rows.\r\n> > > >\r\n> > > > To be fair, the added overhead is in buffer allocation not buffer lookup.\r\n> > > > So it shouldn't add cost to fully-cached cases. 
As Tomas noted\r\n> > > > upthread, the potential trouble spot is where the working set is\r\n> > > > bigger than shared buffers but still fits in RAM (so there's no\r\n> > > > actual I/O needed, but we do still have to shuffle buffers a lot).\r\n> > >\r\n> > > Oh, right, not sure what I was thinking.\r\n> > >\r\n> > >\r\n> > > > > Wonder if the temporary fix is just to do explicit hashtable\r\n> > > > > probes for all pages iff the size of the relation is < s_b / 500 or so.\r\n> > > > > That'll address the case where small tables are frequently\r\n> > > > > dropped - and dropping large relations is more expensive from\r\n> > > > > the OS and data loading perspective, so it's not gonna happen as often.\r\n> > > >\r\n> > > > Oooh, interesting idea. We'd need a reliable idea of how long the\r\n> > > > relation had been (preferably without adding an lseek call), but\r\n> > > > maybe that's do-able.\r\n> > >\r\n> > > IIRC we already do smgrnblocks nearby, when doing the truncation (to\r\n> > > figure out which segments we need to remove). Perhaps we can arrange\r\n> > > to combine the two? The layering probably makes that somewhat ugly\r\n> > > :(\r\n> > >\r\n> > > We could also just use pg_class.relpages. It'll probably mostly be\r\n> > > accurate enough?\r\n> > >\r\n> > > Or we could just cache the result of the last smgrnblocks call...\r\n> > >\r\n> > >\r\n> > > One of the cases where this type of strategy is most intersting to\r\n> > > me is the partial truncations that autovacuum does... There we even\r\n> > > know the range of tables ahead of time.\r\n> >\r\n> > Konstantin tested it on various workloads and saw no regression.\r\n> > But I understand the sentiment on the added overhead on BufferAlloc.\r\n> > Regarding the case where the patch would potentially affect workloads\r\n> > that fit into RAM but not into shared buffers, could one of Andres'\r\n> > suggested idea/s above address that, in addition to this patch's\r\n> > possible shared invalidation fix? 
Could that settle the added overhead in\r\n> BufferAlloc() as temporary fix?\r\n> >\r\n> \r\n> Yes, I think so. Because as far as I can understand he is suggesting to do changes\r\n> only in the Truncate/Vacuum code path. Basically, I think you need to change\r\n> DropRelFileNodeBuffers or similar functions.\r\n> There shouldn't be any change in the BufferAlloc or the common code path, so\r\n> there is no question of regression in such cases. I am not sure what you have in\r\n> mind for this but feel free to clarify your understanding before implementation.\r\n>\r\n> > Thomas Munro is also working on caching relation sizes [1], maybe that\r\n> > way we could get the latest known relation size. Currently, it's\r\n> > possible only during recovery in smgrnblocks.\r\n> >\r\n> > Tom Lane wrote:\r\n> > > But aside from that, I noted a number of things I didn't like a bit:\r\n> > >\r\n> > > * The amount of new shared memory this needs seems several orders of\r\n> > > magnitude higher than what I'd call acceptable: according to my\r\n> > > measurements it's over 10KB per shared buffer! Most of that is\r\n> > > going into the CachedBufTableLock data structure, which seems\r\n> > > fundamentally misdesigned --- how could we be needing a lock per map\r\n> > > partition *per buffer*? 
For comparison, the space used by\r\n> > > buf_table.c is about 56 bytes per shared buffer; I think this needs to stay at\r\n> least within hailing distance of there.\r\n> > >\r\n> > > * It is fairly suspicious that the new data structure is manipulated\r\n> > > while holding per-partition locks for the existing buffer hashtable.\r\n> > > At best that seems bad for concurrency, and at worst it could result\r\n> > > in deadlocks, because I doubt we can assume that the new hash table\r\n> > > has partition boundaries identical to the old one.\r\n> > >\r\n> > > * More generally, it seems like really poor design that this has\r\n> > > been written completely independently of the existing buffer hash table.\r\n> > > Can't we get any benefit by merging them somehow?\r\n> >\r\n> > The original aim is to just shorten the recovery process, and\r\n> > eventually the speedup on both vacuum and truncate process are just added\r\n> bonus.\r\n> > Given that we don't have a shared invalidation mechanism in place yet\r\n> > like radix tree buffer mapping which is complex, I thought a patch\r\n> > like mine could be an alternative approach to that. So I want to improve the\r\n> patch further.\r\n> > I hope you can help me clarify the direction, so that I can avoid\r\n> > going farther away from what the community wants.\r\n> > 1. Both normal operations and recovery process 2. 
Improve recovery\r\n> > process only\r\n> >\r\n> \r\n> I feel Andres's suggestion will help in both cases.\r\n> \r\n> > > I wonder if you have considered case of local hash (maintained only during\r\n> recovery)?\r\n> > > If there is after-crash recovery, then there will be no concurrent\r\n> > > access to shared buffers and this hash will be up-to-date.\r\n> > > in case of hot-standby replica we can use some simple invalidation\r\n> > > (just one flag or counter which indicates that buffer cache was updated).\r\n> > > This hash also can be constructed on demand when\r\n> > > DropRelFileNodeBuffers is called first time (so w have to scan all\r\n> > > buffers once, but subsequent drop operation will be fast.\r\n> >\r\n> > I'm examining this, but I am not sure if I got the correct\r\n> > understanding. Please correct me if I'm wrong.\r\n> > I think above is a suggestion wherein the postgres startup process\r\n> > uses local hash table to keep track of the buffers of relations. Since\r\n> > there may be other read-only sessions which read from disk, evict cached\r\n> blocks, and modify the shared_buffers, the flag would be updated.\r\n> > We could do it during recovery, then release it as recovery completes.\r\n> >\r\n> > I haven't looked deeply yet into the source code but we maybe can\r\n> > modify the REDO (main redo do-while loop) in StartupXLOG() once the\r\n> read-only connections are consistent.\r\n> > It would also be beneficial to construct this local hash when\r\n> > DropRefFileNodeBuffers() is called for the first time, so the whole\r\n> > share buffers is scanned initially, then as you mentioned subsequent\r\n> > dropping will be fast. (similar behavior to what the patch does)\r\n> >\r\n> > Do you think this is feasible to be implemented? 
Or should we explore another\r\n> approach?\r\n> >\r\n> \r\n> I think we should try what Andres is suggesting as that seems like a promising\r\n> idea and can address most of the common problems in this area but if you feel\r\n> otherwise, then do let us know.\r\n> \r\n> --\r\n> With Regards,\r\n> Amit Kapila.\r\n\r\nHi, thank you for the review.\r\nI just wanted to confirm so that I can hopefully cover it in the patch revision.\r\nBasically, we don't want the added overhead in BufferAlloc(), so I'll just make\r\na way to get both the last known relation size and nblocks, and modify the\r\noperations for dropping of relation buffers, based on the comments\r\nand suggestions of the reviewers. Hopefully I can also provide performance\r\ntest results by the next CF.\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Fri, 7 Aug 2020 08:44:10 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 07, 2020 at 10:08:23AM +0300, Konstantin Knizhnik wrote:\n>\n>\n>On 07.08.2020 00:33, Tomas Vondra wrote:\n>>\n>>Unfortunately Konstantin did not share any details about what workloads\n>>he tested, what config etc. 
But I find the \"no regression\" hypothesis\n>>rather hard to believe, because we're adding non-trivial amount of code\n>>to a place that can be quite hot.\n>\n>Sorry, that I have not explained my test scenarios.\n>As far as Postgres is pgbench-oriented database:) I have also used pgbench:\n>read-only case and skip-some updates.\n>For this patch most critical is number of buffer allocations,\n>so I used small enough database (scale=100), but shared buffer was set \n>to 1Gb.\n>As a result, all data is cached in memory (in file system cache), but \n>there is intensive swapping at Postgres buffer manager level.\n>I have tested it both with relatively small (100) and large (1000) \n>number of clients.\n>\n>I repeated this tests at my notebook (quadcore, 16Gb RAM, SSD) and IBM \n>Power2 server with about 380 virtual cores and about 1Tb of memory.\n>In the last case results vary very much (I think because of NUMA \n>architecture) but I failed to find some noticeable regression of \n>patched version.\n>\n\nIMO using such high numbers of clients is pointless - it's perfectly\nfine to test just a single client, and the 'basic overhead' should be\nvisible. It might have some impact on concurrency, but I think that's\njust a secondary effect. In fact, I wouldn't be surprised if\nhigh client counts made it harder to observe the overhead, due to\nconcurrency problems (I doubt you have a laptop with this many cores).\n\nAnother thing you might try doing is using taskset to attach processes\nto particular CPU cores, and also make sure there's no undesirable\ninfluence from CPU power management etc. 
Laptops are very problematic in\nthis regard, but even servers can have that enabled in BIOS.\n\n>\n>But I have to agree that adding parallel hash (in addition to existed \n>buffer manager hash) is not so good idea.\n>This cache really quite frequently becomes bottleneck.\n>My explanation of why I have not observed some noticeable regression \n>was that this patch uses almost the same lock partitioning schema\n>as already used it adds not so much new conflicts. May be in case of \n>Power2 server, overhead of NUMA is much higher than other factors\n>(although shared hash is one of the main thing suffering from NUMA \n>architecture).\n>But in principle I agree that having two independent caches may \n>decrease speed up to two times (or even more).\n>\n>I hope that everybody will agree that this problem is really critical. \n>It is certainly not the most common case when there are hundreds of \n>relation which are frequently truncated. But having quadratic \n>complexity in drop function is not acceptable from my point of view.\n>And it is not only recovery-specific problem, this is why solution \n>with local cache is not enough.\n>\n\nWell, ultimately it's a balancing act - we need to consider the risk of\nregressions vs. how common the improved scenario is. I've seen multiple\napplications that e.g. drop many relations (after all, that's why I\noptimized that in 9.3) so it's not an entirely bogus case.\n\n>I do not know good solution of the problem. Just some thoughts.\n>- We can somehow combine locking used for main buffer manager cache \n>(by relid/blockno) and cache for relid. It will eliminates double \n>locking overhead.\n>- We can use something like sorted tree (like std::map) instead of \n>hash - it will allow to locate blocks both by relid/blockno and by \n>relid only.\n>\n\nI don't know. I think the ultimate problem here is that we're adding\ncode to a fairly hot codepath - it does not matter if it's hash, list,\nstd::map or something else I think. 
All of that has overhead.\n\nThat's the beauty of Andres' proposal to just loop over the blocks of\nthe relation and evict them one by one - that adds absolutely nothing to\nBufferAlloc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 7 Aug 2020 14:20:27 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 7, 2020 at 12:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, there is no room for \"good enough\" here. If a dirty buffer remains\n> in the system, the checkpointer will eventually try to flush it, and fail\n> (because there's no file to write it to), and then checkpointing will be\n> stuck. So we cannot afford to risk missing any buffers.\n\nThis comment suggests another possible approach to the problem, which\nis to just make a note someplace in shared memory when we drop a\nrelation. If we later find any of its buffers, we drop them without\nwriting them out. This is not altogether simple, because (1) we don't\nhave infinite room in shared memory to accumulate such notes and (2)\nit's not impossible for the OID counter to wrap around and permit the\ncreation of a new relation with the same OID, which would be a problem\nif the previous note is still around.\n\nBut this might be solvable. Suppose we create a shared hash table\nkeyed by <dboid, reloid> with room for 1 entry per 1000 shared\nbuffers. When you drop a relation, you insert into the hash table.\nPeriodically you \"clean\" the hash table by marking all the entries,\nscanning shared buffers to remove any matches, and then deleting all\nthe marked entries. 
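The note-then-clean cycle just described can be sketched as a single-process toy. It is an illustration only (and simplified: it clears the whole set at once rather than marking individual entries); the real thing would live in shared memory with locking and OID-wraparound handling, and every name and size here (`drop_relation`, `MAX_DROPPED`, the tiny arrays) is an assumption of the sketch, not proposed code:

```c
#include <stdint.h>
#include <stdbool.h>

#define NBUFFERS    16          /* toy shared_buffers */
#define MAX_DROPPED  4          /* ~ 1 entry per 1000 buffers in the proposal */

typedef struct
{
    uint32_t db;
    uint32_t rel;
    bool     valid;
} Buffer;

static Buffer buffers[NBUFFERS];

/* The "notes": relations dropped since the last clean. */
static uint32_t dropped[MAX_DROPPED][2];
static int n_dropped = 0;       /* doubles as the cheap fast-path flag */

static bool
is_dropped(uint32_t db, uint32_t rel)
{
    for (int i = 0; i < n_dropped; i++)
        if (dropped[i][0] == db && dropped[i][1] == rel)
            return true;
    return false;
}

/*
 * One pass over all buffers, discarding any that belong to a dropped
 * relation, then forget the whole set.  Cost is O(NBUFFERS) no matter
 * how many relations accumulated - that is the amortization.
 */
void
clean_dropped_set(void)
{
    for (int i = 0; i < NBUFFERS; i++)
        if (buffers[i].valid && is_dropped(buffers[i].db, buffers[i].rel))
            buffers[i].valid = false;
    n_dropped = 0;
}

/* Dropping a relation is normally just note-taking; the full scan only
 * happens synchronously when the set is full. */
void
drop_relation(uint32_t db, uint32_t rel)
{
    if (n_dropped == MAX_DROPPED)
        clean_dropped_set();
    dropped[n_dropped][0] = db;
    dropped[n_dropped][1] = rel;
    n_dropped++;
}

/* At eviction: discard the page without writing it if its relation was
 * dropped; when the set is empty this is a single comparison. */
bool
must_discard_on_evict(const Buffer *buf)
{
    return n_dropped > 0 && is_dropped(buf->db, buf->rel);
}
```

Most drops are O(1) note-taking; the O(NBUFFERS) scan in `clean_dropped_set` runs at most once per `MAX_DROPPED` drops, so the amortized drop cost is proportional to the number of relations dropped, not to the number of buffers.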
This should be done periodically in the\nbackground, but if you try to drop a relation and find the hash table\nfull, or you try to create a relation and find the OID of your new\nrelation in the hash table, then you have to clean synchronously.\n\nRight now, the cost of dropping K relations with N shared buffers is\nO(KN). But with this approach, you only have to incur the O(N)\noverhead of scanning shared_buffers when the hash table fills up, and\nthe hash table size is proportional to N, so the amortized complexity\nis O(K); that is, dropping relations takes time proportional to the\nnumber of relations being dropped, but NOT proportional to the size of\nshared_buffers, because as shared_buffers grows the hash table gets\nproportionally bigger, so that scans don't need to be done as\nfrequently.\n\nAndres's approach (retail hash table lookups just for blocks less than\nthe relation size, rather than a full scan) is going to help most with\nsmall relations, whereas this approach helps with relations of any\nsize, but if you're trying to drop a lot of relations, they're\nprobably small, and if they are large, scanning shared buffers may not\nbe the dominant cost, since unlinking the files also takes time. Also,\nthis approach might turn out to slow down buffer eviction too much.\nThat could maybe be mitigated by having some kind of cheap fast-path\nthat gets used when the hash table is empty (like an atomic flag that\nindicates whether a hash table probe is needed), and then trying hard\nto keep it empty most of the time (e.g. by aggressive background\ncleaning, or by ruling that after some number of hash table lookups\nthe next process to evict a buffer is forced to perform a cleanup).\nBut you'd probably have to try it to see how well you can do.\n\nIt's also possible to combine the two approaches. 
Small relations\ncould use Andres's approach while larger ones could use this approach;\nor you could insert both large and small relations into this hash\ntable but use different strategies for cleaning out shared_buffers\ndepending on the relation size (which could also be preserved in the\nhash table).\n\nJust brainstorming here...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 10:39:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 7, 2020 at 12:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, there is no room for \"good enough\" here. If a dirty buffer remains\n>> in the system, the checkpointer will eventually try to flush it, and fail\n>> (because there's no file to write it to), and then checkpointing will be\n>> stuck. So we cannot afford to risk missing any buffers.\n\n> This comment suggests another possible approach to the problem, which\n> is to just make a note someplace in shared memory when we drop a\n> relation. If we later find any of its buffers, we drop them without\n> writing them out. This is not altogether simple, because (1) we don't\n> have infinite room in shared memory to accumulate such notes and (2)\n> it's not impossible for the OID counter to wrap around and permit the\n> creation of a new relation with the same OID, which would be a problem\n> if the previous note is still around.\n\nInteresting idea indeed.\n\nAs for (1), maybe we don't need to keep the info in shmem. I'll just\npoint out that the checkpointer has *already got* a complete list of all\nrecently-dropped relations, cf pendingUnlinks in sync.c. So you could\nimagine looking aside at that to discover that a dirty buffer belongs to a\nrecently-dropped relation. 
pendingUnlinks would need to be converted to a\nhashtable to make searches cheap, and it's not very clear what to do in\nbackends that haven't got access to that table, but maybe we could just\naccept that backends that are forced to flush dirty buffers might do some\nuseless writes in such cases.\n\nAs for (2), the reason why we have that list is that the physical unlink\ndoesn't happen till after the next checkpoint. So with some messing\naround here, we could probably guarantee that every buffer belonging\nto the relation has been scanned and deleted before the file unlink\nhappens --- and then, even if the OID counter has wrapped around, the\nOID won't be reassigned to a new relation before that happens.\n\nIn short, it seems like maybe we could shove the responsibility for\ncleaning up dropped relations' buffers onto the checkpointer without\ntoo much added cost. A possible problem with this is that recycling\nof those buffers will happen much more slowly than it does today,\nbut maybe that's okay?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Aug 2020 12:09:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 7, 2020 at 12:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As for (1), maybe we don't need to keep the info in shmem. I'll just\n> point out that the checkpointer has *already got* a complete list of all\n> recently-dropped relations, cf pendingUnlinks in sync.c. So you could\n> imagine looking aside at that to discover that a dirty buffer belongs to a\n> recently-dropped relation. 
pendingUnlinks would need to be converted to a\n> hashtable to make searches cheap, and it's not very clear what to do in\n> backends that haven't got access to that table, but maybe we could just\n> accept that backends that are forced to flush dirty buffers might do some\n> useless writes in such cases.\n\nI don't see how that can work. It's not that the writes are useless;\nit's that they will fail outright because the file doesn't exist.\n\n> As for (2), the reason why we have that list is that the physical unlink\n> doesn't happen till after the next checkpoint. So with some messing\n> around here, we could probably guarantee that every buffer belonging\n> to the relation has been scanned and deleted before the file unlink\n> happens --- and then, even if the OID counter has wrapped around, the\n> OID won't be reassigned to a new relation before that happens.\n\nThis seems right to me, though.\n\n> In short, it seems like maybe we could shove the responsibility for\n> cleaning up dropped relations' buffers onto the checkpointer without\n> too much added cost. A possible problem with this is that recycling\n> of those buffers will happen much more slowly than it does today,\n> but maybe that's okay?\n\nI suspect it's going to be advantageous to try to make cleaning up\ndropped buffers quick in normal cases and allow it to fall behind only\nwhen someone is dropping a lot of relations in quick succession, so\nthat buffer eviction remains cheap in normal cases. I hadn't thought\nabout the possible negative performance consequences of failing to put\nbuffers on the free list, but that's another reason to try to make it\nfast.\n\nMy viewpoint on this is - I have yet to see anybody really get hosed\nbecause they drop one relation and that causes a full scan of\nshared_buffers. I mean, it's slightly expensive, but computers are\nfast. Whatever. 
What hoses people is dropping a lot of relations in\nquick succession, either by spamming DROP TABLE commands or by running\nsomething like DROP SCHEMA, and then suddenly they're scanning\nshared_buffers over and over again, and their standby is doing the\nsame thing, and now it hurts. The problem on the standby is actually\nworse than the problem on the primary, because the primary can do\nother things while one process sits there and thinks about\nshared_buffers for a long time, but the standby can't, because the\nstartup process is all there is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 12:26:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 7, 2020 at 12:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... it's not very clear what to do in\n>> backends that haven't got access to that table, but maybe we could just\n>> accept that backends that are forced to flush dirty buffers might do some\n>> useless writes in such cases.\n\n> I don't see how that can work. It's not that the writes are useless;\n> it's that they will fail outright because the file doesn't exist.\n\nAt least in the case of segment zero, the file will still exist. It'll\nhave been truncated to zero length, and if the filesystem is stupid about\nholes in files then maybe a write to a high block number would consume\nexcessive disk space, but does anyone still care about such filesystems?\nI don't remember at the moment how we handle higher segments, but likely\nwe could make them still exist too, postponing all the unlinks till after\ncheckpoint. 
Or we could just have the backends give up on recycling a\nparticular buffer if they can't write it (which is the response to an I/O\nfailure already, I hope).\n\n> My viewpoint on this is - I have yet to see anybody really get hosed\n> because they drop one relation and that causes a full scan of\n> shared_buffers. I mean, it's slightly expensive, but computers are\n> fast. Whatever. What hoses people is dropping a lot of relations in\n> quick succession, either by spamming DROP TABLE commands or by running\n> something like DROP SCHEMA, and then suddenly they're scanning\n> shared_buffers over and over again, and their standby is doing the\n> same thing, and now it hurts.\n\nYeah, trying to amortize the cost across multiple drops seems like\nwhat we really want here. I'm starting to think about a \"relation\ndropper\" background process, which would be somewhat like the checkpointer\nbut it wouldn't have any interest in actually doing buffer I/O.\nWe'd send relation drop commands to it, and it would scan all of shared\nbuffers and flush related buffers, and then finally do the file truncates\nor unlinks. Amortization would happen by considering multiple target\nrelations during any one scan over shared buffers. I'm not very clear\non how this would relate to the checkpointer's handling of relation\ndrops, but it could be worked out; if we were lucky maybe the checkpointer\ncould stop worrying about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Aug 2020 12:52:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 7, 2020 at 12:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> At least in the case of segment zero, the file will still exist. 
It'll\n> have been truncated to zero length, and if the filesystem is stupid about\n> holes in files then maybe a write to a high block number would consume\n> excessive disk space, but does anyone still care about such filesystems?\n> I don't remember at the moment how we handle higher segments, but likely\n> we could make them still exist too, postponing all the unlinks till after\n> checkpoint. Or we could just have the backends give up on recycling a\n> particular buffer if they can't write it (which is the response to an I/O\n> failure already, I hope).\n\nNone of this sounds very appealing. Postponing the unlinks means\npostponing recovery of the space at the OS level, which I think will\nbe noticeable and undesirable for users. The other notions all seem to\ninvolve treating as valid on-disk states we currently treat as\ninvalid, and our sanity checks in this area are already far too weak.\nAnd all you're buying for it is putting a hash table that would\notherwise be shared memory into backend-private memory, which seems\nlike quite a minor gain. Having that information visible to everybody\nseems a lot cleaner.\n\n> Yeah, trying to amortize the cost across multiple drops seems like\n> what we really want here. I'm starting to think about a \"relation\n> dropper\" background process, which would be somewhat like the checkpointer\n> but it wouldn't have any interest in actually doing buffer I/O.\n> We'd send relation drop commands to it, and it would scan all of shared\n> buffers and flush related buffers, and then finally do the file truncates\n> or unlinks. Amortization would happen by considering multiple target\n> relations during any one scan over shared buffers. I'm not very clear\n> on how this would relate to the checkpointer's handling of relation\n> drops, but it could be worked out; if we were lucky maybe the checkpointer\n> could stop worrying about that.\n\nI considered that, too, but it might be overkill. 
I think that one\nscan of shared_buffers every now and then might be cheap enough that\nwe could just not worry too much about which process gets stuck doing\nit. So for example if the number of buffers allocated since the hash\ntable ended up non-empty reaches NBuffers, the process wanting to do\nthe next eviction gets handed the job of cleaning it out. Or maybe the\nbackground writer could help; it's not like it does much anyway, zing.\nIt's possible that a dedicated process is the right solution, but we\nmight want to at least poke a bit at other alternatives.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 13:33:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 7, 2020 at 9:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <andres@anarazel.de> wrote:\n> >> We could also just use pg_class.relpages. It'll probably mostly be\n> >> accurate enough?\n>\n> > Don't we need the accurate 'number of blocks' if we want to invalidate\n> > all the buffers? Basically, I think we need to perform BufTableLookup\n> > for all the blocks in the relation and then Invalidate all buffers.\n>\n> Yeah, there is no room for \"good enough\" here. If a dirty buffer remains\n> in the system, the checkpointer will eventually try to flush it, and fail\n> (because there's no file to write it to), and then checkpointing will be\n> stuck. So we cannot afford to risk missing any buffers.\n>\n\nRight, this reminds me of the discussion we had last time on this\ntopic where we decided that we can't even rely on using smgrnblocks to\nfind the exact number of blocks because lseek might lie about the EOF\nposition [1]. 
So, we anyway need some mechanism to push the\ninformation related to the \"to be truncated or dropped relations\" to\nthe background worker (checkpointer and or others) to avoid flush\nissues. But, maybe it is better to push the responsibility of\ninvalidating the buffers for truncated/dropped relation to the\nbackground process. However, I feel for some cases where relation size\nis greater than the number of shared buffers there might not be much\nbenefit in pushing this operation to background unless there are\nalready a few other relation entries (for dropped relations) so that\ncost of scanning the buffers can be amortized.\n\n[1] - https://www.postgresql.org/message-id/16664.1435414204%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 8 Aug 2020 15:22:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 7, 2020 at 11:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Aug 7, 2020 at 12:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > At least in the case of segment zero, the file will still exist. It'll\n> > have been truncated to zero length, and if the filesystem is stupid about\n> > holes in files then maybe a write to a high block number would consume\n> > excessive disk space, but does anyone still care about such filesystems?\n> > I don't remember at the moment how we handle higher segments,\n> >\n\nWe do unlink them and register the request to forget the Fsync\nrequests for those. See mdunlinkfork.\n\n> > but likely\n> > we could make them still exist too, postponing all the unlinks till after\n> > checkpoint. Or we could just have the backends give up on recycling a\n> > particular buffer if they can't write it (which is the response to an I/O\n> > failure already, I hope).\n> >\n\nNote that we don't often try to flush the buffers from the backend. 
We\nfirst try to forward the request to checkpoint queue and only if the\nqueue is full, the backend tries to flush it, so even if we decide to\ngive up flushing such a buffer (where we get an error) via backend, it\nshouldn't impact very many cases. I am not sure but if we can somehow\nreliably distinguish this type of error from any other I/O failure\nthen we can probably give up on flushing this buffer and continue or\nmaybe just retry to push this request to checkpointer.\n\n>\n> None of this sounds very appealing. Postponing the unlinks means\n> postponing recovery of the space at the OS level, which I think will\n> be noticeable and undesirable for users. The other notions all seem to\n> involve treating as valid on-disk states we currently treat as\n> invalid, and our sanity checks in this area are already far too weak.\n> And all you're buying for it is putting a hash table that would\n> otherwise be shared memory into backend-private memory, which seems\n> like quite a minor gain. Having that information visible to everybody\n> seems a lot cleaner.\n>\n\nThe one more benefit of giving this responsibility to a single process\nlike checkpointer is that we can avoid unlinking the relation until we\nscan all the buffers corresponding to it. 
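To make that ordering concrete, here is a toy, self-contained sketch (not PostgreSQL code; all names and sizes are invented for illustration) of a single drop-handling process that invalidates first and unlinks last:

```c
/*
 * Toy model of "scan the buffers first, unlink the file last".
 * Everything here (ToyBuffer, toy_process_drop, the array sizes) is
 * made up; the real backend machinery is far more involved.
 */
#include <assert.h>
#include <stdbool.h>

#define TOY_NBUFFERS 8

typedef struct ToyBuffer
{
    int  relnode;               /* owning relation; 0 = empty slot */
    bool dirty;
} ToyBuffer;

static ToyBuffer toy_buffers[TOY_NBUFFERS];
static bool toy_unlinked[16];

static int
toy_count_buffers(int relnode)
{
    int n = 0;

    for (int i = 0; i < TOY_NBUFFERS; i++)
        if (toy_buffers[i].relnode == relnode)
            n++;
    return n;
}

static void
toy_process_drop(int relnode)
{
    /* Pass 1: invalidate every buffer of the relation, dirty or not. */
    for (int i = 0; i < TOY_NBUFFERS; i++)
    {
        if (toy_buffers[i].relnode == relnode)
        {
            toy_buffers[i].relnode = 0;
            toy_buffers[i].dirty = false;
        }
    }

    /*
     * Pass 2: only now remove the file, so nothing can later try to
     * flush a surviving buffer into a file that no longer exists.
     */
    toy_unlinked[relnode] = true;
}
```

The only point of the sketch is the ordering of the two passes, not the data structures.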
Now, surely keeping it in\nshared memory and allowing other processes to work on it has other merits,\nwhich are that such buffers might get invalidated faster, but I am not sure\nwe can retain the benefit of another approach, which is to perform all\nsuch invalidation of buffers before unlinking the relation's first\nsegment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 8 Aug 2020 16:02:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Aug 7, 2020 at 9:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <andres@anarazel.de> wrote:\n> >> We could also just use pg_class.relpages. It'll probably mostly be\n> >> accurate enough?\n>\n> > Don't we need the accurate 'number of blocks' if we want to invalidate\n> > all the buffers? Basically, I think we need to perform BufTableLookup\n> > for all the blocks in the relation and then Invalidate all buffers.\n>\n> Yeah, there is no room for \"good enough\" here. If a dirty buffer remains\n> in the system, the checkpointer will eventually try to flush it, and fail\n> (because there's no file to write it to), and then checkpointing will be\n> stuck. So we cannot afford to risk missing any buffers.\n>\n\nToday, again thinking about this point it occurred to me that during\nrecovery we can reliably find the relation size and after Thomas's\nrecent commit c5315f4f44 (Cache smgrnblocks() results in recovery), we\nmight not need to even incur the cost of lseek. 
Why don't we fix this\nfirst for 'recovery' (by following something on the lines of what\nAndres suggested) and then later once we have a generic mechanism for\n\"caching the relation size\" [1], we can do it for non-recovery paths.\nI think that will at least address the reported use case with some\nminimal changes.\n\n[1] - https://www.postgresql.org/message-id/CAEepm%3D3SSw-Ty1DFcK%3D1rU-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Aug 2020 11:34:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, August 18, 2020 3:05 PM (GMT+9), Amit Kapila wrote: \r\n> On Fri, Aug 7, 2020 at 9:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> >\r\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\r\n> > > On Sat, Aug 1, 2020 at 1:53 AM Andres Freund <andres@anarazel.de>\r\n> wrote:\r\n> > >> We could also just use pg_class.relpages. It'll probably mostly be\r\n> > >> accurate enough?\r\n> >\r\n> > > Don't we need the accurate 'number of blocks' if we want to\r\n> > > invalidate all the buffers? Basically, I think we need to perform\r\n> > > BufTableLookup for all the blocks in the relation and then Invalidate all\r\n> buffers.\r\n> >\r\n> > Yeah, there is no room for \"good enough\" here. If a dirty buffer\r\n> > remains in the system, the checkpointer will eventually try to flush\r\n> > it, and fail (because there's no file to write it to), and then\r\n> > checkpointing will be stuck. So we cannot afford to risk missing any\r\n> buffers.\r\n> >\r\n> \r\n> Today, again thinking about this point it occurred to me that during recovery\r\n> we can reliably find the relation size and after Thomas's recent commit\r\n> c5315f4f44 (Cache smgrnblocks() results in recovery), we might not need to\r\n> even incur the cost of lseek. 
Why don't we fix this first for 'recovery' (by\r\n> following something on the lines of what Andres suggested) and then later\r\n> once we have a generic mechanism for \"caching the relation size\" [1], we can\r\n> do it for non-recovery paths.\r\n> I think that will at least address the reported use case with some minimal\r\n> changes.\r\n> \r\n> [1] -\r\n> https://www.postgresql.org/message-id/CAEepm%3D3SSw-Ty1DFcK%3D1r\r\n> U-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com\r\n> \r\n\r\nAttached is an updated V9 version with minimal code changes only and\r\navoids the previous overhead in the BufferAlloc. This time, I only updated\r\nthe recovery path as suggested by Amit, and followed Andres' suggestion\r\nof referring to the cached blocks in smgrnblocks.\r\nThe layering is kinda tricky so the logic may be wrong. But as of now,\r\nit passes the regression tests. I'll follow up with the performance results.\r\nIt seems there's regression for smaller shared_buffers. I'll update if I find bugs.\r\nBut I'd also appreciate your reviews in case I missed something.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Tue, 1 Sep 2020 13:02:28 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hello.\n\nAt Tue, 1 Sep 2020 13:02:28 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in \n> On Tuesday, August 18, 2020 3:05 PM (GMT+9), Amit Kapila wrote: \n> > Today, again thinking about this point it occurred to me that during recovery\n> > we can reliably find the relation size and after Thomas's recent commit\n> > c5315f4f44 (Cache smgrnblocks() results in recovery), we might not need to\n> > even incur the cost of lseek. 
Why don't we fix this first for 'recovery' (by\n> > following something on the lines of what Andres suggested) and then later\n> > once we have a generic mechanism for \"caching the relation size\" [1], we can\n> > do it for non-recovery paths.\n> > I think that will at least address the reported use case with some minimal\n> > changes.\n> > \n> > [1] -\n> > https://www.postgresql.org/message-id/CAEepm%3D3SSw-Ty1DFcK%3D1r\n> > U-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com\n\nIsn't a relation always locked access-exclusively, at truncation\ntime? If so, isn't even the result of lseek reliable enough? And if\nwe don't care about the cost of lseek, we can do the same optimization also\nfor non-recovery paths. Since we perform the actual file-truncation\njust after anyway, I think the cost of lseek is negligible here.\n\n> Attached is an updated V9 version with minimal code changes only and\n> avoids the previous overhead in the BufferAlloc. This time, I only updated\n> the recovery path as suggested by Amit, and followed Andres' suggestion\n> of referring to the cached blocks in smgrnblocks.\n> The layering is kinda tricky so the logic may be wrong. But as of now,\n> it passes the regression tests. I'll follow up with the performance results.\n> It seems there's regression for smaller shared_buffers. I'll update if I find bugs.\n> But I'd also appreciate your reviews in case I missed something.\n\nBUF_DROP_THRESHOLD seems to be misused. IIUC it defines the maximum\nnumber of file pages for which we make a relation-targeted search for\nbuffers. Otherwise we scan through all buffers. 
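To spell that out, the decision could be sketched as follows (purely illustrative, not the patch itself; "nbuffers" stands in for NBuffers and the divisor is just an example value):

```c
/*
 * Illustrative sketch of the threshold decision: do per-block hash
 * lookups only when the number of pages to drop is small relative to
 * the number of shared buffers, otherwise fall back to one full scan.
 */
#include <stdbool.h>

static bool
use_targeted_drop(unsigned nblocks_to_drop, unsigned nbuffers)
{
    /*
     * Computed at run time: NBuffers derives from shared_buffers and
     * is not a compile-time constant.
     */
    unsigned threshold = nbuffers / 500;

    return nblocks_to_drop < threshold;
}
```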
On the other hand the\nlatest patch just leaves all buffers for relation forks longer than\nthe threshold.\n\nI think we should determine whether to do targetted-scan or full-scan\nbased on the ratio of (expectedly maximum) total number of pages for\nall (specified) forks in a relation against total number of buffers.\n\nBy the way\n\n> #define BUF_DROP_THRESHOLD\t\t500\t/* NBuffers divided by 2 */\n\nNBuffers is not a constant. Even if we wanted to set the macro as\ndescribed in the comment, we should have used (NBuffers/2) instead of\n\"500\". But I suppose you might wanted to use (NBuffders / 500) as Tom\nsuggested upthread. And the name of the macro seems too generic. I\nthink more specific names like BUF_DROP_FULLSCAN_THRESHOLD would be\nbetter.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n \n\n\n", "msg_date": "Wed, 02 Sep 2020 10:31:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "I'd like make a subtle correction.\n\nAt Wed, 02 Sep 2020 10:31:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> By the way\n> \n> > #define BUF_DROP_THRESHOLD\t\t500\t/* NBuffers divided by 2 */\n> \n> NBuffers is not a constant. Even if we wanted to set the macro as\n> described in the comment, we should have used (NBuffers/2) instead of\n> \"500\". But I suppose you might wanted to use (NBuffders / 500) as Tom\n> suggested upthread. And the name of the macro seems too generic. I\n\nWho made the suggestion is Andres, not Tom. 
Sorry for the mistake.\n\n> think more specific names like BUF_DROP_FULLSCAN_THRESHOLD would be\n> better.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 02 Sep 2020 10:36:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 2, 2020 at 7:01 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> At Tue, 1 Sep 2020 13:02:28 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in\n> > On Tuesday, August 18, 2020 3:05 PM (GMT+9), Amit Kapila wrote:\n> > > Today, again thinking about this point it occurred to me that during recovery\n> > > we can reliably find the relation size and after Thomas's recent commit\n> > > c5315f4f44 (Cache smgrnblocks() results in recovery), we might not need to\n> > > even incur the cost of lseek. Why don't we fix this first for 'recovery' (by\n> > > following something on the lines of what Andres suggested) and then later\n> > > once we have a generic mechanism for \"caching the relation size\" [1], we can\n> > > do it for non-recovery paths.\n> > > I think that will at least address the reported use case with some minimal\n> > > changes.\n> > >\n> > > [1] -\n> > > https://www.postgresql.org/message-id/CAEepm%3D3SSw-Ty1DFcK%3D1r\n> > > U-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com\n>\n> Isn't a relation always locked asscess-exclusively, at truncation\n> time? If so, isn't even the result of lseek reliable enough?\n>\n\nEven if the relation is locked, background processes like checkpointer\ncan still touch the relation which might cause problems. Consider a\ncase where we extend the relation but didn't flush the newly added\npages. Now during truncate operation, checkpointer can still flush\nthose pages which can cause trouble for truncate. 
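As a toy illustration of the hazard (a made-up model, not backend code): if the size used to drive invalidation under-reports the blocks that actually live in shared buffers, the tail blocks are simply never visited:

```c
/*
 * Toy model: blocks present in buffers beyond the reported size
 * escape a size-driven invalidation loop. All names are invented.
 */
#include <assert.h>

typedef struct ToyRel
{
    unsigned reported_nblocks;  /* what an lseek-style probe returns */
    unsigned buffered_nblocks;  /* blocks actually present in buffers */
} ToyRel;

static void
toy_extend_unflushed(ToyRel *rel, unsigned newblocks)
{
    /* the buffers exist, but the reported size has not caught up */
    rel->buffered_nblocks += newblocks;
}

static unsigned
toy_missed_blocks(const ToyRel *rel)
{
    /* blocks a loop bounded by reported_nblocks would never visit */
    return rel->buffered_nblocks - rel->reported_nblocks;
}
```

Any such missed buffer is exactly the kind that later trips up the checkpointer.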
But, I think in the\nrecovery path such cases won't cause a problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Sep 2020 08:18:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Even if the relation is locked, background processes like checkpointer\n> can still touch the relation which might cause problems. Consider a\n> case where we extend the relation but didn't flush the newly added\n> pages. Now during truncate operation, checkpointer can still flush\n> those pages which can cause trouble for truncate. But, I think in the\n> recovery path such cases won't cause a problem.\n\nI wouldn't count on that staying true ...\n\nhttps://www.postgresql.org/message-id/CA+hUKGJ8NRsqgkZEnsnRc2MFROBV-jCnacbYvtpptK2A9YYp9Q@mail.gmail.com\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Sep 2020 23:47:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, September 2, 2020 10:31 AM, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Tue, 1 Sep 2020 13:02:28 +0000, \"k.jamison@fujitsu.com\"\n> <k.jamison@fujitsu.com> wrote in\n> > On Tuesday, August 18, 2020 3:05 PM (GMT+9), Amit Kapila wrote:\n> > > Today, again thinking about this point it occurred to me that during\n> > > recovery we can reliably find the relation size and after Thomas's\n> > > recent commit\n> > > c5315f4f44 (Cache smgrnblocks() results in recovery), we might not\n> > > need to even incur the cost of lseek. 
Why don't we fix this first\n> > > for 'recovery' (by following something on the lines of what Andres\n> > > suggested) and then later once we have a generic mechanism for\n> > > \"caching the relation size\" [1], we can do it for non-recovery paths.\n> > > I think that will at least address the reported use case with some\n> > > minimal changes.\n> > >\n> > > [1] -\n> > >\n> https://www.postgresql.org/message-id/CAEepm%3D3SSw-Ty1DFcK%3D1r\n> > > U-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com\n> \n> Isn't a relation always locked asscess-exclusively, at truncation time? If so,\n> isn't even the result of lseek reliable enough? And if we don't care the cost of\n> lseek, we can do the same optimization also for non-recovery paths. Since\n> anyway we perform actual file-truncation just after so I think the cost of lseek\n> is negligible here.\n\nThe reason for that is when I read the comment in smgrnblocks in smgr.c\nI thought that smgrnblocks can only be reliably used during recovery here\nto ensure that we have the correct size.\nPlease correct me if my understanding is wrong, and I'll fix the patch.\n\n\t * For now, we only use cached values in recovery due to lack of a shared\n\t * invalidation mechanism for changes in file size.\n\t */\n\tif (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n\t\treturn reln->smgr_cached_nblocks[forknum]; \n\n> > Attached is an updated V9 version with minimal code changes only and\n> > avoids the previous overhead in the BufferAlloc. This time, I only\n> > updated the recovery path as suggested by Amit, and followed Andres'\n> > suggestion of referring to the cached blocks in smgrnblocks.\n> > The layering is kinda tricky so the logic may be wrong. But as of now,\n> > it passes the regression tests. I'll follow up with the performance results.\n> > It seems there's regression for smaller shared_buffers. 
I'll update if I find\n> bugs.\n> > But I'd also appreciate your reviews in case I missed something.\n> \n> BUF_DROP_THRESHOLD seems to be misued. IIUC it defines the maximum\n> number of file pages that we make relation-targetted search for buffers.\n> Otherwise we scan through all buffers. On the other hand the latest patch just\n> leaves all buffers for relation forks longer than the threshold.\n\nRight, I missed the part or condition for that part. Fixed in the latest one.\n \n> I think we should determine whether to do targetted-scan or full-scan based\n> on the ratio of (expectedly maximum) total number of pages for all (specified)\n> forks in a relation against total number of buffers.\n\t\n> By the way\n> \n> > #define BUF_DROP_THRESHOLD\t\t500\t/* NBuffers divided\n> by 2 */\n> \n> NBuffers is not a constant. Even if we wanted to set the macro as described\n> in the comment, we should have used (NBuffers/2) instead of \"500\". But I\n> suppose you might wanted to use (NBuffders / 500) as Tom suggested\n> upthread. And the name of the macro seems too generic. I think more\n> specific names like BUF_DROP_FULLSCAN_THRESHOLD would be better.\n\nFixed.\n\nThank you for the review!\nAttached is the v10 of the patch.\n\nBest regards,\nKirk Jamison", "msg_date": "Wed, 2 Sep 2020 03:48:55 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 2, 2020 at 9:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Even if the relation is locked, background processes like checkpointer\n> > can still touch the relation which might cause problems. Consider a\n> > case where we extend the relation but didn't flush the newly added\n> > pages. Now during truncate operation, checkpointer can still flush\n> > those pages which can cause trouble for truncate. 
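Roughly, a simplified toy model of that caching behaviour (invented names; the real logic lives in src/backend/storage/smgr/smgr.c):

```c
/*
 * Toy sketch of "extend keeps the block count cached; recovery reads
 * the cache instead of doing an lseek". Simplified single-fork model.
 */
#include <assert.h>
#include <stdbool.h>

#define TOY_INVALID_BLOCK ((unsigned) 0xFFFFFFFF)

typedef struct ToySMgr
{
    unsigned cached_nblocks;    /* InvalidBlockNumber-style sentinel */
    unsigned file_nblocks;      /* what an lseek would report */
} ToySMgr;

static void
toy_extend(ToySMgr *reln, unsigned blockno)
{
    if (blockno >= reln->file_nblocks)
        reln->file_nblocks = blockno + 1;
    reln->cached_nblocks = reln->file_nblocks;  /* keep cache current */
}

static unsigned
toy_nblocks(const ToySMgr *reln, bool in_recovery)
{
    if (in_recovery && reln->cached_nblocks != TOY_INVALID_BLOCK)
        return reln->cached_nblocks;    /* no lseek needed */
    return reln->file_nblocks;          /* stand-in for the lseek path */
}
```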
But, I think in the\n> > recovery path such cases won't cause a problem.\n>\n> I wouldn't count on that staying true ...\n>\n> https://www.postgresql.org/message-id/CA+hUKGJ8NRsqgkZEnsnRc2MFROBV-jCnacbYvtpptK2A9YYp9Q@mail.gmail.com\n>\n\nI don't think that proposal will matter after commit c5315f4f44\nbecause we are caching the size/blocks for recovery while doing extend\n(smgrextend). In the above scenario, we would have cached the blocks\nwhich will be used at later point of time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Sep 2020 14:19:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, September 2, 2020 5:49 PM, Amit Kapila wrote:\r\n> On Wed, Sep 2, 2020 at 9:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> >\r\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\r\n> > > Even if the relation is locked, background processes like\r\n> > > checkpointer can still touch the relation which might cause\r\n> > > problems. Consider a case where we extend the relation but didn't\r\n> > > flush the newly added pages. Now during truncate operation,\r\n> > > checkpointer can still flush those pages which can cause trouble for\r\n> > > truncate. But, I think in the recovery path such cases won't cause a\r\n> problem.\r\n> >\r\n> > I wouldn't count on that staying true ...\r\n> >\r\n> >\r\n> https://www.postgresql.org/message-id/CA+hUKGJ8NRsqgkZEnsnRc2MFR\r\n> OBV-jC\r\n> > nacbYvtpptK2A9YYp9Q@mail.gmail.com\r\n> >\r\n> \r\n> I don't think that proposal will matter after commit c5315f4f44 because we are\r\n> caching the size/blocks for recovery while doing extend (smgrextend). 
In the\r\n> above scenario, we would have cached the blocks which will be used at later\r\n> point of time.\r\n> \r\n\r\nHi,\r\n\r\nI'm guessing we can still pursue this idea of improving the recovery path first.\r\n\r\nI'm working on an updated patch version, because the CFBot's telling\r\nthat postgres fails to build (one of the recovery TAP tests fails).\r\nI'm still working on refactoring my patch, but have yet to find a proper solution at the moment.\r\nSo I'm going to continue my investigation.\r\n\r\nAttached is an updated WIP patch.\r\nI'd appreciate it if you could take a look at the patch as well.\r\n\r\nIn addition, attached also are the regression logs for the failure and other logs\r\naccompanying it: wal_optimize_node_minimal and wal_optimize_node_replica.\r\n\r\nThe failure reported in my session was:\r\nt/018_wal_optimize.pl ................ 18/34 Bailout called.\r\nFurther testing stopped: pg_ctl start failed\r\nFAILED--Further testing stopped: pg_ctl start failed\r\n\r\nBest regards,\r\nKirk Jamison", "msg_date": "Mon, 7 Sep 2020 08:03:05 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Mon, Sep 7, 2020 at 1:33 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Wednesday, September 2, 2020 5:49 PM, Amit Kapila wrote:\n> > On Wed, Sep 2, 2020 at 9:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > > Even if the relation is locked, background processes like\n> > > > checkpointer can still touch the relation which might cause\n> > > > problems. Consider a case where we extend the relation but didn't\n> > > > flush the newly added pages. Now during truncate operation,\n> > > > checkpointer can still flush those pages which can cause trouble for\n> > > > truncate. 
But, I think in the recovery path such cases won't cause a\n> > problem.\n> > >\n> > > I wouldn't count on that staying true ...\n> > >\n> > >\n> > https://www.postgresql.org/message-id/CA+hUKGJ8NRsqgkZEnsnRc2MFR\n> > OBV-jC\n> > > nacbYvtpptK2A9YYp9Q@mail.gmail.com\n> > >\n> >\n> > I don't think that proposal will matter after commit c5315f4f44 because we are\n> > caching the size/blocks for recovery while doing extend (smgrextend). In the\n> > above scenario, we would have cached the blocks which will be used at later\n> > point of time.\n> >\n>\n> I'm guessing we can still pursue this idea of improving the recovery path first.\n>\n\nI think so.\n\n> I'm working on an updated patch version, because the CFBot's telling\n> that postgres fails to build (one of the recovery TAP tests fails).\n> I'm still working on refactoring my patch, but have yet to find a proper solution at the moment.\n> So I'm going to continue my investigation.\n>\n> Attached is an updated WIP patch.\n> I'd appreciate if you could take a look at the patch as well.\n>\n\nSo, I see the below log as one of the problems:\n2020-09-07 06:20:33.918 UTC [10914] LOG: redo starts at 0/15FFEC0\n2020-09-07 06:20:33.919 UTC [10914] FATAL: unexpected data beyond EOF\nin block 1 of relation base/13743/24581\n\nThis indicates that we missed invalidating some buffer which should\nhave been invalidated. If you are able to reproduce this locally then\nI suggest to first write a simple patch without the check of the\nthreshold, basically in recovery always try to use the new way to\ninvalidate the buffer. That will reduce the scope of the code that can\ncreate a problem. Let us know if the problem still exists and share\nthe logs. 
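For clarity, a stripped-down toy sketch of what "always use the new way" means (invented names; a linear search stands in for the real BufTableLookup, and the per-fork arrays are indexed by the fork loop variable throughout):

```c
/*
 * Toy model of per-fork targeted invalidation with no threshold check:
 * for each fork, visit every block >= that fork's firstDelBlock entry.
 */
#include <assert.h>

#define TOY_NBUF 16

typedef struct ToyBuf
{
    int      fork;
    unsigned block;
    int      valid;
} ToyBuf;

static ToyBuf toy_buf[TOY_NBUF];

static void
toy_drop_fork_buffers(int nforks, const int *forknums,
                      const unsigned *first_del, unsigned nblocks)
{
    for (int f = 0; f < nforks; f++)    /* one pass per fork */
    {
        for (unsigned blk = first_del[f]; blk < nblocks; blk++)
        {
            /* stand-in for a hash-table lookup of (fork, block) */
            for (int i = 0; i < TOY_NBUF; i++)
            {
                if (toy_buf[i].valid &&
                    toy_buf[i].fork == forknums[f] && /* fork index! */
                    toy_buf[i].block == blk)
                    toy_buf[i].valid = 0;
            }
        }
    }
}
```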
BTW, I think I see one problem in the code:\n\nif (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n+ bufHdr->tag.forkNum == forkNum[j] &&\n+ bufHdr->tag.blockNum >= firstDelBlock[j])\n\nHere, I think you need to use 'i' not 'j' for forkNum and\nfirstDelBlock as those are arrays w.r.t forks. That might fix the\nproblem but I am not sure as I haven't tried to reproduce it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Sep 2020 09:32:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "\tFrom: Amit Kapila <amit.kapila16@gmail.com>\r\n> if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\r\n> + bufHdr->tag.forkNum == forkNum[j] &&\r\n> + bufHdr->tag.blockNum >= firstDelBlock[j])\r\n> \r\n> Here, I think you need to use 'i' not 'j' for forkNum and\r\n> firstDelBlock as those are arrays w.r.t forks. That might fix the\r\n> problem but I am not sure as I haven't tried to reproduce it.\r\n\r\n\r\n(1)\r\n+\t\t\t\t\tINIT_BUFFERTAG(newTag, rnode.node, forkNum[j], firstDelBlock[j]);\r\n\r\nAnd you need to use i here, too.\r\n\r\nI advise you to suspect any character, any word, and any sentence. I've found many bugs for others so far. I'm afraid you're just seeing the code flow.\r\n\r\n\r\n(2)\r\n+\t\t\t\t\tLWLockAcquire(newPartitionLock, LW_SHARED);\r\n+\t\t\t\t\tbuf_id = BufTableLookup(&newTag, newHash);\r\n+\t\t\t\t\tLWLockRelease(newPartitionLock);\r\n+\r\n+\t\t\t\t\tbufHdr = GetBufferDescriptor(buf_id);\r\n\r\nCheck the result of BufTableLookup() and do nothing if the block is not in the shared buffers.\r\n\r\n\r\n(3)\r\n+\t\t\telse\r\n+\t\t\t{\r\n+\t\t\t\tfor (j = BUF_DROP_FULLSCAN_THRESHOLD; j < NBuffers; j++)\r\n+\t\t\t\t{\r\n\r\nWhat's the meaning of this loop? I don't understand the start condition. 
Should j be initialized to 0?\r\n\r\n\r\n(4)\r\n+#define BUF_DROP_FULLSCAN_THRESHOLD\t\t(NBuffers / 2)\r\n\r\nWasn't it 500 instead of 2? Anyway, I think we need to discuss this threshold later.\r\n\r\n\r\n(5)\r\n+\t\t\tif (((int)nblocks) < BUF_DROP_FULLSCAN_THRESHOLD)\r\n\r\nIt's better to define BUF_DROP_FULLSCAN_THRESHOLD as an uint32 value instead of casting the type here, as these values are blocks.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n \r\n", "msg_date": "Tue, 8 Sep 2020 05:49:21 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\r\n> (1)\r\n> +\t\t\t\t\tINIT_BUFFERTAG(newTag,\r\n> rnode.node, forkNum[j], firstDelBlock[j]);\r\n> \r\n> And you need to use i here, too.\r\n\r\nI remember the books \"Code Complete\" and/or \"Readable Code\" suggest to use meaningful loop variable names like fork_num and block_count, to prevent this type of mistakes.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n \r\n", "msg_date": "Tue, 8 Sep 2020 06:01:52 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, September 8, 2020 1:02 PM, Amit Kapila wrote:\r\nHello,\r\n> On Mon, Sep 7, 2020 at 1:33 PM k.jamison@fujitsu.com\r\n> <k.jamison@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, September 2, 2020 5:49 PM, Amit Kapila wrote:\r\n> > > On Wed, Sep 2, 2020 at 9:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> > > >\r\n> > > > Amit Kapila <amit.kapila16@gmail.com> writes:\r\n> > > > > Even if the relation is locked, background processes like\r\n> > > > > checkpointer can still touch the relation which might cause\r\n> > > > > problems. 
Consider a case where we extend the relation but\r\n> > > > > didn't flush the newly added pages. Now during truncate\r\n> > > > > operation, checkpointer can still flush those pages which can\r\n> > > > > cause trouble for truncate. But, I think in the recovery path\r\n> > > > > such cases won't cause a\r\n> > > problem.\r\n> > > >\r\n> > > > I wouldn't count on that staying true ...\r\n> > > >\r\n> > > >\r\n> > >\r\n> https://www.postgresql.org/message-id/CA+hUKGJ8NRsqgkZEnsnRc2MFR\r\n> > > OBV-jC\r\n> > > > nacbYvtpptK2A9YYp9Q@mail.gmail.com\r\n> > > >\r\n> > >\r\n> > > I don't think that proposal will matter after commit c5315f4f44\r\n> > > because we are caching the size/blocks for recovery while doing\r\n> > > extend (smgrextend). In the above scenario, we would have cached the\r\n> > > blocks which will be used at later point of time.\r\n> > >\r\n> >\r\n> > I'm guessing we can still pursue this idea of improving the recovery path\r\n> first.\r\n> >\r\n> \r\n> I think so.\r\n\r\nAlright, so I've updated the patch which passes the regression and TAP tests.\r\nIt compiles and builds as intended.\r\n\r\n> > I'm working on an updated patch version, because the CFBot's telling\r\n> > that postgres fails to build (one of the recovery TAP tests fails).\r\n> > I'm still working on refactoring my patch, but have yet to find a proper\r\n> solution at the moment.\r\n> > So I'm going to continue my investigation.\r\n> >\r\n> > Attached is an updated WIP patch.\r\n> > I'd appreciate if you could take a look at the patch as well.\r\n> >\r\n> \r\n> So, I see the below log as one of the problems:\r\n> 2020-09-07 06:20:33.918 UTC [10914] LOG: redo starts at 0/15FFEC0\r\n> 2020-09-07 06:20:33.919 UTC [10914] FATAL: unexpected data beyond EOF\r\n> in block 1 of relation base/13743/24581\r\n> \r\n> This indicates that we missed invalidating some buffer which should have\r\n> been invalidated. 
If you are able to reproduce this locally then I suggest to first\r\n> write a simple patch without the check of the threshold, basically in recovery\r\n> always try to use the new way to invalidate the buffer. That will reduce the\r\n> scope of the code that can create a problem. Let us know if the problem still\r\n> exists and share the logs. BTW, I think I see one problem in the code:\r\n> \r\n> if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\r\n> + bufHdr->tag.forkNum == forkNum[j] && tag.blockNum >= \r\n> + bufHdr->firstDelBlock[j])\r\n> \r\n> Here, I think you need to use 'i' not 'j' for forkNum and \r\n> firstDelBlock as those are arrays w.r.t forks. That might fix the \r\n> problem but I am not sure as I haven't tried to reproduce it.\r\n\r\nThanks for the advice. Right, that seems to be the cause of the error,\r\nand fixing that (using fork) solved the case.\r\nI also followed the advice of Tsunakawa-san of using more meaningful iterator names\r\ninstead of using \"i\" & \"j\" for readability.\r\n\r\nI also added a new function for when the relation fork is bigger than the threshold\r\n If (nblocks > BUF_DROP_FULLSCAN_THRESHOLD)\r\n(DropRelFileNodeBuffersOfFork) Perhaps there's a better name for that function.\r\nHowever, as expected in the previous discussions, this is a bit slower than the\r\nstandard buffer invalidation process, because the whole shared buffers are scanned nfork times.\r\nCurrently, I set the threshold to (NBuffers / 500)\r\n\r\nFeedback on the patch/testing is very much welcome.\r\n\r\nBest regards,\r\nKirk Jamison", "msg_date": "Tue, 15 Sep 2020 01:40:30 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi, \r\n\r\n> BTW, I think I see one problem in the code:\r\n> >\r\n> > if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\r\n> > + bufHdr->tag.forkNum == forkNum[j] && tag.blockNum >=\r\n> > + 
bufHdr->firstDelBlock[j])\r\n> >\r\n> > Here, I think you need to use 'i' not 'j' for forkNum and\r\n> > firstDelBlock as those are arrays w.r.t forks. That might fix the\r\n> > problem but I am not sure as I haven't tried to reproduce it.\r\n> \r\n> Thanks for advice. Right, that seems to be the cause of error, and fixing that\r\n> (using fork) solved the case.\r\n> I also followed the advice of Tsunakawa-san of using more meaningful\r\n> iterator Instead of using \"i\" & \"j\" for readability.\r\n> \r\n> I also added a new function when relation fork is bigger than the threshold\r\n> If (nblocks > BUF_DROP_FULLSCAN_THRESHOLD)\r\n> (DropRelFileNodeBuffersOfFork) Perhaps there's a better name for that\r\n> function.\r\n> However, as expected in the previous discussions, this is a bit slower than the\r\n> standard buffer invalidation process, because the whole shared buffers are\r\n> scanned nfork times.\r\n> Currently, I set the threshold to (NBuffers / 500)\r\n\r\nI made a mistake in the v12. I replaced the firstDelBlock[fork_num] with firstDelBlock[block_num],\r\nIn the for-loop code block of block_num, because we want to process the current block of per-block loop\r\n\r\nOTOH, I used the firstDelBlock[fork_num] when relation fork is bigger than the threshold,\r\nor if the cached blocks of small relations were already invalidated.\r\n\r\nThe logic could be either correct or wrong, so I'd appreciate feedback and comments/advice.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Tue, 15 Sep 2020 11:11:26 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Wed, 2 Sep 2020 08:18:06 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Sep 2, 2020 at 7:01 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Isn't a relation always locked asscess-exclusively, at truncation\n> > time? 
If so, isn't even the result of lseek reliable enough?\n> >\n> \n> Even if the relation is locked, background processes like checkpointer\n> can still touch the relation which might cause problems. Consider a\n> case where we extend the relation but didn't flush the newly added\n> pages. Now during truncate operation, checkpointer can still flush\n> those pages which can cause trouble for truncate. But, I think in the\n> recovery path such cases won't cause a problem.\n\nI reconsidered this and still have a doubt.\n\nDoes this mean lseek(SEEK_END) doesn't count blocks that are\nwrite(2)'ed (by smgrextend) but not yet flushed? (I don't think so,\nfor clarity.) The nblocks cache is added just to reduce the number of\nlseek()s and is expected to always have the same value as what lseek()\nis expected to return. The reason it is reliable only during recovery\nis that the cache is not shared but the startup process is the only\nprocess that changes the relation size during recovery.\n\nIf any other process can extend the relation while smgrtruncate is\nrunning, the current DropRelFileNodeBuffers has the chance\nthat a new buffer for the extended area is allocated at a buffer location\nthe function has already passed, which is a disaster.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Sep 2020 11:16:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "The code doesn't seem to be working correctly.\r\n\r\n\r\n(1)\r\n+\t\t\t\tfor (block_num = 0; block_num <= nblocks; block_num++)\r\n\r\nshould be\r\n\r\n+\t\t\t\tfor (block_num = firstDelBlock[fork_num]; block_num < nblocks; block_num++)\r\n\r\nbecause:\r\n\r\n* You only want to invalidate blocks >= firstDelBlock[fork_num], don't you?\r\n* The relation's block number ranges from 0 to nblocks - 
1.\r\n\r\n\r\n(2)\r\n+\t\t\t\t\tINIT_BUFFERTAG(newTag, rnode.node, forkNum[fork_num],\r\n+\t\t\t\t\t\t\t\t firstDelBlock[block_num]);\r\n\r\nReplace firstDelBlock[fork_num] with block_num, because you want to process the current block of per-block loop. Your code accesses memory out of the bounds of the array, and doesn't invalidate any buffer.\r\n\r\n\r\n(3)\r\n+\t\t\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\r\n+\t\t\t\t\t\tbufHdr->tag.forkNum == forkNum[fork_num] &&\r\n+\t\t\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[block_num])\r\n+\t\t\t\t\t\tInvalidateBuffer(bufHdr);\t/* releases spinlock */\r\n+\t\t\t\t\telse\r\n+\t\t\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\r\n\r\nReplace\r\nbufHdr->tag.blockNum >= firstDelBlock[fork_num]\r\nwith\r\nbufHdr->tag.blockNum == block_num\r\nbecause you want to check if the found buffer is for the current block of the loop.\r\n\r\n\r\n(4)\r\n+\t\t\t\t/*\r\n+\t\t\t\t * We've invalidated the nblocks already. Scan the shared buffers\r\n+\t\t\t\t * for each fork.\r\n+\t\t\t\t */\r\n+\t\t\t\tif (block_num > nblocks)\r\n+\t\t\t\t{\r\n+\t\t\t\t\tDropRelFileNodeBuffersOfFork(rnode.node, forkNum[fork_num],\r\n+\t\t\t\t\t\t\t\t\t\t\t\t firstDelBlock[fork_num]);\r\n+\t\t\t\t}\r\n\r\nThis part is unnecessary. This invalidates all buffers that (2) failed to process, so the regression test succeeds.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 16 Sep 2020 02:38:17 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Thanks for the new version. 
Jamison.\n\nAt Tue, 15 Sep 2020 11:11:26 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in \n> Hi, \n> \n> > BTW, I think I see one problem in the code:\n> > >\n> > > if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> > > + bufHdr->tag.forkNum == forkNum[j] && tag.blockNum >=\n> > > + bufHdr->firstDelBlock[j])\n> > >\n> > > Here, I think you need to use 'i' not 'j' for forkNum and\n> > > firstDelBlock as those are arrays w.r.t forks. That might fix the\n> > > problem but I am not sure as I haven't tried to reproduce it.\n> > \n> > Thanks for advice. Right, that seems to be the cause of error, and fixing that\n> > (using fork) solved the case.\n> > I also followed the advice of Tsunakawa-san of using more meaningful\n> > iterator Instead of using \"i\" & \"j\" for readability.\n\n(FWIW, I prefer short conventional names for short-term iterator variables.)\n\n\nmaster> * XXX currently it sequentially searches the buffer pool, should be\nmaster> * changed to more clever ways of searching. 
However, this routine\nmaster> * is used only in code paths that aren't very performance-critical,\nmaster> * and we shouldn't slow down the hot paths to make it faster ...\n\nThis comment needs a rewrite.\n\n\n+\t\tfor (fork_num = 0; fork_num < nforks; fork_num++)\n \t\t{\n \t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n-\t\t\t\tbufHdr->tag.forkNum == forkNum[j] &&\n-\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[j])\n+\t\t\t\tbufHdr->tag.forkNum == forkNum[fork_num] &&\n+\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[fork_num])\n\nfork_num is not actually a fork number, but the index of forkNum[].\nIt should be fork_idx (or just i, which I prefer..).\n\n-\t\t\tfor (j = 0; j < nforks; j++)\n-\t\t\t\tDropRelFileNodeLocalBuffers(rnode.node, forkNum[j],\n-\t\t\t\t\t\t\t\t\t\t\tfirstDelBlock[j]);\n+\t\t\tfor (fork_num = 0; fork_num < nforks; fork_num++)\n+\t\t\t\tDropRelFileNodeLocalBuffers(rnode.node, forkNum[fork_num],\n+\t\t\t\t\t\t\t\t\t\t\tfirstDelBlock[fork_num]);\n\nI think we don't need to include the irrelevant refactoring in this\npatch. (And I think j is better there.)\n\n+\t * We only speedup this path during recovery, because that's the only\n+\t * timing when we can get a valid cached value of blocks for relation.\n+\t * See comment in smgrnblocks() in smgr.c. Otherwise, proceed to usual\n+\t * buffer invalidation process (scanning of whole shared buffers).\n\nWe need an explanation of why we do this optimization only for the\nrecovery case.\n\n+\t\t\t/* Get the number of blocks for the supplied relation's fork */\n+\t\t\tnblocks = smgrnblocks(smgr_reln, forkNum[fork_num]);\n+\t\t\tAssert(BlockNumberIsValid(nblocks));\n+\n+\t\t\tif (nblocks < BUF_DROP_FULLSCAN_THRESHOLD)\n\nAs mentioned upthread, the criterion for whether we do a full scan or\nlookup-drop is how large a portion of NBuffers this relation-drop is\ngoing to invalidate. 
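To make that criterion concrete, here is a minimal standalone sketch (an illustration only, not code from the patch; the function names and the 1/500 ratio are assumptions made up for the example):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

/* Assumed ratio: lookup-drop only pays off when the blocks to invalidate
 * are a small fraction of the buffer pool (1/500 mirrors the value being
 * discussed in the thread, but is otherwise arbitrary). */
#define DROP_FULLSCAN_DENOM 500

/* Sum, over all forks, the number of blocks the truncation invalidates:
 * everything from firstDelBlock[i] up to nblocks[i] - 1. */
static BlockNumber
blocks_to_invalidate(const BlockNumber *nblocks,
					 const BlockNumber *firstDelBlock, int nforks)
{
	BlockNumber total = 0;

	for (int i = 0; i < nforks; i++)
		total += nblocks[i] - firstDelBlock[i];
	return total;
}

/* The decision: per-block lookup when the invalidated portion is small
 * relative to the pool, full scan otherwise. */
static bool
use_lookup_drop(BlockNumber to_invalidate, int nbuffers)
{
	return to_invalidate < (BlockNumber) (nbuffers / DROP_FULLSCAN_DENOM);
}
```

With a 16384-buffer pool this cutoff is 32 blocks, so truncating the tail of a small relation would take the lookup path while dropping a large fork would fall back to the full scan.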
So the nblocks above should be the sum of the\nnumber of blocks to be truncated (not just the total number of blocks) of all\ndesignated forks. Then once we decide to use the lookup-drop method, we\ndo that for all forks.\n\n+\t\t\t\tfor (block_num = 0; block_num <= nblocks; block_num++)\n+\t\t\t\t{\n\nblock_num is quite confusing with nblocks, at least for\nme(:p). Likewise for fork_num; I prefer that it be just j or iblk or\nsomething else that is not confusing with nblocks. By the way, the\nloop runs nblocks + 1 times, which seems wrong. We can start the loop\nfrom firstDelBlock[fork_num], instead of 0, and that makes the later check\nagainst firstDelBlock[] unnecessary.\n\n+\t\t\t\t\t/* create a tag with respect to the block so we can lookup the buffer */\n+\t\t\t\t\tINIT_BUFFERTAG(newTag, rnode.node, forkNum[fork_num],\n+\t\t\t\t\t\t\t\t firstDelBlock[block_num]);\n\nMmm. It is wrong that the tag is initialized using\nfirstDelBlock[block_num]. Why isn't it just block_num?\n\n\n+\t\t\t\t\tif (buf_id < 0)\n+\t\t\t\t\t{\n+\t\t\t\t\t\tLWLockRelease(newPartitionLock);\n+\t\t\t\t\t\tcontinue;\n+\t\t\t\t\t}\n+\t\t\t\t\tLWLockRelease(newPartitionLock);\n\nWe don't need two separate LWLockRelease()'s there.\n\n+ /*\n+ * We can make this a tad faster by prechecking the buffer tag before\n+ * we attempt to lock the buffer; this saves a lot of lock\n...\n+ */\n+ if (!RelFileNodeEquals(bufHdr->tag.rnode, rnode.node))\n+ \tcontinue;\n\nIn the original code, this is performed in order to avoid taking a\nlock on bufHdr for irrelevant buffers. We have identified the buffer\nby looking it up using the rnode, so I think we don't need this\ncheck. Note that we are doing the same check after lock acquisition.\n\n+ \telse\n+ \t\tUnlockBufHdr(bufHdr, buf_state);\n+ }\n+ /*\n+ * We've invalidated the nblocks already. 
Scan the shared buffers\n+ * for each fork.\n+ */\n+ if (block_num > nblocks)\n+ {\n+ \tDropRelFileNodeBuffersOfFork(rnode.node, forkNum[fork_num],\n+ \t\t\t\t\t\t\t\t firstDelBlock[fork_num]);\n+ }\n\nMmm? block_num is always larger than nblocks there. And the function\ncall runs a whole NBuffers scan for the just-processed fork. What is\nthe point of this code?\n\n\n> > I also added a new function when relation fork is bigger than the threshold\n> > If (nblocks > BUF_DROP_FULLSCAN_THRESHOLD)\n> > (DropRelFileNodeBuffersOfFork) Perhaps there's a better name for that\n> > function.\n> > However, as expected in the previous discussions, this is a bit slower than the\n> > standard buffer invalidation process, because the whole shared buffers are\n> > scanned nfork times.\n> > Currently, I set the threshold to (NBuffers / 500)\n> \n> I made a mistake in the v12. I replaced the firstDelBlock[fork_num] with firstDelBlock[block_num],\n> In the for-loop code block of block_num, because we want to process the current block of per-block loop\n> OTOH, I used the firstDelBlock[fork_num] when relation fork is bigger than the threshold,\n> or if the cached blocks of small relations were already invalidated.\n\nReally? I believe that firstDelBlock is an array that has only nforks elements.\n\n> The logic could be either correct or wrong, so I'd appreciate feedback and comments/advice.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Sep 2020 11:56:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Wed, 16 Sep 2020 11:56:29 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n(Oops! 
Some of my comments duplicate with Tsunakawa-san, sorry.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Sep 2020 12:00:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 16, 2020 at 7:46 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 2 Sep 2020 08:18:06 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Wed, Sep 2, 2020 at 7:01 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > Isn't a relation always locked asscess-exclusively, at truncation\n> > > time? If so, isn't even the result of lseek reliable enough?\n> > >\n> >\n> > Even if the relation is locked, background processes like checkpointer\n> > can still touch the relation which might cause problems. Consider a\n> > case where we extend the relation but didn't flush the newly added\n> > pages. Now during truncate operation, checkpointer can still flush\n> > those pages which can cause trouble for truncate. But, I think in the\n> > recovery path such cases won't cause a problem.\n>\n> I reconsided on this and still have a doubt.\n>\n> Is this means lseek(SEEK_END) doesn't count blocks that are\n> write(2)'ed (by smgrextend) but not yet flushed? (I don't think so,\n> for clarity.) The nblocks cache is added just to reduce the number of\n> lseek()s and expected to always have the same value with what lseek()\n> is expected to return.\n>\n\nSee comments in ReadBuffer_common() which indicates such a possibility\n(\"Unfortunately, we have also seen this case occurring because of\nbuggy Linux kernels that sometimes return an lseek(SEEK_END) result\nthat doesn't account for a recent write.\"). 
Also, refer my previous\nemail [1] on this and another email link in that email which has a\ndiscussion on this point.\n\n> The reason it is reliable only during recovery\n> is that the cache is not shared but the startup process is the only\n> process that changes the relation size during recovery.\n>\n\nYes, that is why we are planning to do this optimization for recovery path.\n\n> If any other process can extend the relation while smgrtruncate is\n> running, the current DropRelFileNodeBuffers should have the chance\n> that a new buffer for extended area is allocated at a buffer location\n> where the function already have passed by, which is a disaster.\n>\n\nThe relation might have extended before smgrtruncate but the newly\nadded pages can be flushed by checkpointer during smgrtruncate.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LH2uQWznwtonD%2Bnch76kqzemdTQAnfB06z_LXa6NTFtQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 16 Sep 2020 08:33:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Wed, 16 Sep 2020 08:33:06 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Sep 16, 2020 at 7:46 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Is this means lseek(SEEK_END) doesn't count blocks that are\n> > write(2)'ed (by smgrextend) but not yet flushed? (I don't think so,\n> > for clarity.) The nblocks cache is added just to reduce the number of\n> > lseek()s and expected to always have the same value with what lseek()\n> > is expected to return.\n> >\n> \n> See comments in ReadBuffer_common() which indicates such a possibility\n> (\"Unfortunately, we have also seen this case occurring because of\n> buggy Linux kernels that sometimes return an lseek(SEEK_END) result\n> that doesn't account for a recent write.\"). 
Also, refer my previous\n> email [1] on this and another email link in that email which has a\n> discussion on this point.\n>\n> > The reason it is reliable only during recovery\n> > is that the cache is not shared but the startup process is the only\n> > process that changes the relation size during recovery.\n> >\n> \n> Yes, that is why we are planning to do this optimization for recovery path.\n> \n> > If any other process can extend the relation while smgrtruncate is\n> > running, the current DropRelFileNodeBuffers should have the chance\n> > that a new buffer for extended area is allocated at a buffer location\n> > where the function already have passed by, which is a disaster.\n> >\n> \n> The relation might have extended before smgrtruncate but the newly\n> added pages can be flushed by checkpointer during smgrtruncate.\n> \n> [1] - https://www.postgresql.org/message-id/CAA4eK1LH2uQWznwtonD%2Bnch76kqzemdTQAnfB06z_LXa6NTFtQ%40mail.gmail.com\n\nAh! I understood that! The reason we can rely on the cache is that the\ncached value is *not* what lseek returned but how far we intended to\nextend. Thank you for the explanation.\n\nBy the way, I'm not sure that actually happens, but if one smgrextend\ncall extended the relation by two or more blocks, the cache is\ninvalidated and succeeding smgrnblocks returns lseek()'s result. 
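That update rule can be mocked in a standalone way like this (purely illustrative; the names here are invented, and the real logic lives in smgrextend()/smgrnblocks()):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/* Mock of the on-disk relation size, standing in for lseek(SEEK_END). */
static BlockNumber file_nblocks = 0;
static BlockNumber cached_nblocks = InvalidBlockNumber;

/* Write one block at 'blocknum'; keep the cache only when the write lands
 * exactly at the cached end, mirroring the logic quoted in this thread. */
static void
mock_extend(BlockNumber blocknum)
{
	if (blocknum + 1 > file_nblocks)
		file_nblocks = blocknum + 1;

	if (cached_nblocks == blocknum)
		cached_nblocks = blocknum + 1;
	else
		cached_nblocks = InvalidBlockNumber;	/* cache dropped */
}

/* Return the cached size when valid, otherwise "ask the kernel". */
static BlockNumber
mock_nblocks(void)
{
	if (cached_nblocks != InvalidBlockNumber)
		return cached_nblocks;

	cached_nblocks = file_nblocks;
	return cached_nblocks;
}
```

Extending at the cached end keeps the cache; extending anywhere else throws it away and the next size query falls back to the "lseek" value.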
Don't\nwe need to guarantee the cache to be valid while recovery?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Sep 2020 12:32:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 16, 2020 at 9:02 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 16 Sep 2020 08:33:06 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Wed, Sep 16, 2020 at 7:46 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > Is this means lseek(SEEK_END) doesn't count blocks that are\n> > > write(2)'ed (by smgrextend) but not yet flushed? (I don't think so,\n> > > for clarity.) The nblocks cache is added just to reduce the number of\n> > > lseek()s and expected to always have the same value with what lseek()\n> > > is expected to return.\n> > >\n> >\n> > See comments in ReadBuffer_common() which indicates such a possibility\n> > (\"Unfortunately, we have also seen this case occurring because of\n> > buggy Linux kernels that sometimes return an lseek(SEEK_END) result\n> > that doesn't account for a recent write.\"). 
Also, refer my previous\n> > email [1] on this and another email link in that email which has a\n> > discussion on this point.\n> >\n> > > The reason it is reliable only during recovery\n> > > is that the cache is not shared but the startup process is the only\n> > > process that changes the relation size during recovery.\n> > >\n> >\n> > Yes, that is why we are planning to do this optimization for recovery path.\n> >\n> > > If any other process can extend the relation while smgrtruncate is\n> > > running, the current DropRelFileNodeBuffers should have the chance\n> > > that a new buffer for extended area is allocated at a buffer location\n> > > where the function already have passed by, which is a disaster.\n> > >\n> >\n> > The relation might have extended before smgrtruncate but the newly\n> > added pages can be flushed by checkpointer during smgrtruncate.\n> >\n> > [1] - https://www.postgresql.org/message-id/CAA4eK1LH2uQWznwtonD%2Bnch76kqzemdTQAnfB06z_LXa6NTFtQ%40mail.gmail.com\n>\n> Ah! I understood that! The reason we can rely on the cahce is that the\n> cached value is *not* what lseek returned but how far we intended to\n> extend. Thank you for the explanation.\n>\n> By the way I'm not sure that actually happens, but if one smgrextend\n> call exnteded the relation by two or more blocks, the cache is\n> invalidated and succeeding smgrnblocks returns lseek()'s result.\n>\n\nCan you think of any such case? I think in recovery we use\nXLogReadBufferExtended->ReadBufferWithoutRelcache for reading the page\nwhich seems to be extending page-by-page but there could be some case\nwhere that is not true. 
One idea is to run regressions and add an\nAssert to see if we are extending more than a block during recovery.\n\n> Don't\n> we need to guarantee the cache to be valid while recovery?\n>\n\nOne possibility could be that we somehow detect that the value we are\nusing is cached one and if so then only do this optimization.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 16 Sep 2020 10:05:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Wed, 16 Sep 2020 10:05:32 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Sep 16, 2020 at 9:02 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 16 Sep 2020 08:33:06 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Wed, Sep 16, 2020 at 7:46 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > By the way I'm not sure that actually happens, but if one smgrextend\n> > call exnteded the relation by two or more blocks, the cache is\n> > invalidated and succeeding smgrnblocks returns lseek()'s result.\n> >\n> \n> Can you think of any such case? I think in recovery we use\n> XLogReadBufferExtended->ReadBufferWithoutRelcache for reading the page\n> which seems to be extending page-by-page but there could be some case\n> where that is not true. One idea is to run regressions and add an\n> Assert to see if we are extending more than a block during recovery.\n\nI agree with you. Actually XLogReadBufferExtended is the only point to\nread a page while recovery and seems calling ReadBufferWithoutRelcache\npage by page up to the target page. The only case I found where the\ncache is invalidated is ALTER TABLE SET TABLESPACE while\nwal_level=minimal and not during recovery. 
smgrextend is called\nwithout smgrnblocks called at the time.\n\nConsidering that the behavior of lseek can be a problem only just after\nextending a file, an assertion in smgrextend seems to be\nenough. Although, I'm not confident on the diagnosis.\n\n--- a/src/backend/storage/smgr/smgr.c\n+++ b/src/backend/storage/smgr/smgr.c\n@@ -474,7 +474,14 @@ smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n \tif (reln->smgr_cached_nblocks[forknum] == blocknum)\n \t\treln->smgr_cached_nblocks[forknum] = blocknum + 1;\n \telse\n+\t{\n+\t\t/*\n+\t\t * DropRelFileNodeBuffers relies on the behavior that nblocks cache\n+\t\t * won't be invalidated by file extension while recoverying.\n+\t\t */\n+\t\tAssert(!InRecovery);\n \t\treln->smgr_cached_nblocks[forknum] = InvalidBlockNumber;\n+\t}\n }\n\n> > Don't\n> > we need to guarantee the cache to be valid while recovery?\n> >\n> \n> One possibility could be that we somehow detect that the value we are\n> using is cached one and if so then only do this optimization.\n\nI basically like this direction. 
But I'm not sure the additional\nparameter for smgrnblocks is acceptable.\n\nBut on the contrary, it might be a better design that\nDropRelFileNodeBuffers gives up the optimization when\nsmgrnblocks(,,must_accurate = true) returns InvalidBlockNumber.\n\n\n@@ -544,9 +551,12 @@ smgrwriteback(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n /*\n *\tsmgrnblocks() -- Calculate the number of blocks in the\n *\t\t\t\t\t supplied relation.\n+ *\n+ *\tReturns InvalidBlockNumber if must_accurate is true and smgr_cached_nblocks\n+ *\tis not available.\n */\n BlockNumber\n-smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n+smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool must_accurate)\n {\n \tBlockNumber result;\n \n@@ -561,6 +571,17 @@ smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n \n \treln->smgr_cached_nblocks[forknum] = result;\n \n+\t/*\n+\t * We cannot believe the result from smgr_nblocks is always accurate\n+\t * because lseek of buggy Linux kernels doesn't account for a recent\n+\t * write. However, we can rely on the result from lseek while recovering\n+\t * because the first call to this function is not happen just after a file\n+\t * extension. 
Return values on subsequent calls return cached nblocks,\n+\t * which should be accurate during recovery.\n+\t */\n+\tif (!InRecovery && must_accurate)\n+\t\treturn InvalidBlockNumber;\n+\n \treturn result;\n }\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Sep 2020 17:32:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 16, 2020 at 2:02 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 16 Sep 2020 10:05:32 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Wed, Sep 16, 2020 at 9:02 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 16 Sep 2020 08:33:06 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > On Wed, Sep 16, 2020 at 7:46 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > By the way I'm not sure that actually happens, but if one smgrextend\n> > > call exnteded the relation by two or more blocks, the cache is\n> > > invalidated and succeeding smgrnblocks returns lseek()'s result.\n> > >\n> >\n> > Can you think of any such case? I think in recovery we use\n> > XLogReadBufferExtended->ReadBufferWithoutRelcache for reading the page\n> > which seems to be extending page-by-page but there could be some case\n> > where that is not true. One idea is to run regressions and add an\n> > Assert to see if we are extending more than a block during recovery.\n>\n> I agree with you. Actually XLogReadBufferExtended is the only point to\n> read a page while recovery and seems calling ReadBufferWithoutRelcache\n> page by page up to the target page. The only case I found where the\n> cache is invalidated is ALTER TABLE SET TABLESPACE while\n> wal_level=minimal and not during recovery. 
smgrextend is called\n> without smgrnblocks called at the time.\n>\n> Considering that the behavior of lseek can be a problem only just after\n> extending a file, an assertion in smgrextend seems to be\n> enough. Although, I'm not confident on the diagnosis.\n>\n> --- a/src/backend/storage/smgr/smgr.c\n> +++ b/src/backend/storage/smgr/smgr.c\n> @@ -474,7 +474,14 @@ smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n> if (reln->smgr_cached_nblocks[forknum] == blocknum)\n> reln->smgr_cached_nblocks[forknum] = blocknum + 1;\n> else\n> + {\n> + /*\n> + * DropRelFileNodeBuffers relies on the behavior that nblocks cache\n> + * won't be invalidated by file extension while recoverying.\n> + */\n> + Assert(!InRecovery);\n> reln->smgr_cached_nblocks[forknum] = InvalidBlockNumber;\n> + }\n> }\n>\n\nYeah, I have something like this in mind. I am not very sure at this\nstage that we want to commit this but for verification purpose,\nrunning regressions it is a good idea.\n\n> > > Don't\n> > > we need to guarantee the cache to be valid while recovery?\n> > >\n> >\n> > One possibility could be that we somehow detect that the value we are\n> > using is cached one and if so then only do this optimization.\n>\n> I basically like this direction. But I'm not sure the additional\n> parameter for smgrnblocks is acceptable.\n>\n> But on the contrary, it might be a better design that\n> DropRelFileNodeBuffers gives up the optimization when\n> smgrnblocks(,,must_accurate = true) returns InvalidBlockNumber.\n>\n\nI haven't thought about what is the best way to achieve this. 
Let us\nsee if Tsunakawa-San or Kirk-San has other ideas on this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 16 Sep 2020 18:37:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, September 16, 2020 5:32 PM, Kyotaro Horiguchi wrote:\n> At Wed, 16 Sep 2020 10:05:32 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > On Wed, Sep 16, 2020 at 9:02 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 16 Sep 2020 08:33:06 +0530, Amit Kapila\n> > > <amit.kapila16@gmail.com> wrote in\n> > > > On Wed, Sep 16, 2020 at 7:46 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > By the way I'm not sure that actually happens, but if one smgrextend\n> > > call exnteded the relation by two or more blocks, the cache is\n> > > invalidated and succeeding smgrnblocks returns lseek()'s result.\n> > >\n> >\n> > Can you think of any such case? I think in recovery we use\n> > XLogReadBufferExtended->ReadBufferWithoutRelcache for reading the\n> page\n> > which seems to be extending page-by-page but there could be some case\n> > where that is not true. One idea is to run regressions and add an\n> > Assert to see if we are extending more than a block during recovery.\n> \n> I agree with you. Actually XLogReadBufferExtended is the only point to read a\n> page while recovery and seems calling ReadBufferWithoutRelcache page by\n> page up to the target page. The only case I found where the cache is\n> invalidated is ALTER TABLE SET TABLESPACE while wal_level=minimal and\n> not during recovery. smgrextend is called without smgrnblocks called at the\n> time.\n> \n> Considering that the behavior of lseek can be a problem only just after\n> extending a file, an assertion in smgrextend seems to be enough. 
Although,\n> I'm not confident on the diagnosis.\n> \n> --- a/src/backend/storage/smgr/smgr.c\n> +++ b/src/backend/storage/smgr/smgr.c\n> @@ -474,7 +474,14 @@ smgrextend(SMgrRelation reln, ForkNumber forknum,\n> BlockNumber blocknum,\n> \tif (reln->smgr_cached_nblocks[forknum] == blocknum)\n> \t\treln->smgr_cached_nblocks[forknum] = blocknum + 1;\n> \telse\n> +\t{\n> +\t\t/*\n> +\t\t * DropRelFileNodeBuffers relies on the behavior that\n> nblocks cache\n> +\t\t * won't be invalidated by file extension while recoverying.\n> +\t\t */\n> +\t\tAssert(!InRecovery);\n> \t\treln->smgr_cached_nblocks[forknum] =\n> InvalidBlockNumber;\n> +\t}\n> }\n> \n> > > Don't\n> > > we need to guarantee the cache to be valid while recovery?\n> > >\n> >\n> > One possibility could be that we somehow detect that the value we are\n> > using is cached one and if so then only do this optimization.\n> \n> I basically like this direction. But I'm not sure the additional parameter for\n> smgrnblocks is acceptable.\n> \n> But on the contrary, it might be a better design that DropRelFileNodeBuffers\n> gives up the optimization when smgrnblocks(,,must_accurate = true) returns\n> InvalidBlockNumber.\n> \n\nThank you for your thoughtful reviews and discussions Horiguchi-san, Tsunakawa-san and Amit-san.\nApologies for my carelessness. I've addressed the bugs in the previous version.\n1. Getting the total number of blocks for all the specified forks\n2. Hashtable probing conditions\n\nI added the suggestion of putting an assert on smgrextend for the XLogReadBufferExtended case,\nand I thought that would be enough. 
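For illustration, the lookup-drop idea itself can be sketched standalone as below (a toy model, not the patch: the arrays stand in for the shared buffer pool, and the linear search stands in for the buffer mapping hash table and its partition locks):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

#define POOL_SIZE 8

/* Toy buffer pool: each slot holds a (fork, block) tag, or fork = -1 if free. */
static int pool_fork[POOL_SIZE];
static BlockNumber pool_block[POOL_SIZE];

/* Linear search stands in for the buffer-mapping hash table lookup. */
static int
lookup_buffer(int fork, BlockNumber block)
{
	for (int i = 0; i < POOL_SIZE; i++)
		if (pool_fork[i] == fork && pool_block[i] == block)
			return i;
	return -1;
}

/* Drop every buffer of 'fork' at block >= firstDelBlock by probing each
 * candidate block individually, instead of scanning the whole pool. */
static int
drop_fork_buffers(int fork, BlockNumber firstDelBlock, BlockNumber nblocks)
{
	int		dropped = 0;

	for (BlockNumber b = firstDelBlock; b < nblocks; b++)
	{
		int		id = lookup_buffer(fork, b);

		if (id >= 0)
		{
			pool_fork[id] = -1;	/* invalidate the buffer */
			dropped++;
		}
	}
	return dropped;
}
```

The cost here is one probe per truncated block rather than one pass over every buffer, which is exactly why the approach only wins when the truncated range is small.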
I think modifying the smgrnblocks with the addition of new\nparameter would complicate the source code because a number of functions call it.\nSo I thought that maybe putting BlockNumberIsValid(nblocks) in the condition would suffice.\nElse, we do full scan of buffer pool.\n\n+ if ((nblocks / (uint32)NBuffers) < BUF_DROP_FULLSCAN_THRESHOLD &&\n+ BlockNumberIsValid(nblocks))\n\n+ else\n+ {\n\t\t\t\t//full scan\n\nAttached is the v14 of the patch. It compiles and passes the tests.\nHoping for your continuous reviews and feedback. Thank you very much.\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 17 Sep 2020 13:06:33 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "I looked at v14.\n\n\n(1)\n+\t\t/* Get the total number of blocks for the supplied relation's fork */\n+\t\tfor (j = 0; j < nforks; j++)\n+\t\t{\n+\t\t\tBlockNumber\t\tblock = smgrnblocks(smgr_reln, forkNum[j]);\n+\t\t\tnblocks += block;\n+\t\t}\n\nWhy do you sum all forks?\n\n\n(2)\n+\t\t\tif ((nblocks / (uint32)NBuffers) < BUF_DROP_FULLSCAN_THRESHOLD &&\n+\t\t\t\tBlockNumberIsValid(nblocks))\n+\t\t\t{\n\nThe division by NBuffers is not necessary, because both sides of = are number of blocks.\nWhy is BlockNumberIsValid(nblocks)) call needed?\n\n\n(3)\n \tif (reln->smgr_cached_nblocks[forknum] == blocknum)\n \t\treln->smgr_cached_nblocks[forknum] = blocknum + 1;\n \telse\n+\t{\n+\t\t/*\n+\t\t * DropRelFileNodeBuffers relies on the behavior that cached nblocks\n+\t\t * won't be invalidated by file extension while recovering.\n+\t\t */\n+\t\tAssert(!InRecovery);\n \t\treln->smgr_cached_nblocks[forknum] = InvalidBlockNumber;\n+\t}\n\nI think this change is not directly related to this patch and can be a separate patch, but I want to leave the decision up to a committer.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Wed, 23 Sep 2020 02:26:13 +0000", 
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> > > > Don't\r\n> > > > we need to guarantee the cache to be valid while recovery?\r\n> > > >\r\n> > >\r\n> > > One possibility could be that we somehow detect that the value we\r\n> > > are using is cached one and if so then only do this optimization.\r\n> >\r\n> > I basically like this direction. But I'm not sure the additional\r\n> > parameter for smgrnblocks is acceptable.\r\n> >\r\n> > But on the contrary, it might be a better design that\r\n> > DropRelFileNodeBuffers gives up the optimization when\r\n> > smgrnblocks(,,must_accurate = true) returns InvalidBlockNumber.\r\n> >\r\n> \r\n> I haven't thought about what is the best way to achieve this. Let us see if\r\n> Tsunakawa-San or Kirk-San has other ideas on this?\r\n\r\nI see no need for smgrnblocks() to add an argument as it returns the correct cached or measured value.\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Wed, 23 Sep 2020 02:34:45 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, September 23, 2020 11:26 AM, Tsunakawa, Takayuki wrote:\n\n> I looked at v14.\nThank you for checking it!\n \n> (1)\n> +\t\t/* Get the total number of blocks for the supplied relation's\n> fork */\n> +\t\tfor (j = 0; j < nforks; j++)\n> +\t\t{\n> +\t\t\tBlockNumber\t\tblock =\n> smgrnblocks(smgr_reln, forkNum[j]);\n> +\t\t\tnblocks += block;\n> +\t\t}\n> \n> Why do you sum all forks?\n\nI revised the patch based from my understanding of Horiguchi-san's comment,\nbut I could be wrong.\nQuoting:\n\n\" \n+\t\t\t/* Get the number of blocks for the supplied relation's fork 
*/\n+\t\t\tnblocks = smgrnblocks(smgr_reln, forkNum[fork_num]);\n+\t\t\tAssert(BlockNumberIsValid(nblocks));\n+\n+\t\t\tif (nblocks < BUF_DROP_FULLSCAN_THRESHOLD)\n\nAs mentioned upthread, the criteria whether we do full-scan or\nlookup-drop is how large portion of NBUFFERS this relation-drop can be\ngoing to invalidate. So the nblocks above should be the sum of number\nof blocks to be truncated (not just the total number of blocks) of all\ndesignated forks. Then once we decided to do lookup-drop method, we\ndo that for all forks.\"\n\n> (2)\n> +\t\t\tif ((nblocks / (uint32)NBuffers) <\n> BUF_DROP_FULLSCAN_THRESHOLD &&\n> +\t\t\t\tBlockNumberIsValid(nblocks))\n> +\t\t\t{\n> \n> The division by NBuffers is not necessary, because both sides of = are\n> number of blocks.\n\nAgain I based it from my understanding of the comment above,\nso nblocks is the sum of all blocks to be truncated for all forks.\n\n\n> Why is BlockNumberIsValid(nblocks)) call needed?\n\nI thought we need to ensure that nblocks is not invalid, so I also added\n\n> (3)\n> \tif (reln->smgr_cached_nblocks[forknum] == blocknum)\n> \t\treln->smgr_cached_nblocks[forknum] = blocknum + 1;\n> \telse\n> +\t{\n> +\t\t/*\n> +\t\t * DropRelFileNodeBuffers relies on the behavior that\n> cached nblocks\n> +\t\t * won't be invalidated by file extension while recovering.\n> +\t\t */\n> +\t\tAssert(!InRecovery);\n> \t\treln->smgr_cached_nblocks[forknum] =\n> InvalidBlockNumber;\n> +\t}\n> \n> I think this change is not directly related to this patch and can be a separate\n> patch, but I want to leave the decision up to a committer.\n> \nThis is noted. 
Once we clarified the above comments, I'll put it in a separate patch if it's necessary,\n\nThank you very much for the reviews.\n\nBest regards,\nKirk Jamison\n\n\n\n\n", "msg_date": "Wed, 23 Sep 2020 04:23:29 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 23, 2020 at 7:56 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> (3)\n> if (reln->smgr_cached_nblocks[forknum] == blocknum)\n> reln->smgr_cached_nblocks[forknum] = blocknum + 1;\n> else\n> + {\n> + /*\n> + * DropRelFileNodeBuffers relies on the behavior that cached nblocks\n> + * won't be invalidated by file extension while recovering.\n> + */\n> + Assert(!InRecovery);\n> reln->smgr_cached_nblocks[forknum] = InvalidBlockNumber;\n> + }\n>\n> I think this change is not directly related to this patch and can be a separate patch, but I want to leave the decision up to a committer.\n>\n\nWe have added this mainly for testing purpose, basically this\nassertion should not fail during the regression tests. We can keep it\nin a separate patch but need to ensure that. 
If this fails then we\ncan't rely on the caching behaviour during recovery which is actually\nrequired for the correctness of patch.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Sep 2020 10:12:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 23, 2020 at 8:04 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > > > > Don't\n> > > > > we need to guarantee the cache to be valid while recovery?\n> > > > >\n> > > >\n> > > > One possibility could be that we somehow detect that the value we\n> > > > are using is cached one and if so then only do this optimization.\n> > >\n> > > I basically like this direction. But I'm not sure the additional\n> > > parameter for smgrnblocks is acceptable.\n> > >\n> > > But on the contrary, it might be a better design that\n> > > DropRelFileNodeBuffers gives up the optimization when\n> > > smgrnblocks(,,must_accurate = true) returns InvalidBlockNumber.\n> > >\n> >\n> > I haven't thought about what is the best way to achieve this. Let us see if\n> > Tsunakawa-San or Kirk-San has other ideas on this?\n>\n> I see no need for smgrnblocks() to add an argument as it returns the correct cached or measured value.\n>\n\nThe idea is that we can't use this optimization if the value is not\ncached because we can't rely on lseek behavior. See all the discussion\nbetween Horiguchi-San and me in the thread above. 
So, how would you\nensure that if we don't use Kirk-San's proposal?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Sep 2020 10:14:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> I revised the patch based from my understanding of Horiguchi-san's comment,\n> but I could be wrong.\n> Quoting:\n> \n> \"\n> +\t\t\t/* Get the number of blocks for the supplied relation's\n> fork */\n> +\t\t\tnblocks = smgrnblocks(smgr_reln,\n> forkNum[fork_num]);\n> +\t\t\tAssert(BlockNumberIsValid(nblocks));\n> +\n> +\t\t\tif (nblocks < BUF_DROP_FULLSCAN_THRESHOLD)\n> \n> As mentioned upthread, the criteria whether we do full-scan or\n> lookup-drop is how large portion of NBUFFERS this relation-drop can be\n> going to invalidate. So the nblocks above should be the sum of number\n> of blocks to be truncated (not just the total number of blocks) of all\n> designated forks. Then once we decided to do lookup-drop method, we\n> do that for all forks.\"\n\nOne takeaway from Horiguchi-san's comment is to use the number of blocks to invalidate for comparison, instead of all blocks in the fork. That is, use\n\nnblocks = smgrnblocks(fork) - firstDelBlock[fork];\n\nDoes this make sense?\n\nWhat do you think is the reason for summing up all forks? I didn't understand why. Typically, FSM and VM forks are very small. If the main fork is larger than NBuffers / 500, then v14 scans the entire shared buffers for the FSM and VM forks as well as the main fork, resulting in three scans in total.\n\nAlso, if you want to judge the criteria based on the total blocks of all forks, the following if should be placed outside the for loop, right? 
Because this if condition doesn't change inside the for loop.\n\n+\t\t\tif ((nblocks / (uint32)NBuffers) < BUF_DROP_FULLSCAN_THRESHOLD &&\n+\t\t\t\tBlockNumberIsValid(nblocks))\n+\t\t\t{\n\n\n\n> > (2)\n> > +\t\t\tif ((nblocks / (uint32)NBuffers) <\n> > BUF_DROP_FULLSCAN_THRESHOLD &&\n> > +\t\t\t\tBlockNumberIsValid(nblocks))\n> > +\t\t\t{\n> >\n> > The division by NBuffers is not necessary, because both sides of = are\n> > number of blocks.\n> \n> Again I based it from my understanding of the comment above,\n> so nblocks is the sum of all blocks to be truncated for all forks.\n\nBut the left expression of \"<\" is a percentage, while the right one is a block count. Two different units are compared.\n\n\n> > Why is BlockNumberIsValid(nblocks)) call needed?\n> \n> I thought we need to ensure that nblocks is not invalid, so I also added\n\nWhen is it invalid? smgrnblocks() seems to always return a valid block number. Am I seeing a different source code (I saw HEAD)?\n\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Wed, 23 Sep 2020 05:37:24 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> The idea is that we can't use this optimization if the value is not\r\n> cached because we can't rely on lseek behavior. See all the discussion\r\n> between Horiguchi-San and me in the thread above. So, how would you\r\n> ensure that if we don't use Kirk-San's proposal?\r\n\r\nHmm, buggy Linux kernel... (Until when should we be worried about the bug?)\r\n\r\nAccording to the following Horiguchi-san's suggestion, it's during normal operation, not during recovery, when we should be careful, right? 
Then, we can use the current smgrnblocks() as is?\r\n\r\n+\t/*\r\n+\t * We cannot believe the result from smgr_nblocks is always accurate\r\n+\t * because lseek of buggy Linux kernels doesn't account for a recent\r\n+\t * write. However, we can rely on the result from lseek while recovering\r\n+\t * because the first call to this function is not happen just after a file\r\n+\t * extension. Return values on subsequent calls return cached nblocks,\r\n+\t * which should be accurate during recovery.\r\n+\t */\r\n+\tif (!InRecovery && must_accurate)\r\n+\t\treturn InvalidBlockNumber;\r\n+\r\n \treturn result;\r\n} \r\n\r\nIf smgrnblocks() could return a smaller value than the actual file size by one block even during recovery, how about always adding one to the return value of smgrnblocks() in DropRelFileNodeBuffers()? When smgrnblocks() actually returned the correct value, the extra one block is not found in the shared buffer, so DropRelFileNodeBuffers() does no harm.\r\n\r\nOr, add a new function like smgrnblocks_precise() to avoid adding an argument to smgrnblocks()?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Wed, 23 Sep 2020 06:30:52 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, September 23, 2020 2:37 PM, Tsunakawa, Takayuki wrote:\n> > I revised the patch based from my understanding of Horiguchi-san's\n> > comment, but I could be wrong.\n> > Quoting:\n> >\n> > \"\n> > +\t\t\t/* Get the number of blocks for the supplied\n> relation's\n> > fork */\n> > +\t\t\tnblocks = smgrnblocks(smgr_reln,\n> > forkNum[fork_num]);\n> > +\t\t\tAssert(BlockNumberIsValid(nblocks));\n> > +\n> > +\t\t\tif (nblocks <\n> BUF_DROP_FULLSCAN_THRESHOLD)\n> >\n> > As mentioned upthread, the criteria whether we do full-scan or\n> > lookup-drop is how large portion of NBUFFERS this relation-drop 
can be\n> > going to invalidate. So the nblocks above should be the sum of number\n> > of blocks to be truncated (not just the total number of blocks) of all\n> > designated forks. Then once we decided to do lookup-drop method, we\n> > do that for all forks.\"\n> \n> One takeaway from Horiguchi-san's comment is to use the number of blocks\n> to invalidate for comparison, instead of all blocks in the fork. That is, use\n> \n> nblocks = smgrnblocks(fork) - firstDelBlock[fork];\n> \n> Does this make sense?\n\nHmm. OK, I think I got ahead of myself and misunderstood what it meant.\nI'll debug again using ereport just to check that the values and behavior are correct.\nYour comment about the V14 patch made me realize that it reverted to the previous,\nslower version where we scan NBuffers for each fork. Thank you for explaining it.\n\n> What do you think is the reason for summing up all forks? I didn't\n> understand why. Typically, FSM and VM forks are very small. If the main\n> fork is larger than NBuffers / 500, then v14 scans the entire shared buffers for\n> the FSM and VM forks as well as the main fork, resulting in three scans in\n> total.\n> \n> Also, if you want to judge the criteria based on the total blocks of all forks, the\n> following if should be placed outside the for loop, right? Because this if
Because this if\n> condition doesn't change inside the for loop.\n> \n> +\t\t\tif ((nblocks / (uint32)NBuffers) <\n> BUF_DROP_FULLSCAN_THRESHOLD &&\n> +\t\t\t\tBlockNumberIsValid(nblocks))\n> +\t\t\t{\n> \n> \n> \n> > > (2)\n> > > +\t\t\tif ((nblocks / (uint32)NBuffers) <\n> > > BUF_DROP_FULLSCAN_THRESHOLD &&\n> > > +\t\t\t\tBlockNumberIsValid(nblocks))\n> > > +\t\t\t{\n> > >\n> > > The division by NBuffers is not necessary, because both sides of =\n> > > are number of blocks.\n> >\n> > Again I based it from my understanding of the comment above, so\n> > nblocks is the sum of all blocks to be truncated for all forks.\n> \n> But the left expression of \"<\" is a percentage, while the right one is a block\n> count. Two different units are compared.\n> \n\nRight. Makes sense. Fixed.\n\n> > > Why is BlockNumberIsValid(nblocks)) call needed?\n> >\n> > I thought we need to ensure that nblocks is not invalid, so I also\n> > added\n> \n> When is it invalid? smgrnblocks() seems to always return a valid block\n> number. 
Am I seeing a different source code (I saw HEAD)?\n\nIt's based from the discussion upthread to guarantee the cache to be valid while recovery\nand that we don't want to proceed with the optimization in case that nblocks is invalid.\nIt may not be needed so I already removed it, because the correct direction is ensuring that\nsmgrnblocks return the precise value.\nConsidering the test case that Horiguchi-san suggested (attached as separate patch),\nthen maybe there's no need to indicate it in the loop condition.\nFor now, I haven't modified the design (or created a new function) of smgrnblocks, \nand I just updated the patches based from the recent comments.\n\nThank you very much again for the reviews.\n\nBest regards,\nKirk Jamison", "msg_date": "Wed, 23 Sep 2020 07:57:33 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Sep 23, 2020 at 12:00 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > The idea is that we can't use this optimization if the value is not\n> > cached because we can't rely on lseek behavior. See all the discussion\n> > between Horiguchi-San and me in the thread above. So, how would you\n> > ensure that if we don't use Kirk-San's proposal?\n>\n> Hmm, buggy Linux kernel... (Until when should we be worried about the bug?)\n>\n> According to the following Horiguchi-san's suggestion, it's during normal operation, not during recovery, when we should be careful, right?\n>\n\nNo, during recovery also we need to be careful. We need to ensure that\nwe use cached value during recovery and cached value is always\nup-to-date. 
We can't rely on lseek and I have provided some scenario\nup thread [1] where such behavior can cause problem and then see the\nresponse from Tom Lane why the same can be true for recovery as well.\n\nThe basic approach we are trying to pursue here is to rely on the\ncached value of 'number of blocks' (as that always gives correct value\nand even if there is a problem that will be our bug, we don't need to\nrely on OS for correct value and it will be better w.r.t performance\nas well). It is currently only possible during recovery so we are\nusing it in recovery path and later once Thomas's patch to cache it\nfor non-recovery cases is also done, we can use it for non-recovery\ncases as well.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LqaJvT%3DbFOpc4i5Haq4oaVQ6wPbAcg64-Kt1qzp_MZYA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Sep 2020 17:52:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "In v15:\n\n(1)\n+\t\t\t\tfor (cur_blk = firstDelBlock[j]; cur_blk < nblocks; cur_blk++)\n\nThe right side of \"cur_blk <\" should not be nblocks, because nblocks is not the number of the relation fork anymore.\n\n\n(2)\n+\t\t\tBlockNumber\t\tnblocks;\n+\t\t\tnblocks = smgrnblocks(smgr_reln, forkNum[j]) - firstDelBlock[j];\n\nYou should either:\n\n* Combine the two lines into one: BlockNumber nblocks = ...;\n\nor\n\n* Put an empty line between the two lines to separate declarations and execution statements.\n\n\nAfter correcting these, I think you can check the recovery performance.\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Thu, 24 Sep 2020 04:26:37 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, September 24, 
2020 1:27 PM, Tsunakawa-san wrote:\n\n> (1)\n> +\t\t\t\tfor (cur_blk = firstDelBlock[j]; cur_blk <\n> nblocks; cur_blk++)\n> \n> The right side of \"cur_blk <\" should not be nblocks, because nblocks is not\n> the number of the relation fork anymore.\n\nRight. Fixed. It should be the total number of (n)blocks of relation.\n\n> (2)\n> +\t\t\tBlockNumber\t\tnblocks;\n> +\t\t\tnblocks = smgrnblocks(smgr_reln, forkNum[j]) -\n> firstDelBlock[j];\n> \n> You should either:\n> \n> * Combine the two lines into one: BlockNumber nblocks = ...;\n> \n> or\n> \n> * Put an empty line between the two lines to separate declarations and\n> execution statements.\n\nRight. I separated them in the updated patch. And to prevent confusion,\ninstead of nblocks, nTotalBlocks & nBlocksToInvalidate are used.\n\n/* Get the total number of blocks for the supplied relation's fork */\nnTotalBlocks = smgrnblocks(smgr_reln, forkNum[j]);\n\n/* Get the total number of blocks to be invalidated for the specified fork */\nnBlocksToInvalidate = nTotalBlocks - firstDelBlock[j];\n \n\n> After correcting these, I think you can check the recovery performance.\n\nI'll send performance measurement results in the next email. Thanks a lot for the reviews!\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 24 Sep 2020 08:47:06 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hello.\n\nAt Wed, 23 Sep 2020 05:37:24 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n\n# Wow. 
I'm surprised to read it..\n\n> > I revised the patch based from my understanding of Horiguchi-san's comment,\n> > but I could be wrong.\n> > Quoting:\n> > \n> > \"\n> > +\t\t\t/* Get the number of blocks for the supplied relation's\n> > fork */\n> > +\t\t\tnblocks = smgrnblocks(smgr_reln,\n> > forkNum[fork_num]);\n> > +\t\t\tAssert(BlockNumberIsValid(nblocks));\n> > +\n> > +\t\t\tif (nblocks < BUF_DROP_FULLSCAN_THRESHOLD)\n> > \n> > As mentioned upthread, the criteria whether we do full-scan or\n> > lookup-drop is how large portion of NBUFFERS this relation-drop can be\n> > going to invalidate. So the nblocks above should be the sum of number\n> > of blocks to be truncated (not just the total number of blocks) of all\n> > designated forks. Then once we decided to do lookup-drop method, we\n> > do that for all forks.\"\n> \n> One takeaway from Horiguchi-san's comment is to use the number of blocks to invalidate for comparison, instead of all blocks in the fork. That is, use\n> \n> nblocks = smgrnblocks(fork) - firstDelBlock[fork];\n> \n> Does this make sense?\n> \n> What do you think is the reason for summing up all forks? I didn't understand why. Typically, FSM and VM forks are very small. If the main fork is larger than NBuffers / 500, then v14 scans the entire shared buffers for the FSM and VM forks as well as the main fork, resulting in three scans in total.\n\nI thought of summing up smgrnblocks(fork) - firstDelBlock[fork] of all\nforks. I don't mind omitting non-main forks but a comment to explain\nthe reason or reasoning would be needed.\n\nRegards,\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 24 Sep 2020 17:48:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi. \n\n> I'll send performance measurement results in the next email. 
Thanks a lot for\n> the reviews!\n\nBelow are the performance measurement results.\nI was only able to use low-spec machine:\nCPU 4v, Memory 8GB, RHEL, xfs filesystem.\n\n[Failover/Recovery Test]\n1. (Master) Create table (ex. 10,000 tables). Insert data to tables.\n2. (M) DELETE FROM TABLE (ex. all rows of 10,000 tables)\n3. (Standby) To test with failover, pause the WAL replay on standby server.\n(SELECT pg_wal_replay_pause();)\n4. (M) psql -c \"\\timing on\" (measures total execution of SQL queries)\n5. (M) VACUUM (whole db)\n6. (M) After vacuum finishes, stop primary server: pg_ctl stop -w -mi\n7. (S) Resume wal replay and promote standby.\nBecause it's difficult to measure recovery time I used the attached script (resume.sh)\nthat prints timestamp before and after promotion. It basically does the following\n- \"SELECT pg_wal_replay_resume();\" is executed and the WAL application is resumed.\n- \"pg_ctl promote\" to promote standby.\n- The time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\" is measured.\n\n[Results]\nRecovery/Failover performance (in seconds). 3 trial runs.\n\n| shared_buffers | master | patch | %reg | \n|----------------|--------|--------|---------| \n| 128MB | 32.406 | 33.785 | 4.08% | \n| 1GB | 36.188 | 32.747 | -10.51% | \n| 2GB | 41.996 | 32.88 | -27.73% |\n\nThere's a bit of small regression with the default shared_buffers (128MB),\nbut as for the recovery time when we have large NBuffers, it's now at least almost constant\nso there's boosted performance. 
IOW, we enter the optimization most of the time\nduring recovery.\n\nI also did similar benchmark performance as what Tomas did [1],\nsimple \"pgbench -S\" tests (warmup and then 15 x 1-minute runs with\n1, 8 and 16 clients, but I'm not sure if my machine is reliable enough to\nproduce reliable results for 8 clients and more.\n\n| # | master | patch | %reg | \n|------------|-------------|-------------|--------| \n| 1 client | 1676.937825 | 1707.018029 | -1.79% | \n| 8 clients | 7706.835401 | 7529.089044 | 2.31% | \n| 16 clients | 9823.65254 | 9991.184206 | -1.71% |\n\n\nIf there's additional/necessary performance measurement, kindly advise me too.\nThank you in advance.\n\n[1] https://www.postgresql.org/message-id/flat/20200806213334.3bzadeirly3mdtzl%40development#473168a61e229de40eaf36326232f86c\n\nBest regards,\nKirk Jamison", "msg_date": "Fri, 25 Sep 2020 08:18:55 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> No, during recovery also we need to be careful. We need to ensure that\r\n> we use cached value during recovery and cached value is always\r\n> up-to-date. We can't rely on lseek and I have provided some scenario\r\n> up thread [1] where such behavior can cause problem and then see the\r\n> response from Tom Lane why the same can be true for recovery as well.\r\n> \r\n> The basic approach we are trying to pursue here is to rely on the\r\n> cached value of 'number of blocks' (as that always gives correct value\r\n> and even if there is a problem that will be our bug, we don't need to\r\n> rely on OS for correct value and it will be better w.r.t performance\r\n> as well). 
It is currently only possible during recovery so we are\r\n> using it in recovery path and later once Thomas's patch to cache it\r\n> for non-recovery cases is also done, we can use it for non-recovery\r\n> cases as well.\r\n\r\nAlthough I may be still confused, I understood that Kirk-san's patch should:\r\n\r\n* Still focus on speeding up the replay of TRUNCATE during recovery.\r\n\r\n* During recovery, DropRelFileNodeBuffers() gets the cached size of the relation fork. If it is cached, trust it and optimize the buffer invalidation. If it's not cached, we can't trust the return value of smgrnblocks() because it's the lseek(END) return value, so we avoid the optimization.\r\n\r\n* Then, add a new function, say, smgrnblocks_cached() that simply returns the cached block count, and DropRelFileNodeBuffers() uses it instead of smgrnblocks().\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 25 Sep 2020 08:55:03 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> [Results]\n> Recovery/Failover performance (in seconds). 3 trial runs.\n> \n> | shared_buffers | master | patch | %reg |\n> |----------------|--------|--------|---------|\n> | 128MB | 32.406 | 33.785 | 4.08% |\n> | 1GB | 36.188 | 32.747 | -10.51% |\n> | 2GB | 41.996 | 32.88 | -27.73% |\n\nThanks for sharing good results. We want to know if we can get as significant results as you gained before with hundreds of GBs of shared buffers, don't we?\n\n\n> I also did similar benchmark performance as what Tomas did [1], simple\n> \"pgbench -S\" tests (warmup and then 15 x 1-minute runs with 1, 8 and 16\n> clients, but I'm not sure if my machine is reliable enough to produce reliable\n> results for 8 clients and more.\n\nLet me confirm just in case. 
Your patch should not affect pgbench performance, but you measured it. Is there anything you're concerned about?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Fri, 25 Sep 2020 09:01:38 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, September 25, 2020 6:02 PM, Tsunakawa-san wrote:\n\n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > [Results]\n> > Recovery/Failover performance (in seconds). 3 trial runs.\n> >\n> > | shared_buffers | master | patch | %reg |\n> > |----------------|--------|--------|---------|\n> > | 128MB | 32.406 | 33.785 | 4.08% |\n> > | 1GB | 36.188 | 32.747 | -10.51% |\n> > | 2GB | 41.996 | 32.88 | -27.73% |\n> \n> Thanks for sharing good results. We want to know if we can get as\n> significant results as you gained before with hundreds of GBs of shared\n> buffers, don't we?\n\nYes. But I don't have a high-spec machine I could use at the moment.\nI'll try if I can get one by next week. Or if someone would like to reproduce the\ntest with their available higher-spec machines, it would be much appreciated.\nThe test case is upthread [1].\n\n> > I also did similar benchmark performance as what Tomas did [1], simple\n> > \"pgbench -S\" tests (warmup and then 15 x 1-minute runs with 1, 8 and\n> > 16 clients, but I'm not sure if my machine is reliable enough to\n> > produce reliable results for 8 clients and more.\n> \n> Let me confirm just in case. Your patch should not affect pgbench\n> performance, but you measured it. Is there anything you're concerned\n> about?\n> \n\nNot really. Because in the previous emails, the argument was the BufferAlloc \noverhead. But we don't have it in the latest patch. 
But just in case somebody\nasks about benchmark performance, I also posted the results.\n\n[1] https://www.postgresql.org/message-id/OSBPR01MB2341683DEDE0E7A8D045036FEF360%40OSBPR01MB2341.jpnprd01.prod.outlook.com\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Fri, 25 Sep 2020 09:25:49 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Sep 25, 2020 at 2:25 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > No, during recovery also we need to be careful. We need to ensure that\n> > we use cached value during recovery and cached value is always\n> > up-to-date. We can't rely on lseek and I have provided some scenario\n> > up thread [1] where such behavior can cause problem and then see the\n> > response from Tom Lane why the same can be true for recovery as well.\n> >\n> > The basic approach we are trying to pursue here is to rely on the\n> > cached value of 'number of blocks' (as that always gives correct value\n> > and even if there is a problem that will be our bug, we don't need to\n> > rely on OS for correct value and it will be better w.r.t performance\n> > as well). It is currently only possible during recovery so we are\n> > using it in recovery path and later once Thomas's patch to cache it\n> > for non-recovery cases is also done, we can use it for non-recovery\n> > cases as well.\n>\n> Although I may be still confused, I understood that Kirk-san's patch should:\n>\n> * Still focus on speeding up the replay of TRUNCATE during recovery.\n>\n> * During recovery, DropRelFileNodeBuffers() gets the cached size of the relation fork. If it is cached, trust it and optimize the buffer invalidation. 
If it's not cached, we can't trust the return value of smgrnblocks() because it's the lseek(END) return value, so we avoid the optimization.\n>\n\nI agree with the above two points.\n\n> * Then, add a new function, say, smgrnblocks_cached() that simply returns the cached block count, and DropRelFileNodeBuffers() uses it instead of smgrnblocks().\n>\n\nI am not sure if it is worth adding a new function for this. Why not\nsimply add a boolean variable in smgrnblocks for this? BTW, AFAICS,\nthe latest patch doesn't have code to address this point.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 26 Sep 2020 11:39:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Sep 25, 2020 at 1:49 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Hi.\n>\n> > I'll send performance measurement results in the next email. Thanks a lot for\n> > the reviews!\n>\n> Below are the performance measurement results.\n> I was only able to use low-spec machine:\n> CPU 4v, Memory 8GB, RHEL, xfs filesystem.\n>\n> [Failover/Recovery Test]\n> 1. (Master) Create table (ex. 10,000 tables). Insert data to tables.\n> 2. (M) DELETE FROM TABLE (ex. all rows of 10,000 tables)\n> 3. (Standby) To test with failover, pause the WAL replay on standby server.\n> (SELECT pg_wal_replay_pause();)\n> 4. (M) psql -c \"\\timing on\" (measures total execution of SQL queries)\n> 5. (M) VACUUM (whole db)\n> 6. (M) After vacuum finishes, stop primary server: pg_ctl stop -w -mi\n> 7. (S) Resume wal replay and promote standby.\n> Because it's difficult to measure recovery time I used the attached script (resume.sh)\n> that prints timestamp before and after promotion. 
It basically does the following\n> - \"SELECT pg_wal_replay_resume();\" is executed and the WAL application is resumed.\n> - \"pg_ctl promote\" to promote standby.\n> - The time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\" is measured.\n>\n> [Results]\n> Recovery/Failover performance (in seconds). 3 trial runs.\n>\n> | shared_buffers | master | patch | %reg |\n> |----------------|--------|--------|---------|\n> | 128MB | 32.406 | 33.785 | 4.08% |\n> | 1GB | 36.188 | 32.747 | -10.51% |\n> | 2GB | 41.996 | 32.88 | -27.73% |\n>\n> There's a bit of small regression with the default shared_buffers (128MB),\n>\n\nI feel we should try to address this. Basically, we can see the\nsmallest value of shared buffers above which the new algorithm is\nbeneficial and try to use that as threshold for doing this\noptimization. I don't think it is beneficial to use this optimization\nfor a small value of shared_buffers.\n\n> but as for the recovery time when we have large NBuffers, it's now at least almost constant\n> so there's boosted performance. IOW, we enter the optimization most of the time\n> during recovery.\n>\n\nYeah, that is good to see. We can probably try to check with a much\nlarger value of shared buffers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 26 Sep 2020 11:44:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> I agree with the above two points.\r\n\r\nThank you. I'm relieved to know I didn't misunderstand.\r\n\r\n\r\n> > * Then, add a new function, say, smgrnblocks_cached() that simply returns\r\n> the cached block count, and DropRelFileNodeBuffers() uses it instead of\r\n> smgrnblocks().\r\n> >\r\n> \r\n> I am not sure if it worth adding a new function for this. 
Why not simply add a\r\n> boolean variable in smgrnblocks for this?\r\n\r\n\r\nOne reason is that adding an argument requires modification of existing call sites (10 + a few). Another is that, although this may be different for each person's taste, it's sometimes not easy to understand when a function call with true/false appears. One such example is find_XXX(some_args, true/false), where the true/false represents missing_ok. Another example is as follows. I often wonder \"what's the meaning of this false, and that true?\"\r\n\r\n if (!InstallXLogFileSegment(&destsegno, tmppath, false, 0, false))\r\n elog(ERROR, \"InstallXLogFileSegment should not have failed\");\r\n\r\nFortunately, the new function is very short and doesn't duplicate much code. The function is a simple getter and the function name can convey the meaning straight (if the name is good.)\r\n\r\n\r\n> BTW, AFAICS, the latest patch\r\n> doesn't have code to address this point.\r\n\r\nKirk-san, can you address this? I don't mind much if you add an argument or a new function.\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 28 Sep 2020 02:50:20 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Monday, September 28, 2020 11:50 AM, Tsunakawa-san wrote:\r\n\r\n> From: Amit Kapila <amit.kapila16@gmail.com>\r\n> > I agree with the above two points.\r\n> \r\n> Thank you. I'm relieved to know I didn't misunderstand.\r\n> \r\n> \r\n> > > * Then, add a new function, say, smgrnblocks_cached() that simply\r\n> > > returns\r\n> > the cached block count, and DropRelFileNodeBuffers() uses it instead\r\n> > of smgrnblocks().\r\n> > >\r\n> >\r\n> > I am not sure if it worth adding a new function for this. 
Why not\r\n> > simply add a boolean variable in smgrnblocks for this?\r\n> \r\n> \r\n> One reason is that adding an argument requires modification of existing call\r\n> sites (10 + a few). Another is that, although this may be different for each\r\n> person's taste, it's sometimes not easy to understand when a function call\r\n> with true/false appears. One such example is find_XXX(some_args,\r\n> true/false), where the true/false represents missing_ok. Another example is\r\n> as follows. I often wonder \"what's the meaning of this false, and that true?\"\r\n> \r\n> if (!InstallXLogFileSegment(&destsegno, tmppath, false, 0, false))\r\n> elog(ERROR, \"InstallXLogFileSegment should not have failed\");\r\n> \r\n> Fortunately, the new function is very short and doesn't duplicate much code.\r\n> The function is a simple getter and the function name can convey the\r\n> meaning straight (if the name is good.)\r\n> \r\n> \r\n> > BTW, AFAICS, the latest patch\r\n> > doesn't have code to address this point.\r\n> \r\n> Kirk-san, can you address this? I don't mind much if you add an argument\r\n> or a new function.\r\n\r\nI maybe missing something. so I'd like to check if my understanding is correct,\r\nas I'm confused with what do we mean exactly by \"cached value of nblocks\".\r\n\r\nDiscussed upthread, smgrnblocks() does not always guarantee that it returns a\r\n\"cached\" nblocks even in recovery.\r\nWhen we enter this path in recovery path of DropRelFileNodeBuffers,\r\naccording to Tsunakawa-san:\r\n>> * During recovery, DropRelFileNodeBuffers() gets the cached size of the relation fork. If it is cached, trust it and optimize the buffer invalidation. 
If it's not cached, we can't trust the return value of smgrnblocks() because it's the lseek(END) return value, so we avoid the optimization.\r\n\r\n+\tnTotalBlocks = smgrnblocks(smgr_reln, forkNum[j]);\r\n\r\nBut this comment in the smgrnblocks source code:\r\n\t * For now, we only use cached values in recovery due to lack of a shared\r\n\t * invalidation mechanism for changes in file size.\r\n\t */\r\n\tif (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\r\n\t\treturn reln->smgr_cached_nblocks[forknum];\r\n\r\nSo the nblocks returned in DropRelFileNodeBuffers are still not guaranteed to be \"cached values\"?\r\nAnd so we want to add a new function (I think it's less complicated than modifying smgrnblocks):\r\n\r\n/*\r\n *\tsmgrnblocksvalid() -- Calculate the number of blocks that are cached in\r\n *\t\t\t\t\t the supplied relation.\r\n *\r\n * It is equivalent to calling smgrnblocks, but only used in recovery for now\r\n * when DropRelFileNodeBuffers() is called, to ensure that only the cached value\r\n * is used, which is always valid.\r\n *\r\n * This returns InvalidBlockNumber when smgr_cached_nblocks is not available\r\n * and when isCached is false.\r\n */\r\nBlockNumber\r\nsmgrnblocksvalid(SMgrRelation reln, ForkNumber forknum, bool isCached)\r\n{\r\n\tBlockNumber result;\r\n\r\n\t/*\r\n\t * For now, we only use cached values in recovery due to lack of a shared\r\n\t * invalidation mechanism for changes in file size.\r\n\t */\r\n\tif (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber\r\n\t\t&& isCached)\r\n\t\treturn reln->smgr_cached_nblocks[forknum];\r\n\r\n\tresult = smgrsw[reln->smgr_which].smgr_nblocks(reln, forknum);\r\n\r\n\treln->smgr_cached_nblocks[forknum] = result;\r\n\r\n\tif (!InRecovery && !isCached)\r\n\t\treturn InvalidBlockNumber;\r\n\r\n\treturn result;\r\n}\r\n\r\nThen in DropRelFileNodeBuffers\r\n+\tnTotalBlocks = smgrnblocksvalid(smgr_reln, forkNum[j], true);\r\n\r\nIs 
my understanding above correct?\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Mon, 28 Sep 2020 07:29:57 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "\tFrom: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> Is my understanding above correct?\r\n\r\nNo. I simply meant DropRelFileNodeBuffers() calls the following function, and avoids the optimization if it returns InvalidBlockNumber.\r\n\r\n\r\nBlockNumber\r\nsmgrcachednblocks(SMgrRelation reln, ForkNumber forknum)\r\n{\r\n\treturn reln->smgr_cached_nblocks[forknum];\r\n}\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 28 Sep 2020 08:07:40 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Monday, September 28, 2020 5:08 PM, Tsunakawa-san wrote:\r\n\r\n> \tFrom: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> > Is my understanding above correct?\r\n> \r\n> No. I simply meant DropRelFileNodeBuffers() calls the following function,\r\n> and avoids the optimization if it returns InvalidBlockNumber.\r\n> \r\n> \r\n> BlockNumber\r\n> smgrcachednblocks(SMgrRelation reln, ForkNumber forknum) {\r\n> \treturn reln->smgr_cached_nblocks[forknum];\r\n> }\r\n\r\nThank you for clarifying. 
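As a standalone sketch (outside PostgreSQL, with a simplified stand-in for the real smgr types; the array bound and sentinel value mirror the usual conventions but are illustrative here), the plain getter being proposed behaves like this:

```c
#include <assert.h>

typedef unsigned int BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)
#define MAX_FORKNUM 3

/* Simplified stand-in for SMgrRelationData: only the cached fork sizes. */
typedef struct SMgrRelationData
{
	BlockNumber smgr_cached_nblocks[MAX_FORKNUM + 1];
} SMgrRelationData;

/*
 * Plain getter in the spirit of the sketch above: return the cached block
 * count for the fork, which is InvalidBlockNumber when no size has been
 * cached yet.  The caller gives up the optimization on that sentinel.
 */
static BlockNumber
smgrcachednblocks(SMgrRelationData *reln, int forknum)
{
	return reln->smgr_cached_nblocks[forknum];
}
```

The point of keeping it this small is that the "is the value trustworthy" decision stays with the caller, which simply tests for InvalidBlockNumber.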
\r\n\r\nSo in the new function, it goes something like:\r\n\tif (InRecovery)\r\n\t{\r\n\t\tif (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\r\n\t\t\treturn reln->smgr_cached_nblocks[forknum];\r\n\t\telse\r\n\t\t\treturn InvalidBlockNumber;\r\n\t}\r\n\r\nI've revised the patch and added the new function accordingly in the attached file.\r\nI also did not remove the duplicate code from smgrnblocks because Amit-san mentioned\r\nthat when the caching for non-recovery cases is implemented, we can use it\r\nfor non-recovery cases as well.\r\n\r\nAlthough I am not sure if the way it's written in DropRelFileNodeBuffers is okay.\r\nBlockNumberIsValid(nTotalBlocks)\r\n \r\n\t\t\tnTotalBlocks = smgrcachednblocks(smgr_reln, forkNum[j]);\r\n\t\t\tnBlocksToInvalidate = nTotalBlocks - firstDelBlock[j];\r\n\r\n\t\t\tif (BlockNumberIsValid(nTotalBlocks) &&\r\n\t\t\t\tnBlocksToInvalidate < BUF_DROP_FULLSCAN_THRESHOLD)\r\n\t\t\t{\r\n\t\t\t\t//enter optimization loop\r\n\t\t\t}\r\n\t\t\telse\r\n\t\t\t{\r\n\t\t\t\t//full scan for each fork \r\n\t\t\t}\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Mon, 28 Sep 2020 08:57:36 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Mon, 28 Sep 2020 08:57:36 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in \n> On Monday, September 28, 2020 5:08 PM, Tsunakawa-san wrote:\n> \n> > \tFrom: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > > Is my understanding above correct?\n> > \n> > No. I simply meant DropRelFileNodeBuffers() calls the following function,\n> > and avoids the optimization if it returns InvalidBlockNumber.\n> > \n> > \n> > BlockNumber\n> > smgrcachednblocks(SMgrRelation reln, ForkNumber forknum) {\n> > \treturn reln->smgr_cached_nblocks[forknum];\n> > }\n> \n> Thank you for clarifying. 
\n\nFWIW, I (and maybe Amit) am thinking that the property we need here is\nnot it is cached or not but the accuracy of the returned file length,\nand that the \"cached\" property should be hidden behind the API.\n\nAnother reason for not adding this function is the cached value is not\nreally reliable on non-recovery environment.\n\n> So in the new function, it goes something like:\n> \tif (InRecovery)\n> \t{\n> \t\tif (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n> \t\t\treturn reln->smgr_cached_nblocks[forknum];\n> \t\telse\n> \t\t\treturn InvalidBlockNumber;\n> \t}\n\nIf we add the new function, it should reutrn InvalidBlockNumber\nwithout consulting smgr_nblocks().\n\n> I've revised the patch and added the new function accordingly in the attached file.\n> I also did not remove the duplicate code from smgrnblocks because Amit-san mentioned\n> that when the caching for non-recovery cases is implemented, we can use it\n> for non-recovery cases as well.\n> \n> Although I am not sure if the way it's written in DropRelFileNodeBuffers is okay.\n> BlockNumberIsValid(nTotalBlocks)\n> \n> \t\t\tnTotalBlocks = smgrcachednblocks(smgr_reln, forkNum[j]);\n> \t\t\tnBlocksToInvalidate = nTotalBlocks - firstDelBlock[j];\n> \n> \t\t\tif (BlockNumberIsValid(nTotalBlocks) &&\n> \t\t\t\tnBlocksToInvalidate < BUF_DROP_FULLSCAN_THRESHOLD)\n> \t\t\t{\n> \t\t\t\t//enter optimization loop\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\t//full scan for each fork \n> \t\t\t}\n\nHmm. The current loop in DropRelFileNodeBuffers looks like this:\n\n if (InRecovery)\n\t for (for each forks)\n\t if (the fork meets the criteria)\n\t\t <optimized dropping>\n else\n\t\t <full scan>\n\nI think this is somewhat different from the current\ndiscussion. Whether we sum-up the number of blcoks for all forks or\njust use that of the main fork, we should take full scan if we failed\nto know the accurate size for any one of the forks. 
(In other words,\nit is wasteful to run more than one full scan per\ndrop.)\n\nCome to think of it, we can naturally sum up all forks' blocks, since\nwe need to call smgrnblocks for all forks anyway to know whether the\noptimization is usable.\n\nSo that block would be something like this:\n\n    for (forks of the rel)\n\t    /* the function returns InvalidBlockNumber if !InRecovery */\n\t    if (smgrnblocks returned InvalidBlockNumber) \n\t        total_blocks = InvalidBlockNumber;\n\t\t    break;\n        total_blocks += nblocks of this fork\n\n    /* <we could rely on the fact that InvalidBlockNumber is zero> */\n    if (total_blocks != InvalidBlockNumber && total_blocks < threshold)\n    \t for (forks of the rel)\n\t        for (blocks of the fork)\n                <try dropping the buffer for the block>\n    else\n        <full scan dropping>\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 29 Sep 2020 10:34:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> I also did not remove the duplicate code from smgrnblocks because Amit-san\r\n> mentioned that when the caching for non-recovery cases is implemented, we\r\n> can use it for non-recovery cases as well.\r\n\r\nBut the extra code is not used now. The code for future usage should be added when it becomes necessary. 
Duplicate code may make people think that you should add an argument to smgrnblocks() instead of adding a new function.\r\n\r\n+\t\tif (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\r\n+\t\t\treturn reln->smgr_cached_nblocks[forknum];\r\n+\t\telse\r\n+\t\t\treturn InvalidBlockNumber;\r\n\r\nAnyway, the else block is redundant, as the variable contains InvalidBlockNumber.\r\n\r\nAlso, as Amit-san mentioned, the cause of the slight performance regression when shared_buffers is small needs to be investigated and addressed. I think you can do it after sharing the performance result with a large shared_buffers.\r\n\r\nI found no other problem.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 29 Sep 2020 01:51:12 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Sep 29, 2020 at 7:21 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n>\n> Also, as Amit-san mentioned, the cause of the slight performance regression when shared_buffers is small needs to be investigated and addressed.\n>\n\nYes, I think it is mainly because extra instructions added in the\noptimized code which doesn't make up for the loss when the size of\nshared buffers is small.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 29 Sep 2020 08:18:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, September 29, 2020 10:35 AM, Horiguchi-san wrote:\n\n> FWIW, I (and maybe Amit) am thinking that the property we need here is not it\n> is cached or not but the accuracy of the returned file length, and that the\n> \"cached\" property should be hidden behind the API.\n> \n> Another 
reason for not adding this function is the cached value is not really\n> reliable on non-recovery environment.\n> \n> > So in the new function, it goes something like:\n> > \tif (InRecovery)\n> > \t{\n> > \t\tif (reln->smgr_cached_nblocks[forknum] !=\n> InvalidBlockNumber)\n> > \t\t\treturn reln->smgr_cached_nblocks[forknum];\n> > \t\telse\n> > \t\t\treturn InvalidBlockNumber;\n> > \t}\n> \n> If we add the new function, it should reutrn InvalidBlockNumber without\n> consulting smgr_nblocks().\n\nSo here's how I revised it\nsmgrcachednblocks(SMgrRelation reln, ForkNumber forknum)\n{\n\tif (InRecovery)\n\t{\n\t\tif (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n\t\t\treturn reln->smgr_cached_nblocks[forknum];\n\t}\n\treturn InvalidBlockNumber;\n\n\n> Hmm. The current loop in DropRelFileNodeBuffers looks like this:\n> \n> if (InRecovery)\n> \t for (for each forks)\n> \t if (the fork meets the criteria)\n> \t\t <optimized dropping>\n> else\n> \t\t <full scan>\n> \n> I think this is somewhat different from the current discussion. Whether we\n> sum-up the number of blcoks for all forks or just use that of the main fork, we\n> should take full scan if we failed to know the accurate size for any one of the\n> forks. (In other words, it is stupid that we run a full scan for more than one\n> fork at a\n> drop.)\n> \n> Come to think of that, we can naturally sum-up all forks' blocks since anyway\n> we need to call smgrnblocks for all forks to know the optimzation is usable.\n\nI understand. We really don't have to enter the optimization when we know the\nfile size is inaccurate. 
That also makes the patch simpler.\n\n> So that block would be something like this:\n> \n> for (forks of the rel)\n> \t /* the function returns InvalidBlockNumber if !InRecovery */\n> \t if (smgrnblocks returned InvalidBlockNumber)\n> \t total_blocks = InvalidBlockNumber;\n> \t\t break;\n> total_blocks += nbloks of this fork\n> \n> /* <we could rely on the fact that InvalidBlockNumber is zero> */\n> if (total_blocks != InvalidBlockNumber && total_blocks < threshold)\n> \t for (forks of the rel)\n> \t for (blocks of the fork)\n> <try dropping the buffer for the block>\n> else\n> <full scan dropping>\n\nI followed this logic in the attached patch.\nThank you very much for the thoughtful reviews.\n\nPerformance measurement for large shared buffers to follow.\n\nBest regards,\nKirk Jamison", "msg_date": "Tue, 29 Sep 2020 04:04:16 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\n\nI revised the patch again. Attached is V19.\nThe previous patch's algorithm missed entering the optimization loop.\nSo I corrected that and removed the extra function I added\nin the previous versions.\n\nThe revised patch goes something like this:\n\tfor (forks of rel)\n\t{\n\t\tif (smgrcachednblocks() == InvalidBlockNumber) \n\t\t\tbreak; //go to full scan\n\t\tif (nBlocksToInvalidate < buf_full_scan_threshold)\n\t\t\tfor (blocks of the fork)\n\t\telse\n\t\t\tbreak; //go to full scan\n\t}\n\t<execute full scan>\n\nRecovery performance measurement results below.\nBut it seems there are overhead even with large shared buffers.\n\n| s_b | master | patched | %reg | \n|-------|--------|---------|-------| \n| 128MB | 36.052 | 39.451 | 8.62% | \n| 1GB | 21.731 | 21.73 | 0.00% | \n| 20GB | 24.534 | 25.137 | 2.40% | \n| 100GB | 30.54 | 31.541 | 3.17% |\n\nI'll investigate further. 
Or if you have any feedback or advice, I'd appreciate it.\n\nMachine specs used for testing:\nRHEL7, 8 core, 256 GB RAM, xfs\n\nConfiguration:\nwal_level = replica\nautovacuum = off\nfull_page_writes = off\n\n# For streaming replication from primary. \nsynchronous_commit = remote_write\nsynchronous_standby_names = ''\n\n# For Standby.\n#hot_standby = on\n#primary_conninfo\n\nshared_buffers = 128MB\n# 1GB, 20GB, 100GB\n\nJust in case it helps for some understanding,\nI also attached the recovery log 018_wal_optimize_node_replica.log\nwith some ereport that prints whether we enter the optimization loop or do full scan.\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 1 Oct 2020 01:55:10 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> Recovery performance measurement results below.\n> But it seems there are overhead even with large shared buffers.\n> \n> | s_b | master | patched | %reg |\n> |-------|--------|---------|-------|\n> | 128MB | 36.052 | 39.451 | 8.62% |\n> | 1GB | 21.731 | 21.73 | 0.00% |\n> | 20GB | 24.534 | 25.137 | 2.40% |\n> | 100GB | 30.54 | 31.541 | 3.17% |\n\nDid you really check that the optimization path is entered and the traditional path is never entered?\n\nWith the following code, when the main fork does not meet the optimization criteria, other forks are not optimized as well. 
You want to determine each fork's optimization separately, don't you?\n\n+\t\t/* If blocks are invalid, exit the optimization and execute full scan */\n+\t\tif (nTotalBlocks == InvalidBlockNumber)\n+\t\t\tbreak;\n\n\n+\t\telse\n+\t\t\tbreak;\n+\t}\n \tfor (i = 0; i < NBuffers; i++)\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n", "msg_date": "Thu, 1 Oct 2020 02:40:52 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Oct 1, 2020 at 8:11 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > Recovery performance measurement results below.\n> > But it seems there are overhead even with large shared buffers.\n> >\n> > | s_b | master | patched | %reg |\n> > |-------|--------|---------|-------|\n> > | 128MB | 36.052 | 39.451 | 8.62% |\n> > | 1GB | 21.731 | 21.73 | 0.00% |\n> > | 20GB | 24.534 | 25.137 | 2.40% |\n> > | 100GB | 30.54 | 31.541 | 3.17% |\n>\n> Did you really check that the optimization path is entered and the traditional path is never entered?\n>\n\nI have one idea for performance testing. We can even test this for\nnon-recovery paths by removing the recovery-related check like only\nuse it when there are cached blocks. You can do this if testing via\nrecovery path is difficult because at the end performance should be\nsame for recovery and non-recovery paths.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Oct 2020 08:18:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> I have one idea for performance testing. 
We can even test this for\r\n> non-recovery paths by removing the recovery-related check like only\r\n> use it when there are cached blocks. You can do this if testing via\r\n> recovery path is difficult because at the end performance should be\r\n> same for recovery and non-recovery paths.\r\n\r\nThat's a good idea.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Thu, 1 Oct 2020 02:55:46 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 1 Oct 2020 02:40:52 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> With the following code, when the main fork does not meet the\n> optimization criteria, other forks are not optimized as well. You\n> want to determine each fork's optimization separately, don't you?\n\nIn more detail, if smgrcachednblocks() returned InvalidBlockNumber for\nany of the forks, we should give up the optimization at all since we\nneed to run a full scan anyway. On the other hand, if any of the\nforks is smaller than the threshold, we still can use the optimization\nwhen we know the accurate block number of all the forks.\n\nStill, I prefer to use total block number of all forks since we anyway\nvisit the all forks. 
Is there any reason to exclude forks other than\nthe main fork while we visit all of them already?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 01 Oct 2020 12:17:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> In more detail, if smgrcachednblocks() returned InvalidBlockNumber for\n> any of the forks, we should give up the optimization at all since we\n> need to run a full scan anyway. On the other hand, if any of the\n> forks is smaller than the threshold, we still can use the optimization\n> when we know the accurate block number of all the forks.\n\nAh, I got your point (many eyes in open source development is nice.) Still, I feel it's better to treat each fork separately, because the inner loop in the traditional path may be able to skip forks that have been already processed in the optimization path. For example, if the forks[] array contains {fsm, vm, main} in this order (I know main is usually put at the beginning), fsm and vm are processed in the optimization path and the inner loop in the traditional path can skip fsm and vm.\n\n> Still, I prefer to use total block number of all forks since we anyway\n> visit the all forks. Is there any reason to exclude forks other than\n> the main fork while we visit all of them already?\n\nWhen the number of cached blocks for a main fork is below the threshold but the total cached blocks of all forks exceeds the threshold, the optimization is skipped. 
I think it's mottainai.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n", "msg_date": "Thu, 1 Oct 2020 04:20:27 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 1 Oct 2020 04:20:27 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > In more detail, if smgrcachednblocks() returned InvalidBlockNumber for\n> > any of the forks, we should give up the optimization at all since we\n> > need to run a full scan anyway. On the other hand, if any of the\n> > forks is smaller than the threshold, we still can use the optimization\n> > when we know the accurate block number of all the forks.\n> \n> Ah, I got your point (many eyes in open source development is nice.) Still, I feel it's better to treat each fork separately, because the inner loop in the traditional path may be able to skip forks that have been already processed in the optimization path. For example, if the forks[] array contains {fsm, vm, main} in this order (I know main is usually put at the beginning), fsm and vm are processed in the optimization path and the inner loop in the traditional path can skip fsm and vm.\n\nI thought that the advantage of this optimization is that we don't\nneed to visit all buffers? If we need to run a full-scan for any\nreason, there's no point in looking up already-visited buffers\nagain. That's just wasteful cycles. Am I missing something?\n\n\n> > Still, I prefer to use total block number of all forks since we anyway\n> > visit the all forks. Is there any reason to exclude forks other than\n> > the main fork while we visit all of them already?\n> \n> When the number of cached blocks for a main fork is below the threshold but the total cached blocks of all forks exceeds the threshold, the optimization is skipped. I think it's mottainai.\n\nI don't understand. If we chose the optimized dropping, the reason\nis that the number of buffer lookups is lower than a certain threshold. 
I think it's mottainai.\n\nI don't understand. If we chose to the optimized dropping, the reason\nis the number of buffer lookup is fewer than a certain threashold. Why\ndo you think that the fork kind a buffer belongs to is relevant to the\ncriteria?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 01 Oct 2020 14:09:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, October 1, 2020 11:49 AM, Amit Kapila wrote:\r\n> On Thu, Oct 1, 2020 at 8:11 AM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> >\r\n> > From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> > > Recovery performance measurement results below.\r\n> > > But it seems there are overhead even with large shared buffers.\r\n> > >\r\n> > > | s_b | master | patched | %reg |\r\n> > > |-------|--------|---------|-------|\r\n> > > | 128MB | 36.052 | 39.451 | 8.62% |\r\n> > > | 1GB | 21.731 | 21.73 | 0.00% |\r\n> > > | 20GB | 24.534 | 25.137 | 2.40% | 100GB | 30.54 | 31.541 |\r\n> > > | 3.17% |\r\n> >\r\n> > Did you really check that the optimization path is entered and the traditional\r\n> path is never entered?\r\n> >\r\n\r\nOops. Thanks Tsunakawa-san for catching that. \r\nWill fix in the next patch, replacing break with continue.\r\n\r\n> I have one idea for performance testing. We can even test this for\r\n> non-recovery paths by removing the recovery-related check like only use it\r\n> when there are cached blocks. 
You can do this if testing via recovery path is
> difficult because at the end performance should be the same for recovery and
> non-recovery paths.

For the non-recovery path, did you mean by any chance
measuring the cache hit rate for varying shared_buffers?

SELECT
 sum(heap_blks_read) as heap_read,
 sum(heap_blks_hit) as heap_hit,
 sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM
 pg_statio_user_tables;

Regards,
Kirk Jamison
", "msg_date": "Thu, 1 Oct 2020 05:43:34 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>
> For the non-recovery path, did you mean by any chance
> measuring the cache hit rate for varying shared_buffers?

No. You can test the speed of DropRelFileNodeBuffers() during normal operation, i.e. by running TRUNCATE on psql, instead of performing recovery. To enable that, you can just remove the checks for recovery, i.e. remove the check of InRecovery and of whether the value is cached or not.

Regards
Takayuki Tsunakawa
", "msg_date": "Thu, 1 Oct 2020 06:32:26 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
> I thought that the advantage of this optimization is that we don't
> need to visit all buffers? If we need to run a full scan for any
> reason, there's no point in looking up already-visited buffers
> again. That's just wasteful cycles. Am I missing something?
>
> I don't understand. If we chose the optimized dropping, the reason
> is that the number of buffer lookups is fewer than a certain threshold. Why
> do you think that the kind of fork a buffer belongs to is relevant to the
> criteria?

I rethought this, and you certainly have a point, but... OK, I think I understood. I had been thinking about it in an overly complicated way. In other words, you're suggesting "Let's simply treat all forks as one relation to determine whether to optimize," right? That is, the code simply becomes:

Sums up the number of buffers to invalidate in all forks;
if (the cached sizes of all forks are valid && # of buffers to invalidate < THRESHOLD)
{
	do the optimized way;
	return;
}
do the traditional way;

This will be simple, and I'm +1.

Regards
Takayuki Tsunakawa
", "msg_date": "Thu, 1 Oct 2020 07:51:59 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, October 1, 2020 4:52 PM, Tsunakawa-san wrote:

> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
> > I thought that the advantage of this optimization is that we don't
> > need to visit all buffers? If we need to run a full scan for any
> > reason, there's no point in looking up already-visited buffers again.
> > That's just wasteful cycles. Am I missing something?
> >
> > I don't understand. If we chose the optimized dropping, the reason
> > is that the number of buffer lookups is fewer than a certain threshold. Why
> > do you think that the kind of fork a buffer belongs to is relevant to the
> > criteria?
>
> I rethought this, and you certainly have a point, but... OK, I think I
> understood. I had been thinking about it in an overly complicated way.
> In other words, you're suggesting "Let's simply treat all forks as one
> relation to determine whether to optimize," right? That is, the code
> simply becomes:
>
> Sums up the number of buffers to invalidate in all forks;
> if (the cached sizes of all forks are valid && # of buffers to invalidate < THRESHOLD)
> {
> 	do the optimized way;
> 	return;
> }
> do the traditional way;
>
> This will be simple, and I'm +1.

This is actually close to the v18 I posted trying Horiguchi-san's approach, but that
patch had a bug. So attached is an updated version (v20) trying this approach again.
I hope it's bug-free this time.

Regards,
Kirk Jamison", "msg_date": "Thu, 1 Oct 2020 12:55:34 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 1 Oct 2020 12:55:34 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in
> On Thursday, October 1, 2020 4:52 PM, Tsunakawa-san wrote:
>
> > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
> > > I thought that the advantage of this optimization is that we don't
> > > need to visit all buffers? If we need to run a full scan for any
> > > reason, there's no point in looking up already-visited buffers again.
> > > That's just wasteful cycles. Am I missing something?
> > >
> > > I don't understand. If we chose the optimized dropping, the reason
> > > is that the number of buffer lookups is fewer than a certain threshold. Why
> > > do you think that the kind of fork a buffer belongs to is relevant to the
> > > criteria?
> >
> > I rethought this, and you certainly have a point, but... OK, I think I
> > understood. I had been thinking about it in an overly complicated way.
> > In other words, you're suggesting "Let's simply treat all forks as one
> > relation to determine whether to optimize," right? That is, the code
> > simply becomes:

Exactly. The concept of the threshold is that if we are expected to
repeat buffer lookups more times than that, we consider a one-time full scan
more efficient.
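The threshold rule being discussed here can be sketched roughly like this in C. This is an illustrative sketch only: the constant name BUF_DROP_FULL_SCAN_THRESHOLD follows the name used later in the thread's patch, but its value, the helper's name, and the surrounding types are simplified stand-ins, not the actual PostgreSQL implementation:

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)
#define BUF_DROP_FULL_SCAN_THRESHOLD 512	/* placeholder value */

/*
 * Decide whether the optimized per-block lookup may be used: sum the
 * to-be-dropped blocks over all forks, and give up (i.e. fall back to
 * the full buffer-pool scan) as soon as any fork's size is not
 * reliably cached.
 */
static bool
can_use_optimized_drop(const BlockNumber *cached_nblocks,
					   const BlockNumber *first_del_block, int nforks)
{
	BlockNumber nblocks_to_invalidate = 0;
	int			i;

	for (i = 0; i < nforks; i++)
	{
		if (cached_nblocks[i] == InvalidBlockNumber)
			return false;		/* size not trustworthy: take the full scan */
		nblocks_to_invalidate += cached_nblocks[i] - first_del_block[i];
	}
	return nblocks_to_invalidate < BUF_DROP_FULL_SCAN_THRESHOLD;
}
```

If such a function returns true, each remaining block of each fork can be probed individually in the buffer mapping table; otherwise the whole buffer pool is scanned once, as in the unpatched code.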
Since we know we are going to drop buffers of all (or
the specified) forks of the relation at once, the number of lookups
is naturally the sum of the expected numbers of buffers of all
forks.

> > whether to optimize," right? That is, the code simply becomes:
> >
> > Sums up the number of buffers to invalidate in all forks;
> > if (the cached sizes of all forks are valid && # of buffers to invalidate < THRESHOLD)
> > {
> > 	do the optimized way;
> > 	return;
> > }
> > do the traditional way;
> >
> > This will be simple, and I'm +1.

Thanks!

> This is actually close to the v18 I posted trying Horiguchi-san's approach, but that
> patch had a bug. So attached is an updated version (v20) trying this approach again.
> I hope it's bug-free this time.

Thanks for the new version.

- *		XXX currently it sequentially searches the buffer pool, should be
- *		changed to more clever ways of searching. However, this routine
- *		is used only in code paths that aren't very performance-critical,
- *		and we shouldn't slow down the hot paths to make it faster ...
+ *		XXX The relation might have extended before this, so this path is

The following description is found in the comment for FlushRelationBuffers.

> *		XXX currently it sequentially searches the buffer pool, should be
> *		changed to more clever ways of searching. This routine is not
> *		used in any performance-critical code paths, so it's not worth
> *		adding additional overhead to normal paths to make it go faster;
> *		but see also DropRelFileNodeBuffers.

This looks to me like "We won't do that kind of optimization for
FlushRelationBuffers, but DropRelFileNodeBuffers would need it". If
so, don't we need to revise the comment together?

- *		XXX currently it sequentially searches the buffer pool, should be
- *		changed to more clever ways of searching. However, this routine
- *		is used only in code paths that aren't very performance-critical,
- *		and we shouldn't slow down the hot paths to make it faster ...
+ *		XXX The relation might have extended before this, so this path is
+ *		only optimized during recovery when we can get a reliable cached
+ *		value of blocks for specified relation. In addition, it is safe to
+ *		do this since there are no other processes but the startup process
+ *		that changes the relation size during recovery. Otherwise, or if
+ *		not in recovery, proceed to usual invalidation process, where it
+ *		sequentially searches the buffer pool.

This should no longer be an XXX comment. It also seems to me too
detailed for this function's level. How about something like the
following? (except its syntax, or phrasing :p)

===
If the expected maximum number of buffers to drop is small enough
compared to NBuffers, individual buffers are located by
BufTableLookup. Otherwise we scan through all buffers. Since we
mustn't leave a buffer behind, we take the latter way unless the
number is not reliably identified. See smgrcachednblocks() for
details.
===

(I'm still mildly opposed to the function name, which seems to expose
 detail too much.)

+	 * Get the total number of cached blocks and to-be-invalidated blocks
+	 * of the relation. If a fork's nblocks is not valid, break the loop.

The number of file blocks is not usually equal to the number of
existing buffers for the file. We might need to explain that
limitation here.

+	for (j = 0; j < nforks; j++)

Though I understand that j is considered to be in connection with the
fork number, I'm a bit uncomfortable that j is used for the outermost
loop.

+			for (curBlock = firstDelBlock[j]; curBlock < nTotalBlocks; curBlock++)

Mmm. We should compare curBlock with the number of blocks of the fork,
not the total of all forks.

+				uint32		newHash;		/* hash value for newTag */
+				BufferTag	newTag;			/* identity of requested block */
+				LWLock	 	*newPartitionLock;	/* buffer partition lock for it */

It seems to be copied from somewhere, but the buffer is not new at
all.

+				if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
+					bufHdr->tag.forkNum == forkNum[j] &&
+					bufHdr->tag.blockNum == curBlock)
+					InvalidateBuffer(bufHdr);	/* releases spinlock */

I think it cannot happen that the buffer is used for a different block
of the same relation-fork, but it could be safer to check
bufHdr->tag.blockNum >= firstDelBlock[j] instead.

+/*
+ *	smgrcachednblocks() -- Calculate the number of blocks that are cached in
+ *					 the supplied relation.
+ *
+ * It is equivalent to calling smgrnblocks, but only used in recovery for now
+ * when DropRelFileNodeBuffers() is called. This ensures that only cached value
+ * is used which is always valid in recovery, since there is no shared
+ * invalidation mechanism that is implemented yet for changes in file size.
+ *
+ * This returns an InvalidBlockNumber when smgr_cached_nblocks is not available
+ * and when not in recovery.

Isn't it too concrete? We need to mention the buggy-kernel issue here
rather than that of the callers.

And if the comment is correct, we should Assert(InRecovery) at the
beginning of this function.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
", "msg_date": "Fri, 02 Oct 2020 11:44:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Fri, 02 Oct 2020 11:44:46 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in
> At Thu, 1 Oct 2020 12:55:34 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in
> - *		XXX currently it sequentially searches the buffer pool, should be
> - *		changed to more clever ways of searching. However, this routine
> - *		is used only in code paths that aren't very performance-critical,
> - *		and we shouldn't slow down the hot paths to make it faster ...
> + *		XXX The relation might have extended before this, so this path is
>
> This should no longer be an XXX comment. It also seems to me too
> detailed for this function's level. How about something like the
> following? (except its syntax, or phrasing :p)
>
> ===
> If the expected maximum number of buffers to drop is small enough
> compared to NBuffers, individual buffers are located by
> BufTableLookup. Otherwise we scan through all buffers. Since we
> mustn't leave a buffer behind, we take the latter way unless the
> number is not reliably identified. See smgrcachednblocks() for
> details.
> ===

The sense of the second-to-last phrase was inverted. FWIW this is
the revised version.

====
If we are expected to drop few enough buffers, we locate individual
buffers using BufTableLookup. Otherwise we scan through all
buffers. Since we mustn't leave a buffer behind, we take the latter
way unless the sizes of all the involved forks are known to be
accurate. See smgrcachednblocks() for details.
====

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
", "msg_date": "Fri, 02 Oct 2020 13:47:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, October 2, 2020 11:45 AM, Horiguchi-san wrote:

> Thanks for the new version.

Thank you for your thoughtful reviews!
I've attached an updated patch addressing the comments below.

1.
> The following description is found in the comment for FlushRelationBuffers.
>
> > *		XXX currently it sequentially searches the buffer pool, should be
> > *		changed to more clever ways of searching. This routine is not
> > *		used in any performance-critical code paths, so it's not worth
> > *		adding additional overhead to normal paths to make it go faster;
> > *		but see also DropRelFileNodeBuffers.
>
> This looks to me like "We won't do that kind of optimization for
> FlushRelationBuffers, but DropRelFileNodeBuffers would need it". If
> so, don't we need to revise the comment together?

Yes, but instead of combining, I just removed the comment in FlushRelationBuffers that mentions
referring to DropRelFileNodeBuffers.
I think it meant the same thing about using more clever ways of searching,
but that comment is not applicable anymore in DropRelFileNodeBuffers due to the optimization.
- * adding additional overhead to normal paths to make it go faster;
- * but see also DropRelFileNodeBuffers.
+ * adding additional overhead to normal paths to make it go faster.

2.
> - *		XXX currently it sequentially searches the buffer pool, should be
> - *		changed to more clever ways of searching. However, this routine
> - *		is used only in code paths that aren't very performance-critical,
> - *		and we shouldn't slow down the hot paths to make it faster ...

I revised and removed most parts of this code comment in DropRelFileNodeBuffers,
because isn't making that path faster the point of the optimization, for the
performance cases we've tackled in the thread?

3.
> This should no longer be an XXX comment.
Alright. I've fixed it.

4.
> It also seems to me too detailed for this function's level. How about
> something like the following? (except its syntax, or phrasing :p)
> ====
> If we are expected to drop few enough buffers, we locate individual buffers
> using BufTableLookup. Otherwise we scan through all buffers. Since we
> mustn't leave a buffer behind, we take the latter way unless the sizes of all the
> involved forks are known to be accurate. See smgrcachednblocks() for details.
> ====

Sure. I paraphrased it like below.

If the expected maximum number of buffers to be dropped is small
enough, individual buffers are located by BufTableLookup(). Otherwise,
the buffer pool is sequentially scanned. Since buffers must not be
left behind, the latter way is executed unless the sizes of all the
involved forks are known to be accurate. See smgrcachednblocks() for
more details.

5.
> (I'm still mildly opposed to the function name, which seems to expose
> detail too much.)
I can't think of a better name, but smgrcachednblocks seems straightforward.
Although I understand that it may be confused with the relation property
smgr_cached_nblocks, isn't that exactly what we're getting in the function?

6.
> +	 * Get the total number of cached blocks and to-be-invalidated blocks
> +	 * of the relation. If a fork's nblocks is not valid, break the loop.
>
> The number of file blocks is not usually equal to the number of existing
> buffers for the file. We might need to explain that limitation here.

I revised that comment like below.

Get the total number of cached blocks and to-be-invalidated blocks
of the relation. The cached value returned by smgrcachednblocks
could be smaller than the actual number of existing buffers of the
file. This is caused by buggy Linux kernels that might not have
accounted for the recent write. If a fork's nblocks is invalid, exit loop.

7.
> +	for (j = 0; j < nforks; j++)
>
> Though I understand that j is considered to be in connection with the fork
> number, I'm a bit uncomfortable that j is used for the outermost loop.

I agree. We must use i for the outer loop for consistency.

8.
> +			for (curBlock = firstDelBlock[j]; curBlock < nTotalBlocks; curBlock++)
>
> Mmm. We should compare curBlock with the number of blocks of the fork,
> not the total of all forks.

Oops. Yes. That should be nForkBlocks, so we have to call smgrcachednblocks()
again in the optimization loop for forks.

9.
> +				uint32		newHash;		/* hash value for newTag */
> +				BufferTag	newTag;			/* identity of requested block */
> +				LWLock	 	*newPartitionLock;	/* buffer partition lock for it */
>
> It seems to be copied from somewhere, but the buffer is not new at all.

Thanks for catching that. Yeah. Fixed.

10.
> +				if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
> +					bufHdr->tag.forkNum == forkNum[j] &&
> +					bufHdr->tag.blockNum == curBlock)
> +					InvalidateBuffer(bufHdr);	/* releases spinlock */
>
> I think it cannot happen that the buffer is used for a different block of the
> same relation-fork, but it could be safer to check
> bufHdr->tag.blockNum >= firstDelBlock[j] instead.

Understood and that's fine with me. Updated.

11.
> + *	smgrcachednblocks() -- Calculate the number of blocks that are cached in
> + *					 the supplied relation.
> + *
> + * It is equivalent to calling smgrnblocks, but only used in recovery for now
> + * when DropRelFileNodeBuffers() is called. This ensures that only cached value
> + * is used which is always valid in recovery, since there is no shared
> + * invalidation mechanism that is implemented yet for changes in file size.
> + *
> + * This returns an InvalidBlockNumber when smgr_cached_nblocks is not available
> + * and when not in recovery.
>
> Isn't it too concrete? We need to mention the buggy-kernel issue here
> rather than that of the callers.
>
> And if the comment is correct, we should Assert(InRecovery) at the beginning
> of this function.

I did not add the assert because it causes the recovery TAP test to fail.
However, I updated the function description like below.

It is equivalent to calling smgrnblocks, but only used in recovery for now.
The returned value of file size could be inaccurate because the lseek of buggy
Linux kernels might not have accounted for the recent file extension or write.
However, this function ensures that cached values are only used in recovery,
since there is no shared invalidation mechanism that is implemented yet for
changes in file size.

This returns an InvalidBlockNumber when smgr_cached_nblocks is not available
and when not in recovery.

Thanks a lot for the reviews.
If there are any more comments, feedback, or points I might have missed, please feel free to reply.

Best regards,
Kirk Jamison", "msg_date": "Mon, 5 Oct 2020 01:29:07 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Oct 2, 2020 at 8:14 AM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:
>
> At Thu, 1 Oct 2020 12:55:34 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in
> > On Thursday, October 1, 2020 4:52 PM, Tsunakawa-san wrote:
> >
>
> (I'm still mildly opposed to the function name, which seems to expose
> detail too much.)
>

Do you have any better proposal? BTW, I am still not sure whether it
is a good idea to expose a new API for this, especially because we do
exactly the same thing in the existing function smgrnblocks. Why not just
add a new bool *cached parameter to smgrnblocks which will be set if
we return a cached value? I understand that we need to change the code
wherever we call smgrnblocks, and maybe even extensions if they call
this function, but it is not clear to me if that is a big deal. What do
you think? I am not opposed to introducing the new API, but I feel that
adding a new parameter to the existing API to handle this case is a
better option.

--
With Regards,
Amit Kapila.
", "msg_date": "Mon, 5 Oct 2020 11:22:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Mon, Oct 5, 2020 at 6:59 AM k.jamison@fujitsu.com
<k.jamison@fujitsu.com> wrote:
>
> On Friday, October 2, 2020 11:45 AM, Horiguchi-san wrote:
>
> > Thanks for the new version.
>
> Thank you for your thoughtful reviews!
> I've attached an updated patch addressing the comments below.
>

Few comments:
===============
1.
@@ -2990,10 +3002,80 @@ DropRelFileNodeBuffers(RelFileNodeBackend
rnode, ForkNumber *forkNum,
 return;
 }

+ /*
+ * Get the total number of cached blocks and to-be-invalidated blocks
+ * of the relation. The cached value returned by smgrcachednblocks
+ * could be smaller than the actual number of existing buffers of the
+ * file. This is caused by buggy Linux kernels that might not have
+ * accounted the recent write. If a fork's nblocks is invalid, exit loop.
+ */
+ for (i = 0; i < nforks; i++)
+ {
+ /* Get the total nblocks for a relation's fork */
+ nForkBlocks = smgrcachednblocks(smgr_reln, forkNum[i]);
+
+ if (nForkBlocks == InvalidBlockNumber)
+ {
+ nTotalBlocks = InvalidBlockNumber;
+ break;
+ }
+ nTotalBlocks += nForkBlocks;
+ nBlocksToInvalidate = nTotalBlocks - firstDelBlock[i];
+ }
+
+ /*
+ * Do explicit hashtable probe if the total of nblocks of relation's forks
+ * is not invalid and the nblocks to be invalidated is less than the
+ * full-scan threshold of buffer pool. Otherwise, full scan is executed.
+ */
+ if (nTotalBlocks != InvalidBlockNumber &&
+ nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)
+ {
+ for (j = 0; j < nforks; j++)
+ {
+ BlockNumber curBlock;
+
+ nForkBlocks = smgrcachednblocks(smgr_reln, forkNum[j]);
+
+ for (curBlock = firstDelBlock[j]; curBlock < nForkBlocks; curBlock++)

What if one or more of the forks doesn't have a cached value? I think
the patch will skip such forks and will invalidate/unpin buffers for
others. You probably need a local array of nForkBlocks which will be
formed the first time and then used in the second loop. You also in some
way need to handle the case where that local array doesn't have cached
blocks.

2. Also, the other thing is I have asked for some testing to avoid the
small regression we have for a smaller number of shared buffers, for
which I don't see the results nor any change in the code. I think it is
better if you post the pending/open items each time you post a new
version of the patch.

--
With Regards,
Amit Kapila.
", "msg_date": "Mon, 5 Oct 2020 11:59:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Monday, October 5, 2020 3:30 PM, Amit Kapila wrote:

> + for (i = 0; i < nforks; i++)
> + {
> + /* Get the total nblocks for a relation's fork */
> + nForkBlocks = smgrcachednblocks(smgr_reln, forkNum[i]);
> +
> + if (nForkBlocks == InvalidBlockNumber)
> + {
> + nTotalBlocks = InvalidBlockNumber;
> + break;
> + }
> + nTotalBlocks += nForkBlocks;
> + nBlocksToInvalidate = nTotalBlocks - firstDelBlock[i];
> + }
> +
> + /*
> + * Do explicit hashtable probe if the total of nblocks of relation's forks
> + * is not invalid and the nblocks to be invalidated is less than the
> + * full-scan threshold of buffer pool.
Otherwise, full scan is executed.
> + */
> + if (nTotalBlocks != InvalidBlockNumber &&
> + nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)
> + {
> + for (j = 0; j < nforks; j++)
> + {
> + BlockNumber curBlock;
> +
> + nForkBlocks = smgrcachednblocks(smgr_reln, forkNum[j]);
> +
> + for (curBlock = firstDelBlock[j]; curBlock < nForkBlocks; curBlock++)
>
> What if one or more of the forks doesn't have a cached value? I think the patch
> will skip such forks and will invalidate/unpin buffers for others.

Not having a cached value is equivalent to InvalidBlockNumber, right?
Maybe I'm missing something? But in the first loop we are already doing the
pre-check of whether or not one of the forks doesn't have a cached value.
If it's not cached, then nTotalBlocks is set to InvalidBlockNumber, so we
won't need to enter the optimization loop and just execute the full-scan buffer
invalidation process.

> You probably
> need a local array of nForkBlocks which will be formed the first time and then
> used in the second loop. You also in some way need to handle the case where
> that local array doesn't have cached blocks.

Understood. That would be cleaner.
	BlockNumber	nForkBlocks[MAX_FORKNUM];

As for handling whether the local array is empty, I think the first loop would cover it,
and there's no need to pre-check if the array is empty again in the second loop.
for (i = 0; i < nforks; i++)
{
	nForkBlocks[i] = smgrcachednblocks(smgr_reln, forkNum[i]);

	if (nForkBlocks[i] == InvalidBlockNumber)
	{
		nTotalBlocks = InvalidBlockNumber;
		break;
	}
	nTotalBlocks += nForkBlocks[i];
	nBlocksToInvalidate = nTotalBlocks - firstDelBlock[i];
}

> 2. Also, the other thing is I have asked for some testing to avoid the small
> regression we have for a smaller number of shared buffers, for which I don't see
> the results nor any change in the code. I think it is better if you post the
> pending/open items each time you post a new version of the patch.

Ah. Apologies for forgetting to include updates about that, but since I keep on updating
the patch I've decided not to post results yet, as performance may vary per patch update
due to possible bugs.
But for the performance case of not using the recovery check, I just removed it as below.
Does it meet the intention?

BlockNumber smgrcachednblocks(SMgrRelation reln, ForkNumber forknum) {
- if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)
+ if (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)
 return reln->smgr_cached_nblocks[forknum];

Regards,
Kirk Jamison
", "msg_date": "Mon, 5 Oct 2020 09:34:13 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Mon, Oct 5, 2020 at 3:04 PM k.jamison@fujitsu.com
<k.jamison@fujitsu.com> wrote:
>
> On Monday, October 5, 2020 3:30 PM, Amit Kapila wrote:
>
> > What if one or more of the forks doesn't have a cached value? I think the patch
> > will skip such forks and will invalidate/unpin buffers for others.
>
> Not having a cached value is equivalent to InvalidBlockNumber, right?
> Maybe I'm missing something? But in the first loop we are already doing the
> pre-check of whether or not one of the forks doesn't have a cached value.
> If it's not cached, then nTotalBlocks is set to InvalidBlockNumber, so we
> won't need to enter the optimization loop and just execute the full-scan buffer
> invalidation process.
>

Oh, I had missed that, so the existing code will work fine for that case.

> > You probably
> > need a local array of nForkBlocks which will be formed the first time and then
> > used in the second loop. You also in some way need to handle the case where
> > that local array doesn't have cached blocks.
>
> Understood. That would be cleaner.
> 	BlockNumber	nForkBlocks[MAX_FORKNUM];
>
> As for handling whether the local array is empty, I think the first loop would cover it,
> and there's no need to pre-check if the array is empty again in the second loop.
> for (i = 0; i < nforks; i++)
> {
> 	nForkBlocks[i] = smgrcachednblocks(smgr_reln, forkNum[i]);
>
> 	if (nForkBlocks[i] == InvalidBlockNumber)
> 	{
> 		nTotalBlocks = InvalidBlockNumber;
> 		break;
> 	}
> 	nTotalBlocks += nForkBlocks[i];
> 	nBlocksToInvalidate = nTotalBlocks - firstDelBlock[i];
> }
>

This appears okay.

> > 2. Also, the other thing is I have asked for some testing to avoid the small
> > regression we have for a smaller number of shared buffers, for which I don't see
> > the results nor any change in the code. I think it is better if you post the
> > pending/open items each time you post a new version of the patch.
>
> Ah. Apologies for forgetting to include updates about that, but since I keep on updating
> the patch I've decided not to post results yet, as performance may vary per patch update
> due to possible bugs.
> But for the performance case of not using the recovery check, I just removed it as below.
> Does it meet the intention?
>
> BlockNumber smgrcachednblocks(SMgrRelation reln, ForkNumber forknum) {
> - if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)
> + if (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)
>  return reln->smgr_cached_nblocks[forknum];
>

Yes, we can do that for the purpose of testing.

--
With Regards,
Amit Kapila.
", "msg_date": "Mon, 5 Oct 2020 17:20:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Monday, October 5, 2020 8:50 PM, Amit Kapila wrote:

> On Mon, Oct 5, 2020 at 3:04 PM k.jamison@fujitsu.com
> > > 2. Also, the other thing is I have asked for some testing to avoid
> > > the small regression we have for a smaller number of shared buffers,
> > > for which I don't see the results nor any change in the code. I think it
> > > is better if you post the pending/open items each time you post a new
> > > version of the patch.
> >
> > Ah.
Apologies for forgetting to include updates about that, but since
> > I keep on updating the patch I've decided not to post results yet, as
> > performance may vary per patch update due to possible bugs.
> > But for the performance case of not using the recovery check, I just
> > removed it as below.
> > Does it meet the intention?
> >
> > BlockNumber smgrcachednblocks(SMgrRelation reln, ForkNumber forknum) {
> > - if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)
> > + if (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)
> >  return reln->smgr_cached_nblocks[forknum];
>
> Yes, we can do that for the purpose of testing.

With the latest patches attached, and removing the recovery check in smgrnblocks,
I tested the performance of vacuum.
(3 trial runs, 3.5 GB db populated with 1000 tables)

Execution Time (seconds)
| s_b   | master | patched | %reg     |
|-------|--------|---------|----------|
| 128MB | 15.265 | 15.260  | -0.03%   |
| 1GB   | 14.808 | 15.009  | 1.34%    |
| 20GB  | 24.673 | 11.681  | -111.22% |
| 100GB | 74.298 | 11.724  | -533.73% |

These are good results, and we can see the improvements for large shared buffers.
For small s_b, the performance is almost the same.

I repeated the recovery performance test from the previous mail,
and ran three trials for each shared_buffers setting.
We can also clearly see the improvement here.

Recovery Time (seconds)
| s_b   | master | patched | %reg   |
|-------|--------|---------|--------|
| 128MB | 3.043  | 3.010   | -1.10% |
| 1GB   | 3.417  | 3.477   | 1.73%  |
| 20GB  | 20.597 | 2.409   | -755%  |
| 100GB | 66.862 | 2.409   | -2676% |

For default and small shared_buffers, the recovery performance is almost the same.
But for bigger shared_buffers, we can see the benefit and improvement.
For 20GB, from 20.597 s to 2.409 s. For 100GB s_b, from 66.862 s to 2.409 s.

I have updated the latest patches, with 0002 being the new one.
Instead of introducing a new API, I just added the bool parameter to smgrnblocks
and modified its callers.

Comments and feedback are highly appreciated.

Regards,
Kirk Jamison", "msg_date": "Thu, 8 Oct 2020 02:07:06 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>
> With the latest patches attached, and removing the recovery check in
> smgrnblocks, I tested the performance of vacuum.
> (3 trial runs, 3.5 GB db populated with 1000 tables)
>
> Execution Time (seconds)
> | s_b   | master | patched | %reg     |
> |-------|--------|---------|----------|
> | 128MB | 15.265 | 15.260  | -0.03%   |
> | 1GB   | 14.808 | 15.009  | 1.34%    |
> | 20GB  | 24.673 | 11.681  | -111.22% |
> | 100GB | 74.298 | 11.724  | -533.73% |
>
> These are good results, and we can see the improvements for large shared
> buffers. For small s_b, the performance is almost the same.

Very nice!

I'll try to review the patch again soon.

Regards
Takayuki Tsunakawa
", "msg_date": "Thu, 8 Oct 2020 02:45:06 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Kirk san,


(1)
+ * This returns an InvalidBlockNumber when smgr_cached_nblocks is not
+ * available and when not in recovery path.

+	/*
+	 * We cannot believe the result from smgr_nblocks is always accurate
+	 * because lseek of buggy Linux kernels doesn't account for a recent
+	 * write.
+	 */
+	if (!InRecovery && result == InvalidBlockNumber)
+		return InvalidBlockNumber;
+

These are unnecessary, because mdnblocks() never returns InvalidBlockNumber and consequently smgrnblocks() doesn't return InvalidBlockNumber.


(2)
+smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *isCached)

I think it's better to make the argument name iscached so that the camel case aligns with forknum, which is not forkNum.


(3)
+	 * This is caused by buggy Linux kernels that might not have accounted
+	 * the recent write. If a fork's nblocks is invalid, exit loop.

Isn't "accounted for" the right English here?
I think the second sentence should be described in terms of its meaning, not the program logic. For example, something like "Give up the optimization if the block count of any fork cannot be trusted."
Likewise, express the following part in semantics.

+	 * Do explicit hashtable lookup if the total of nblocks of relation's forks
+	 * is not invalid and the nblocks to be invalidated is less than the


(4)
+		if (nForkBlocks[i] == InvalidBlockNumber)
+		{
+			nTotalBlocks = InvalidBlockNumber;
+			break;
+		}

Use isCached in the if condition, because smgrnblocks() doesn't return InvalidBlockNumber.


(5)
+		nBlocksToInvalidate = nTotalBlocks - firstDelBlock[i];

should be

+		nBlocksToInvalidate += (nForkBlocks[i] - firstDelBlock[i]);


(6)
+					bufHdr->tag.blockNum >= firstDelBlock[j])
+					InvalidateBuffer(bufHdr);	/* releases spinlock */

The right side of >= should be cur_block.


Regards
Takayuki Tsunakawa
", "msg_date": "Thu, 8 Oct 2020 06:37:39 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, October 8, 2020 3:38 PM, Tsunakawa-san wrote:

> Hi Kirk san,
Thank you for looking into my patches!

> (1)
> + * This returns an InvalidBlockNumber when smgr_cached_nblocks is not
> + * available and when not in recovery path.
>
> +	/*
> +	 * We cannot believe the result from smgr_nblocks is always accurate
> +	 * because lseek of buggy Linux kernels doesn't account for a recent
> +	 * write.
> +	 */
> +	if (!InRecovery && result == InvalidBlockNumber)
> +		return InvalidBlockNumber;
> +
>
> These are unnecessary, because mdnblocks() never returns
> InvalidBlockNumber and consequently smgrnblocks() doesn't return
> InvalidBlockNumber.

Yes. Thanks for carefully looking into that. Removed.

> (2)
> +smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *isCached)
>
> I think it's better to make the argument name iscached so that the camel case
> aligns with forknum, which is not forkNum.

This is kind of tricky, because the surrounding code follows an inconsistent coding style too.
So I just followed the same style as below and retained the change.

extern void smgrcreate(SMgrRelation reln, ForkNumber forknum, bool isRedo);
extern void smgrextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum, char *buffer, bool skipFsync);

> (3)
> +	 * This is caused by buggy Linux kernels that might not have
> accounted
> +	 * the recent write. If a fork's nblocks is invalid, exit loop.
>
> Isn't "accounted for" the right English here?
> I think the second sentence should be described in terms of its meaning, not
> the program logic.
For example, something like \"Give up the optimization if\r\n> the block count of any fork cannot be trusted.\"\r\n\r\nFixed.\r\n\r\n> Likewise, express the following part in semantics.\r\n> \r\n> +\t * Do explicit hashtable lookup if the total of nblocks of relation's\r\n> forks\r\n> +\t * is not invalid and the nblocks to be invalidated is less than the\r\n\r\nI revised it like below:\r\n\"Look up the buffer in the hashtable if the block size is known to \r\n be accurate and the total blocks to be invalidated is below the\r\n full scan threshold. Otherwise, give up the optimization.\"\r\n\r\n> (4)\r\n> +\t\tif (nForkBlocks[i] == InvalidBlockNumber)\r\n> +\t\t{\r\n> +\t\t\tnTotalBlocks = InvalidBlockNumber;\r\n> +\t\t\tbreak;\r\n> +\t\t}\r\n> \r\n> Use isCached in if condition because smgrnblocks() doesn't return\r\n> InvalidBlockNumber.\r\n\r\nFixed. if (!isCached)\r\n\r\n> (5)\r\n> +\t\tnBlocksToInvalidate = nTotalBlocks - firstDelBlock[i];\r\n> \r\n> should be\r\n> \r\n> +\t\tnBlocksToInvalidate += (nForkBlocks[i] - firstDelBlock[i]);\r\n\r\nFixed.\r\n\r\n> (6)\r\n> +\t\t\t\t\tbufHdr->tag.blockNum >=\r\n> firstDelBlock[j])\r\n> +\t\t\t\t\tInvalidateBuffer(bufHdr);\t/*\r\n> releases spinlock */\r\n> \r\n> The right side of >= should be cur_block.\r\n\r\nFixed.\r\n\r\n\r\nAttached are the updated patches.\r\nThank you again for the reviews.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Thu, 8 Oct 2020 09:13:48 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi, \r\n> Attached are the updated patches.\r\n\r\nSorry there was an error in the 3rd patch. 
So attached is a rebase one.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Thu, 8 Oct 2020 09:37:48 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> > (6)\r\n> > +\t\t\t\t\tbufHdr->tag.blockNum >=\r\n> > firstDelBlock[j])\r\n> > +\t\t\t\t\tInvalidateBuffer(bufHdr);\t/*\r\n> > releases spinlock */\r\n> >\r\n> > The right side of >= should be cur_block.\r\n> \r\n> Fixed.\r\n\r\n>= should be =, shouldn't it?\r\n\r\nPlease measure and let us see just the recovery performance again because the critical part of the patch was modified. If the performance is good as the previous one, and there's no review interaction with others in progress, I'll mark the patch as ready for committer in a few days.\r\n\r\n\r\n Regards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 9 Oct 2020 00:41:24 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Fri, 9 Oct 2020 00:41:24 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > > (6)\n> > > +\t\t\t\t\tbufHdr->tag.blockNum >=\n> > > firstDelBlock[j])\n> > > +\t\t\t\t\tInvalidateBuffer(bufHdr);\t/*\n> > > releases spinlock */\n> > >\n> > > The right side of >= should be cur_block.\n> > \n> > Fixed.\n> \n> >= should be =, shouldn't it?\n> \n> Please measure and let us see just the recovery performance again because the critical part of the patch was modified. 
If the performance is good as the previous one, and there's no review interaction with others in progress, I'll mark the patch as ready for committer in a few days.\n\nThe performance is expected to be kept since smgrnblocks() is called\nin a non-hot code path and actually it is called at most four times\nper a buffer drop in this patch. But it's better making it sure.\n\nI have some comments on the latest patch.\n\n@@ -445,6 +445,7 @@ BlockNumber\n visibilitymap_prepare_truncate(Relation rel, BlockNumber nheapblocks)\n {\n \tBlockNumber newnblocks;\n+\tbool\tcached;\n\nAll the added variables added by 0002 is useless because all the\ncaller sites are not interested in the value. smgrnblocks should\naccept NULL as isCached. (I'm agree with Tsunakawa-san that the\ncamel-case name is not common there.)\n\n+\t\tnForkBlocks[i] = smgrnblocks(smgr_reln, forkNum[i], &isCached);\n+\n+\t\tif (!isCached)\n\n\"is cached\" is not the property that code is interested in. No other callers to smgrnblocks are interested in that property. The need for caching is purely internal of smgrnblocks().\n\nOn the other hand, we are going to utilize the property of \"accuracy\"\nthat is a biproduct of reducing fseek calls, and, again, not\ninterested in how it is achieved.\n\nSo I suggest that the name should be \"accurite\" or something that is\nnot suggest the mechanism used under the hood.\n\n+\tif (nTotalBlocks != InvalidBlockNumber &&\n+\t\tnBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n\nI don't think nTotalBlocks is useful. What we need here is only total\nblocks for every forks (nForkBlocks[]) and the total number of buffers\nto be invalidated for all forks (nBlocksToInvalidate).\n\n\n> > > The right side of >= should be cur_block.\n> > \n> > Fixed.\n> \n> >= should be =, shouldn't it?\n\nIt's just from a paranoia. What we are going to invalidate is blocks\nblockNum of which >= curBlock. 
Although actually there's no chance of\nany other processes having replaced the buffer with another page (with\nlower blockid) of the same relation after BugTableLookup(), that\ncondition makes it sure not to leave blocks to be invalidated left\nalone.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 09 Oct 2020 11:12:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Oops! Sorry for the mistake.\n\nAt Fri, 09 Oct 2020 11:12:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 9 Oct 2020 00:41:24 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> > From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > > > (6)\n> > > > +\t\t\t\t\tbufHdr->tag.blockNum >=\n> > > > firstDelBlock[j])\n> > > > +\t\t\t\t\tInvalidateBuffer(bufHdr);\t/*\n> > > > releases spinlock */\n> > > >\n> > > > The right side of >= should be cur_block.\n> > > \n> > > Fixed.\n> > \n> > >= should be =, shouldn't it?\n> > \n> > Please measure and let us see just the recovery performance again because the critical part of the patch was modified. If the performance is good as the previous one, and there's no review interaction with others in progress, I'll mark the patch as ready for committer in a few days.\n> \n> The performance is expected to be kept since smgrnblocks() is called\n> in a non-hot code path and actually it is called at most four times\n> per a buffer drop in this patch. But it's better making it sure.\n> \n> I have some comments on the latest patch.\n> \n> @@ -445,6 +445,7 @@ BlockNumber\n> visibilitymap_prepare_truncate(Relation rel, BlockNumber nheapblocks)\n> {\n> \tBlockNumber newnblocks;\n> +\tbool\tcached;\n> \n> All the added variables added by 0002 is useless because all the\n> caller sites are not interested in the value. 
smgrnblocks should\n> accept NULL as isCached. (I'm agree with Tsunakawa-san that the\n> camel-case name is not common there.)\n> \n> +\t\tnForkBlocks[i] = smgrnblocks(smgr_reln, forkNum[i], &isCached);\n> +\n> +\t\tif (!isCached)\n> \n> \"is cached\" is not the property that code is interested in. No other callers to smgrnblocks are interested in that property. The need for caching is purely internal of smgrnblocks().\n> \n> On the other hand, we are going to utilize the property of \"accuracy\"\n> that is a biproduct of reducing fseek calls, and, again, not\n> interested in how it is achieved.\n> \n> So I suggest that the name should be \"accurite\" or something that is\n> not suggest the mechanism used under the hood.\n> \n> +\tif (nTotalBlocks != InvalidBlockNumber &&\n> +\t\tnBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n> \n> I don't think nTotalBlocks is useful. What we need here is only total\n> blocks for every forks (nForkBlocks[]) and the total number of buffers\n> to be invalidated for all forks (nBlocksToInvalidate).\n> \n> \n> > > > The right side of >= should be cur_block.\n> > > \n> > > Fixed.\n> > \n> > >= should be =, shouldn't it?\n> \n> It's just from a paranoia. What we are going to invalidate is blocks\n> blockNum of which >= curBlock. Although actually there's no chance of\n\nSorry. What we are going to invalidate is blocks that are blocNum >=\nfirstDelBlock[i]. So what I wanted to suggest was the condition should\nbe\n\n+\t\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n+\t\t\t\t\tbufHdr->tag.forkNum == forkNum[j] &&\n+\t\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[j])\n\n> any other processes having replaced the buffer with another page (with\n> lower blockid) of the same relation after BugTableLookup(), that\n> condition makes it sure not to leave blocks to be invalidated left\n> alone.\n\nAnd I forgot to mention the patch names. 
I think many of us name the\npatches using -v option of git-format-patch, and assign the version to\na patch-set thus the version number of all files that are posted at\nonce is same.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 09 Oct 2020 11:24:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, October 9, 2020 11:12 AM, Horiguchi-san wrote:\n> I have some comments on the latest patch.\n\nThank you for the feedback!\nI've attached the latest patches.\n\n> visibilitymap_prepare_truncate(Relation rel, BlockNumber nheapblocks) {\n> \tBlockNumber newnblocks;\n> +\tbool\tcached;\n> \n> All the added variables added by 0002 is useless because all the caller sites\n> are not interested in the value. smgrnblocks should accept NULL as isCached.\n> (I'm agree with Tsunakawa-san that the camel-case name is not common\n> there.)\n> \n> +\t\tnForkBlocks[i] = smgrnblocks(smgr_reln, forkNum[i],\n> &isCached);\n> +\n> +\t\tif (!isCached)\n> \n> \"is cached\" is not the property that code is interested in. No other callers to\n> smgrnblocks are interested in that property. 
The need for caching is purely\n> internal of smgrnblocks().\n> On the other hand, we are going to utilize the property of \"accuracy\"\n> that is a biproduct of reducing fseek calls, and, again, not interested in how it\n> is achieved.\n> So I suggest that the name should be \"accurite\" or something that is not\n> suggest the mechanism used under the hood.\n\nI changed the bool param to \"accurate\" per your suggestion.\nAnd I also removed the additional variables \"bool cached\" from the modified functions.\nNow NULL values are accepted for the new boolean parameter\n \n\n> +\tif (nTotalBlocks != InvalidBlockNumber &&\n> +\t\tnBlocksToInvalidate <\n> BUF_DROP_FULL_SCAN_THRESHOLD)\n> \n> I don't think nTotalBlocks is useful. What we need here is only total blocks for\n> every forks (nForkBlocks[]) and the total number of buffers to be invalidated\n> for all forks (nBlocksToInvalidate).\n\nAlright. I also removed nTotalBlocks in v24-0003 patch.\n\nfor (i = 0; i < nforks; i++)\n{\n if (nForkBlocks[i] != InvalidBlockNumber &&\n nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n {\n Optimization loop\n }\n else\n break;\n}\nif (i >= nforks)\n return;\n{ usual buffer invalidation process }\n\n\n> > > > The right side of >= should be cur_block.\n> > > Fixed.\n> > >= should be =, shouldn't it?\n> \n> It's just from a paranoia. What we are going to invalidate is blocks blockNum\n> of which >= curBlock. Although actually there's no chance of any other\n> processes having replaced the buffer with another page (with lower blockid)\n> of the same relation after BufTableLookup(), that condition makes it sure not\n> to leave blocks to be invalidated left alone.\n> Sorry. What we are going to invalidate is blocks that are blocNum >=\n> firstDelBlock[i]. 
So what I wanted to suggest was the condition should be\n> \n> +\t\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode,\n> rnode.node) &&\n> +\t\t\t\t\tbufHdr->tag.forkNum ==\n> forkNum[j] &&\n> +\t\t\t\t\tbufHdr->tag.blockNum >=\n> firstDelBlock[j])\n\nI used bufHdr->tag.blockNum >= firstDelBlock[i] in the latest patch.\n\n> > Please measure and let us see just the recovery performance again because\n> the critical part of the patch was modified. If the performance is good as the\n> previous one, and there's no review interaction with others in progress, I'll\n> mark the patch as ready for committer in a few days.\n> \n> The performance is expected to be kept since smgrnblocks() is called in a\n> non-hot code path and actually it is called at most four times per a buffer\n> drop in this patch. But it's better making it sure.\n\nHmm. When I repeated the performance measurement for non-recovery,\nI got almost similar execution results for both master and patched.\n\nExecution Time (in seconds)\n| s_b | master | patched | %reg | \n|-------|--------|---------|--------| \n| 128MB | 15.265 | 14.769 | -3.36% | \n| 1GB | 14.808 | 14.618 | -1.30% | \n| 20GB | 24.673 | 24.425 | -1.02% | \n| 100GB | 74.298 | 74.813 | 0.69% |\n\nThat is considering that I removed the recovery-related checks in the patch and just\nexecuted the commands on a standalone server.\n- if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n+ if (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n\nOTOH, I also measured the recovery performance by having hot standby and executing failover.\nThe results were good and almost similar to the previously reported recovery performance.\n\nRecovery Time (in seconds)\n| s_b | master | patched | %reg | \n|-------|--------|---------|--------| \n| 128MB | 3.043 | 2.977 | -2.22% | \n| 1GB | 3.417 | 3.41 | -0.21% | \n| 20GB | 20.597 | 2.409 | -755% | \n| 100GB | 66.862 | 2.409 | -2676% |\n\nFor 20GB s_b, from 20.597 s (Master) to 2.409 s 
(Patched).\nFor 100GB s_b, from 66.862 s (Master) to 2.409 s (Patched).\nThis is mainly benefits for large shared_buffers setting,\nwithout compromising when shared_buffers is set to default or lower value.\n\nIf you could take a look again and if you have additional feedback or comments, I'd appreciate it.\nThank you for your time\n\nRegards,\nKirk Jamison", "msg_date": "Mon, 12 Oct 2020 09:38:12 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Mon, Oct 12, 2020 at 3:08 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Hmm. When I repeated the performance measurement for non-recovery,\n> I got almost similar execution results for both master and patched.\n>\n> Execution Time (in seconds)\n> | s_b | master | patched | %reg |\n> |-------|--------|---------|--------|\n> | 128MB | 15.265 | 14.769 | -3.36% |\n> | 1GB | 14.808 | 14.618 | -1.30% |\n> | 20GB | 24.673 | 24.425 | -1.02% |\n> | 100GB | 74.298 | 74.813 | 0.69% |\n>\n> That is considering that I removed the recovery-related checks in the patch and just\n> executed the commands on a standalone server.\n> - if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n> + if (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n>\n\nWhy so? Have you tried to investigate? Check if it takes an optimized\npath for the non-recovery case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Oct 2020 16:19:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n\n\n(1)\n> Alright. 
I also removed nTotalBlocks in v24-0003 patch.\n> \n> for (i = 0; i < nforks; i++)\n> {\n> if (nForkBlocks[i] != InvalidBlockNumber &&\n> nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n> {\n> Optimization loop\n> }\n> else\n> break;\n> }\n> if (i >= nforks)\n> return;\n> { usual buffer invalidation process }\n\nWhy do you do this way? I think the previous patch was more correct (while agreeing with Horiguchi-san in that nTotalBlocks may be unnecessary. What you want to do is \"if the size of any fork could be inaccurate, do the traditional full buffer scan without performing any optimization for any fork,\" right? But the above code performs optimization for forks until it finds a fork with inaccurate size.\n\n(2)\n+\t * Get the total number of cached blocks and to-be-invalidated blocks\n+\t * of the relation. The cached value returned by smgrnblocks could be\n+\t * smaller than the actual number of existing buffers of the file.\n\nAs you changed the meaning of the smgrnblocks() argument from cached to accurate, and you nolonger calculate the total blocks, the comment should reflect them.\n\n\n(3)\nIn smgrnblocks(), accurate is not set to false when mdnblocks() is called. The caller doesn't initialize the value either, so it can see garbage value.\n\n\n(4)\n+\t\tif (nForkBlocks[i] != InvalidBlockNumber &&\n+\t\t\tnBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n+\t\t{\n...\n+\t\t}\n+\t\telse\n+\t\t\tbreak;\n+\t}\n\nIn cases like this, it's better to reverse the if and else. Thus, you can reduce the nest depth.\n\n\n Regards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Tue, 13 Oct 2020 01:08:50 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, October 13, 2020 10:09 AM, Tsunakawa-san wrote:\n> Why do you do this way? 
I think the previous patch was more correct (while\n> agreeing with Horiguchi-san in that nTotalBlocks may be unnecessary. What\n> you want to do is \"if the size of any fork could be inaccurate, do the traditional\n> full buffer scan without performing any optimization for any fork,\" right? But\n> the above code performs optimization for forks until it finds a fork with\n> inaccurate size.\n> \n> (2)\n> +\t * Get the total number of cached blocks and to-be-invalidated\n> blocks\n> +\t * of the relation. The cached value returned by smgrnblocks could\n> be\n> +\t * smaller than the actual number of existing buffers of the file.\n> \n> As you changed the meaning of the smgrnblocks() argument from cached to\n> accurate, and you nolonger calculate the total blocks, the comment should\n> reflect them.\n> \n> \n> (3)\n> In smgrnblocks(), accurate is not set to false when mdnblocks() is called.\n> The caller doesn't initialize the value either, so it can see garbage value.\n> \n> \n> (4)\n> +\t\tif (nForkBlocks[i] != InvalidBlockNumber &&\n> +\t\t\tnBlocksToInvalidate <\n> BUF_DROP_FULL_SCAN_THRESHOLD)\n> +\t\t{\n> ...\n> +\t\t}\n> +\t\telse\n> +\t\t\tbreak;\n> +\t}\n> \n> In cases like this, it's better to reverse the if and else. Thus, you can reduce\n> the nest depth.\n\nThank you for the review!\n1. I have revised the patch addressing your comments/feedback. Attached are the latest set of patches.\n\n2. Non-recovery Performance\nI also included a debug version of the patch (0004) where I removed the recovery-related checks\nto measure non-recovery performance.\nHowever, I still can't seem to find the cause of why the non-recovery performance\ndoes not change when compared to master. (1 min 15 s for the given test case below)\n\n> - if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n> + if (reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n\nHere's how I measured it:\n0. 
postgresql.conf setting\nshared_buffers = 100GB\nautovacuum = off\nfull_page_writes = off\ncheckpoint_timeout = 30min\nmax_locks_per_transaction = 100\nwal_log_hints = on\nwal_keep_size = 100\nmax_wal_size = 20GB\n\n1. createdb test\n\n2. Create tables: SELECT create_tables(1000);\n\ncreate or replace function create_tables(numtabs int)\nreturns void as $$\ndeclare query_string text;\nbegin\n for i in 1..numtabs loop\n query_string := 'create table tab_' || i::text || ' (a int);';\n execute query_string;\n end loop;\nend;\n$$ language plpgsql;\n\n3 Insert rows to tables (3.5 GB db): SELECT insert_tables(1000);\n\ncreate or replace function insert_tables(numtabs int)\nreturns void as $$\ndeclare query_string text;\nbegin\n for i in 1..numtabs loop\n query_string := 'insert into tab_' || i::text || ' SELECT generate_series(1, 100000);' ;\n execute query_string;\n end loop;\nend;\n$$ language plpgsql;\n\n4. DELETE FROM tables: SELECT delfrom_tables(1000);\n\ncreate or replace function delfrom_tables(numtabs int)\nreturns void as $$\ndeclare query_string text;\nbegin\n for i in 1..numtabs loop\n query_string := 'delete from tab_' || i::text;\n execute query_string;\n end loop;\nend;\n$$ language plpgsql;\n\n5. Measure VACUUM timing\n\\timing\nVACUUM;\n\nUsing the debug version of the patch, I have confirmed that it enters the optimization path\nwhen it meets the conditions. Here are some printed logs from 018_wal_optimize_node_replica.log:\n> make world -j4 -s && make -C src/test/recovery/ check PROVE_TESTS=t/018_wal_optimize.pl\n\nWARNING: current fork 0, nForkBlocks[i] 1, accurate: 1\nCONTEXT: WAL redo at 0/162B4E0 for Storage/TRUNCATE: base/13751/24577 to 0 blocks flags 7\nWARNING: Optimization Loop.\nbuf_id = 41. nforks = 1. current fork = 0. forkNum: 0 == tag's forkNum: 0. curBlock: 0 < nForkBlocks[i] = 1. tag blockNum: 0 >= firstDelBlock[i]: 0. nBlocksToInvalidate = 1 < threshold = 32.\n\n--\n3. 
Recovery Performance (hot standby, failover)\nOTOH, when executing recovery performance (using 0003 patch), the results were great.\n\n| s_b | master | patched | %reg | \n|-------|--------|---------|--------| \n| 128MB | 3.043 | 2.977 | -2.22% | \n| 1GB | 3.417 | 3.41 | -0.21% | \n| 20GB | 20.597 | 2.409 | -755% | \n| 100GB | 66.862 | 2.409 | -2676% |\n\nTo execute this on a hot standby setup (after inserting rows to tables)\n1. [Standby] Pause WAL replay\n SELECT pg_wal_replay_pause();\n\n2. [Master] Measure VACUUM timing. Then stop server.\n\\timing\nVACUUM;\n\\q\npg_ctl stop -mi -w\n\n3. [Standby] Use the attached script to promote standby and measure the performance.\n# test.sh recovery\n\n\nSo the current issue I'm still investigating is why the performance for non-recovery is bad,\nwhile OTOH it's good when InRecovery.\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 15 Oct 2020 03:34:09 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> 2. Non-recovery Performance\n> However, I still can't seem to find the cause of why the non-recovery\n> performance does not change when compared to master. (1 min 15 s for the\n> given test case below)\n...\n> 5. Measure VACUUM timing\n> \\timing\n> VACUUM;\n\nOops, why are you using VACUUM? Aren't you trying to speed up TRUNCATE?\n\nEven if you wanted to utilize the truncation at the end of VACUUM for measuring truncation speed, your way measures the whole VACUUM processing, which includes the garbage collection process. The garbage collection should dominate the time.\n\n\n> 3. Recovery Performance (hot standby, failover) OTOH, when executing\n> 2. [Master] Measure VACUUM timing. Then stop server.\n> \\timing\n> VACUUM;\n> \\q\n> pg_ctl stop -mi -w\n> \n> 3. 
[Standby] Use the attached script to promote standby and measure the\n> performance.\n> # test.sh recovery\n\nYou didn't DELETE the table data as opposed to the non-recovery case. Then, the replay of VACUUM should do nothing. That's why you got a good performance number.\n\nTRUNCATE goes this path:\n\n[non-recovery]\nCommitTransaction\nsmgrdopendingdeletes\nsmgrdounlinkall\nDropRelFileNodesAllBuffers\n\n[recovery]\nxact_redo_commit\nDropRelationFiles\nsmgrdounlinkall\nDropRelFileNodesAllBuffers\n\nSo, you need to modify DropRelFileNodesAllBuffers(). OTOH, DropRelFileNodeBuffers(), which you modified, is used in VACUUM's truncation and another case. The modification itself is useful because it can shorten the occasional hickup during autovacuum, so you don't remove the change.\n\n(The existence of these two paths is tricky; anyone on this thread didn't notice, and I forgot about it. It would be good to refactor this, but it's a separate undertaking, I think.)\n\n\nBelow are my comments for the code:\n\n(1)\n@@ -572,6 +572,9 @@ smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n+\tif (accurate != NULL)\n+\t\t*accurate = false;\n+\n\nThe above change should be in 002, right?\n\n\n(2) \n+\t\t/* Get the total nblocks for a relation's fork */\n\ntotal nblocks -> number of blocks\n\n\n(3)\n+\t\tif (nForkBlocks[i] == InvalidBlockNumber ||\n+\t\t\tnBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)\n+\t\t\tbreak;\n\nWith this code, you haven't addressed what I commented previously. If the size of the first fork is accurate but that of the second one is not, the first fork is processed in an optimized way while the second fork is done in the traditional way. 
What you want to here is to only use the traditional way for all forks, right?\n\nSo, remove the above change and replace\n\n+\t\tif (!accurate)\n+\t\t{\n+\t\t\tnForkBlocks[i] = InvalidBlockNumber;\n+\t\t\tbreak;\n+\t\t}\n\nwith\n\n+\t\tif (!accurate)\n+\t\t\tbreak;\n\nAnd after the first for loop, put\n\n\tif (!accurate || nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n\t\tgoto full_scan;\n\nAnd remove the following code and instead put the \"full_scan:\" label there.\n\n+\tif (i >= nforks)\n+\t\treturn;\n+\n\nOr, instead of using goto, you can write like this:\n\nfor (...)\n\tcalculate # of invalidated blocks\n\nif (accurate && nBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)\n{\n\tdo the optimized way;\n\treturn;\n}\n\ndo the traditional way;\n\n\nI prefer using goto here because the loop nesting gets shallow. But that's a matter of taste and you can choose either.\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 15 Oct 2020 06:55:22 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "\t\tFrom: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> However, I still can't seem to find the cause of why the non-recovery\n> performance does not change when compared to master. (1 min 15 s for the\n> given test case below)\n\nCan you check and/or try the following?\n\n\n1. Isn't the vacuum cost delay working?\nVACUUM command should run without sleeping with the default settings. Just in case, can you try with the settings:\n\nvacuum_cost_delay = 0\nvacuum_cost_limit = 10000\n\n\n2. Buffer strategy\nThe non-recovery VACUUM can differ from that of recovery in the use of shared buffers. The VACUUM command uses only 256 KB of shared buffers. 
To make VACUUM command use the whole shared buffers, can you modify src/backend/commands/vacuum.c so that GetAccessStrategy()'s argument is changed to BAS_VACUUM to BAS_NORMAL? (I don't have much hope about this, though, because all blocks of the relations are already cached in shared buffers when VACUUM is run.)\n\n\nCan you measure the time DropRelFileNodeBuffers()? You can call GetTimestamp() at the beginning and end of the function, and use TimestampDifference() to calculate the difference. Then, for instance, elog(WARNING, \"time is | %u.%u\", sec, usec) at the end of the function. You can use any elog() print format for your convenience to write shell commands to filter the lines and sum up the total.\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Tue, 20 Oct 2020 07:06:58 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\n> Can you measure the time DropRelFileNodeBuffers()? You can call\n> GetTimestamp() at the beginning and end of the function, and use\n> TimestampDifference() to calculate the difference. Then, for instance,\n> elog(WARNING, \"time is | %u.%u\", sec, usec) at the end of the function. You\n> can use any elog() print format for your convenience to write shell commands to\n> filter the lines and sum up the total.\n\nBefore doing this, you can also do \"VACUUM (truncate off)\" to see which of the garbage collection or relation truncation takes long time. 
The relation truncation processing includes not only DropRelFileNodeBuffers() but also file truncation and something else, but it's an easy filter.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Wed, 21 Oct 2020 01:34:23 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "RelationTruncate() invalidates the cached fork sizes as follows. This causes smgrnblocks() return accurate=false, resulting in not running optimization. Try commenting out for non-recovery case.\n\n /*\n * Make sure smgr_targblock etc aren't pointing somewhere past new end\n */\n rel->rd_smgr->smgr_targblock = InvalidBlockNumber;\n for (int i = 0; i <= MAX_FORKNUM; ++i)\n rel->rd_smgr->smgr_cached_nblocks[i] = InvalidBlockNumber;\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Wed, 21 Oct 2020 07:36:36 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, October 21, 2020 4:37 PM, Tsunakawa-san wrote:\n> RelationTruncate() invalidates the cached fork sizes as follows. This causes\n> smgrnblocks() return accurate=false, resulting in not running optimization.\n> Try commenting out for non-recovery case.\n> \n> /*\n> * Make sure smgr_targblock etc aren't pointing somewhere past new\n> end\n> */\n> rel->rd_smgr->smgr_targblock = InvalidBlockNumber;\n> for (int i = 0; i <= MAX_FORKNUM; ++i)\n> rel->rd_smgr->smgr_cached_nblocks[i] = InvalidBlockNumber;\n\nHello, I have updated the set of patches which incorporated all your feedback in the previous email.\nThank you for also looking into it. 
The patch 0003 (DropRelFileNodeBuffers improvement)\nis indeed for vacuum optimization and not for truncate.\nI'll post a separate patch for the truncate optimization in the coming days.\n\n1. Vacuum Optimization\nI have confirmed that the above comment (commenting out the lines in RelationTruncate)\nsolves the issue for non-recovery case.\nThe attached 0004 patch is just for non-recovery testing and is not included in the\nfinal set of patches to be committed for vacuum optimization.\n\nThe table below shows the vacuum execution time for non-recovery case.\nI've also subtracted the execution time when VACUUM (truncate off) is set.\n\n[NON-RECOVERY CASE - VACUUM execution Time in seconds]\n\n| s_b | master | patched | %reg | \n|-------|--------|---------|-----------| \n| 128MB | 0.22 | 0.181 | -21.55% | \n| 1GB | 0.701 | 0.712 | 1.54% | \n| 20GB | 15.027 | 1.920 | -682.66% | \n| 100GB | 65.456 | 1.795 | -3546.57% |\n\n[RECOVERY CASE, VACUUM execution + failover]\nI've made a mistake in my writing of the previous email [1].\nDELETE from was executed before pausing the WAL replay on standby.\nIn short, the procedure and results were correct. But I repeated the\nperformance measurement just in case. The results are still great and \nalmost the same as the previous measurement.\n\n| s_b | master | patched | %reg | \n|-------|--------|---------|--------| \n| 128MB | 3.043 | 3.009 | -1.13% | \n| 1GB | 3.417 | 3.410 | -0.21% | \n| 20GB | 20.597 | 2.410 | -755% | \n| 100GB | 65.734 | 2.409 | -2629% |\n\nBased from the results above, with the patches applied,\nthe performance for both recovery and non-recovery were relatively close.\nFor default and small shared_buffers (128MB, 1GB), the performance is\nrelatively the same as master. 
But we see the benefit when we have large shared_buffers setting.\n\nI've tested using the same test case I indicated in the previous email,\nIncluding the following additional setting:\nvacuum_cost_delay = 0\nvacuum_cost_limit = 10000\n\nThat's it for the vacuum optimization. Feedback and comments would be highly appreciated.\n\n2. Truncate Optimization\nI'll post a separate patch in the future for the truncate optimization which modifies the\nDropRelFileNodesAllBuffers and related functions along the truncate path..\n\nThank you.\n\nRegards,\nKirk Jamison\n\n[1] https://www.postgresql.org/message-id/OSBPR01MB2341672E9A95E5EC6D2E79B5EF020%40OSBPR01MB2341.jpnprd01.prod.outlook.com", "msg_date": "Thu, 22 Oct 2020 00:41:48 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> I have confirmed that the above comment (commenting out the lines in\n> RelationTruncate) solves the issue for non-recovery case.\n> The attached 0004 patch is just for non-recovery testing and is not included in\n> the final set of patches to be committed for vacuum optimization.\n\nI'm relieved to hear that.\n\nAs for 0004:\nWhen testing TRUNCATE, remove the change to storage.c because it was intended to troubleshoot the VACUUM test.\nWhat's the change in bufmgr.c for? Is it to be included in 0001 or 0002?\n\n\n> The table below shows the vacuum execution time for non-recovery case.\n> I've also subtracted the execution time when VACUUM (truncate off) is set.\n> \n> [NON-RECOVERY CASE - VACUUM execution Time in seconds]\n(snip)\n> | 100GB | 65.456 | 1.795 | -3546.57% |\n\nSo, the full shared buffer scan for 10,000 relations took about as long as 63 seconds (= 6.3 ms per relation). 
It's nice to shorten this long time.\n\nI'll review the patch soon.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 01:33:31 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "> As for 0004:\n> When testing TRUNCATE, remove the change to storage.c because it was\n> intended to troubleshoot the VACUUM test.\n\nI meant vacuum.c. Sorry.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 01:35:38 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "The patch looks good except for the minor one:\n\n(1)\n+\t * as the total nblocks for a given fork. The cached value returned by\n\nnblocks -> blocks\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 01:53:07 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, October 22, 2020 10:34 AM, Tsunakwa-san wrote:\n> > I have confirmed that the above comment (commenting out the lines in\n> > RelationTruncate) solves the issue for non-recovery case.\n> > The attached 0004 patch is just for non-recovery testing and is not\n> > included in the final set of patches to be committed for vacuum\n> optimization.\n> \n> I'm relieved to hear that.\n> \n> As for 0004:\n> When testing TRUNCATE, remove the change to storage.c because it was\n> intended to troubleshoot the VACUUM test.\nI've removed it now.\n\n> What's the change in bufmgr.c for? Is it to be included in 0001 or 0002?\n\nRight. But that should be in 0003. 
Fixed.\n\nI also fixed the feedback from the previous email:\n>(1)\n>+\t * as the total nblocks for a given fork. The cached value returned by\n>\n>nblocks -> blocks\n\n\n> > The table below shows the vacuum execution time for non-recovery case.\n> > I've also subtracted the execution time when VACUUM (truncate off) is set.\n> >\n> > [NON-RECOVERY CASE - VACUUM execution Time in seconds]\n> (snip)\n> > | 100GB | 65.456 | 1.795 | -3546.57% |\n> \n> So, the full shared buffer scan for 10,000 relations took about as long as 63\n> seconds (= 6.3 ms per relation). It's nice to shorten this long time.\n> \n> I'll review the patch soon.\n\nThank you very much for the reviews. Attached are the latest set of patches.\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 22 Oct 2020 02:06:43 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Oct 22, 2020 at 3:07 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n+ /*\n+ * Get the total number of to-be-invalidated blocks of a relation as well\n+ * as the total blocks for a given fork. The cached value returned by\n+ * smgrnblocks could be smaller than the actual number of existing buffers\n+ * of the file. This is caused by buggy Linux kernels that might not have\n+ * accounted for the recent write. Give up the optimization if the block\n+ * count of any fork cannot be trusted.\n+ */\n+ for (i = 0; i < nforks; i++)\n+ {\n+ /* Get the number of blocks for a relation's fork */\n+ nForkBlocks[i] = smgrnblocks(smgr_reln, forkNum[i], &accurate);\n+\n+ if (!accurate)\n+ break;\n\nHmmm. The Linux comment led me to commit ffae5cc and a 2006 thread[1]\nshowing a buggy sequence of system calls. AFAICS it was not even an\nSMP/race problem of the type you might half expect, it was a single\nprocess not seeing its own write. 
I didn't find details on the\nversion, filesystem etc.\n\nSearching for our message \"This has been seen to occur with buggy\nkernels; consider updating your system\" turns up recent-ish results\ntoo. The reports I read involved GlusterFS, which I don't personally\nknow anything about, but it claims full POSIX compliance, and POSIX is\nstrict about that sort of thing, so I'd guess that is/was a fairly\nserious bug or misconfiguration. Surely there must be other symptoms\nfor PostgreSQL on such systems too, like sequential scans that don't\nsee recently added pages.\n\nBut... does the proposed caching behaviour and \"accurate\" flag really\nhelp with any of that? Cached values come from lseek() anyway. If we\njust trusted unmodified smgrnblocks(), someone running on such a\nforgetful file system might eventually see nasty errors because we\nleft buffers in the buffer pool that prevent a checkpoint from\ncompleting (and panic?), but they might also see other really strange\nerrors, and that applies with or without that \"accurate\" flag, no?\n\n[1] https://www.postgresql.org/message-id/flat/26202.1159032931%40sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 22 Oct 2020 16:35:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hmmm. The Linux comment led me to commit ffae5cc and a 2006 thread[1]\n> showing a buggy sequence of system calls.\n\nHah, blast from the past ...\n\n> AFAICS it was not even an\n> SMP/race problem of the type you might half expect, it was a single\n> process not seeing its own write. 
I didn't find details on the\n> version, filesystem etc.\n\nPer the referenced bug-reporting thread, it was ReiserFS and/or NFS on\nSLES 9.3; so, dubious storage choices on an ancient-even-then Linux\nkernel.\n\nI think the takeaway point is not so much that that particular bug\nmight recur as that storage infrastructure does sometimes have bugs.\nIf you're wanting to introduce new assumptions about what the filesystem\nwill do, it's prudent to think about how badly will we break if the\nassumptions fail.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Oct 2020 00:52:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 22 Oct 2020 16:35:27 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Oct 22, 2020 at 3:07 PM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n> + /*\n> + * Get the total number of to-be-invalidated blocks of a relation as well\n> + * as the total blocks for a given fork. The cached value returned by\n> + * smgrnblocks could be smaller than the actual number of existing buffers\n> + * of the file. This is caused by buggy Linux kernels that might not have\n> + * accounted for the recent write. Give up the optimization if the block\n> + * count of any fork cannot be trusted.\n> + */\n> + for (i = 0; i < nforks; i++)\n> + {\n> + /* Get the number of blocks for a relation's fork */\n> + nForkBlocks[i] = smgrnblocks(smgr_reln, forkNum[i], &accurate);\n> +\n> + if (!accurate)\n> + break;\n> \n> Hmmm. The Linux comment led me to commit ffae5cc and a 2006 thread[1]\n> showing a buggy sequence of system calls. AFAICS it was not even an\n> SMP/race problem of the type you might half expect, it was a single\n> process not seeing its own write. I didn't find details on the\n> version, filesystem etc.\n\nAnyway that comment is irrelevant to the added code. 
The point here is\nthat the returned value may not be reliable, not only due to\nkernel bugs, but also because the file can be extended/truncated by other\nprocesses. But I suppose that we may have a synchronized file-size cache\nin the future?\n\n> Searching for our message \"This has been seen to occur with buggy\n> kernels; consider updating your system\" turns up recent-ish results\n> too. The reports I read involved GlusterFS, which I don't personally\n> know anything about, but it claims full POSIX compliance, and POSIX is\n> strict about that sort of thing, so I'd guess that is/was a fairly\n> serious bug or misconfiguration. Surely there must be other symptoms\n> for PostgreSQL on such systems too, like sequential scans that don't\n> see recently added pages.\n> \n> But... does the proposed caching behaviour and \"accurate\" flag really\n> help with any of that? Cached values come from lseek() anyway. If we\n\nThat \"accurate\" (good name wanted) flag suggests that it is guaranteed\nthat we don't have a buffer for blocks after that block number.\n\n> just trusted unmodified smgrnblocks(), someone running on such a\n> forgetful file system might eventually see nasty errors because we\n> left buffers in the buffer pool that prevent a checkpoint from\n> completing (and panic?), but they might also see other really strange\n> errors, and that applies with or without that \"accurate\" flag, no?\n> \n> [1] https://www.postgresql.org/message-id/flat/26202.1159032931%40sss.pgh.pa.us\n\nsmgrtruncate and smgrextend modify that cache from their parameters,\nnot from lseek(). At the very first the value in the cache comes from\nlseek(), but if nothing other than postgres has changed the file size,\nI believe we can rely on the cache even with such buggy kernels, if any\nstill exist.\n\nIf there's no longer such a buggy kernel, we can rely on lseek() only\nwhen InRecovery. If we had a synchronized file-size cache we could rely\non the cache even while !InRecovery. 
(I'm not sure about how vacuum\naffects, though.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 14:16:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Oct 22, 2020 at 5:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Per the referenced bug-reporting thread, it was ReiserFS and/or NFS on\n> SLES 9.3; so, dubious storage choices on an ancient-even-then Linux\n> kernel.\n\nOhhhh. I can reproduce that on a modern Linux box by forcing\nwriteback to a full NFS filesystem[1], approximately as the kernel\ndoes asynchronously when it feels like it, causing the size reported\nby SEEK_END to go down.\n\n$ cat magic_shrinking_file.c\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n\nint main()\n{\n int fd;\n char buffer[8192] = {0};\n\n fd = open(\"/mnt/test_loopback_remote/dir/file\", O_RDWR | O_APPEND);\n if (fd < 0) {\n perror(\"open\");\n return EXIT_FAILURE;\n }\n printf(\"lseek(..., SEEK_END) = %jd\\n\", lseek(fd, 0, SEEK_END));\n printf(\"write(...) = %zd\\n\", write(fd, buffer, sizeof(buffer)));\n printf(\"lseek(..., SEEK_END) = %jd\\n\", lseek(fd, 0, SEEK_END));\n printf(\"fsync(...) = %d\\n\", fsync(fd));\n printf(\"lseek(..., SEEK_END) = %jd\\n\", lseek(fd, 0, SEEK_END));\n\n return EXIT_SUCCESS;\n}\n$ cc magic_shrinking_file.c\n$ ./a.out\nlseek(..., SEEK_END) = 9670656\nwrite(...) = 8192\nlseek(..., SEEK_END) = 9678848\nfsync(...) = -1\nlseek(..., SEEK_END) = 9670656\n\n> I think the takeaway point is not so much that that particular bug\n> might recur as that storage infrastructure does sometimes have bugs.\n> If you're wanting to introduce new assumptions about what the filesystem\n> will do, it's prudent to think about how badly will we break if the\n> assumptions fail.\n\nYeah. 
My point was just that the caching trick doesn't seem to\nimprove matters on this particular front, it can just cache a bogus\nvalue.\n\n[1] https://www.postgresql.org/message-id/CAEepm=1FGo=ACPKRmAxvb53mBwyVC=TDwTE0DMzkWjdbAYw7sw@mail.gmail.com\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:54:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 22 Oct 2020 01:33:31 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > The table below shows the vacuum execution time for non-recovery case.\n> > I've also subtracted the execution time when VACUUM (truncate off) is set.\n> > \n> > [NON-RECOVERY CASE - VACUUM execution Time in seconds]\n> (snip)\n> > | 100GB | 65.456 | 1.795 | -3546.57% |\n> \n> So, the full shared buffer scan for 10,000 relations took about as long as 63 seconds (= 6.3 ms per relation). 
It's nice to shorten this long time.\n\nI'm not sure about the exact steps of the test, but it can be expected\nif we have many small relations to truncate.\n\nCurrently BUF_DROP_FULL_SCAN_THRESHOLD is set to NBuffers / 512, which\nis quite arbitrary and comes from a wild guess.\n\nPerhaps we need to run benchmarks that drop one relation with several\ndifferent ratios between the number of buffers to-be-dropped and\nNBuffers, and preferably both on spinning rust and SSD.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 15:14:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 22 Oct 2020 14:16:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 22 Oct 2020 16:35:27 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> > On Thu, Oct 22, 2020 at 3:07 PM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> > But... does the proposed caching behaviour and \"accurate\" flag really\n> > help with any of that? Cached values come from lseek() anyway. If we\n> \n> That \"accurate\" (good name wanted) flag suggests that it is guaranteed\n> that we don't have a buffer for blocks after that block number.\n> \n> > just trusted unmodified smgrnblocks(), someone running on such a\n> > forgetful file system might eventually see nasty errors because we\n> > left buffers in the buffer pool that prevent a checkpoint from\n> > completing (and panic?), but they might also see other really strange\n> > errors, and that applies with or without that \"accurate\" flag, no?\n> > \n> > [1] https://www.postgresql.org/message-id/flat/26202.1159032931%40sss.pgh.pa.us\n> \n> smgrtruncate and smgrextend modify that cache from their parameters,\n> not from lseek(). 
At the very first the value in the cache comes from\n> lseek() but if nothing other than postgres have changed the file size,\n> I believe we can rely on the cache even with such a buggy kernels even\n> if still exists.\n\nMmm. Not exact. The requirement here is that we must be certain that\nthe we don't have a buffuer for blocks after the file size known to\nthe process. While recoverying, If the first lseek() returned smaller\nsize than actual, we cannot have a buffer for the blocks after the\nsize. After we trncated or extended the file, we are certain that we\ndon't have a buffer for unknown blocks.\n\n> If there's no longer such a buggy kernel, we can rely on lseek() only\n> when InRecovery. If we had synchronized file size cache we could rely\n> on the cache even while !InRecovery. (I'm not sure about how vacuum\n> affects, though.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 15:33:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Oct 22, 2020 at 7:33 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 22 Oct 2020 14:16:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > smgrtruncate and msgrextend modifies that cache from their parameter,\n> > not from lseek(). At the very first the value in the cache comes from\n> > lseek() but if nothing other than postgres have changed the file size,\n> > I believe we can rely on the cache even with such a buggy kernels even\n> > if still exists.\n>\n> Mmm. Not exact. The requirement here is that we must be certain that\n> the we don't have a buffuer for blocks after the file size known to\n> the process. While recoverying, If the first lseek() returned smaller\n> size than actual, we cannot have a buffer for the blocks after the\n> size. 
After we trncated or extended the file, we are certain that we\n> don't have a buffer for unknown blocks.\n\nThanks, I understand now. Something feels fragile about it, perhaps\nbecause it's not really acting as a \"cache\" anymore despite its name,\nbut I see the logic now. It becomes the authoritative source of\ninformation, even if the kernel decides to make our file smaller\nasynchronously.\n\n> > If there's no longer such a buggy kernel, we can rely on lseek() only\n> > when InRecovery. If we had synchronized file size cache we could rely\n> > on the cache even while !InRecovery. (I'm not sure about how vacuum\n> > affects, though.)\n\nPerhaps the buggy kernel of 2006 is actually Linux working as designed\naccording to its philosophy on ejecting dirty buffers on writeback\nfailure (and apparently adjusting the size at the same time). At\nleast in 2020 it'll tell us about the problem that caused that when we\nnext perform an operation that reads the error counter, but in the\ncase of a relation we're dropping -- the use case in this thread --\nthat won't happen! (I mean, something else will probably tell you\nyour system is toast pretty soon, but this particular condition may be\nundetected).\n\nI think a synchronised file size cache wouldn't be enough to use this\ntrick outside the recovery process, because the initial value would\ncome from a call to lseek(), but unlike recovery, that wouldn't happen\n*before* we start putting pages in the buffer pool. 
Also, if we one\nday have a size-limited relcache, even recovery could get into\ntrouble, if it evicts the RelationData that holds the authoritative\nnblocks value.\n\n\n", "msg_date": "Thu, 22 Oct 2020 19:45:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 22 Oct 2020 18:54:43 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Thu, Oct 22, 2020 at 5:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Per the referenced bug-reporting thread, it was ReiserFS and/or NFS on\n> > SLES 9.3; so, dubious storage choices on an ancient-even-then Linux\n> > kernel.\n> \n> Ohhhh. I can reproduce that on a modern Linux box by forcing\n> writeback to a full NFS filesystem[1], approximately as the kernel\n> does asynchronously when it feels like it, causing the size reported\n> by SEEK_END to go down.\n\n<test code>\n\n> $ cc magic_shrinking_file.c\n> $ ./a.out\n> lseek(..., SEEK_END) = 9670656\n> write(...) = 8192\n> lseek(..., SEEK_END) = 9678848\n> fsync(...) = -1\n> lseek(..., SEEK_END) = 9670656\n\nInteresting..\n\n> > I think the takeaway point is not so much that that particular bug\n> > might recur as that storage infrastructure does sometimes have bugs.\n> > If you're wanting to introduce new assumptions about what the filesystem\n> > will do, it's prudent to think about how badly will we break if the\n> > assumptions fail.\n> \n> Yeah. My point was just that the caching trick doesn't seem to\n> improve matters on this particular front, it can just cache a bogus\n> value.\n> \n> [1] https://www.postgresql.org/message-id/CAEepm=1FGo=ACPKRmAxvb53mBwyVC=TDwTE0DMzkWjdbAYw7sw@mail.gmail.com\n\nAs I wrote in another branch of this thread, the requirement here is\nmaking sure that we don't have a buffer for blocks after the file size\nknown to the process. 
Even if the cache gets a bogus value at the\nfirst load, it's still true that we don't have a buffers for blocks\nafter that size. There's no problem as far as DropRelFileNodeBuffers\ndoesn't get a smaller value from smgrnblocks than the size the server\nthinks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 15:48:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Thomas Munro <thomas.munro@gmail.com>\r\n> On Thu, Oct 22, 2020 at 7:33 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> > Mmm. Not exact. The requirement here is that we must be certain that\r\n> > the we don't have a buffuer for blocks after the file size known to\r\n> > the process. While recoverying, If the first lseek() returned smaller\r\n> > size than actual, we cannot have a buffer for the blocks after the\r\n> > size. After we trncated or extended the file, we are certain that we\r\n> > don't have a buffer for unknown blocks.\r\n> \r\n> Thanks, I understand now. Something feels fragile about it, perhaps\r\n> because it's not really acting as a \"cache\" anymore despite its name,\r\n> but I see the logic now. It becomes the authoritative source of\r\n> information, even if the kernel decides to make our file smaller\r\n> asynchronously.\r\n\r\nThank you Horiguchi-san, you are a savior! I was worried like the end of the world has come.\r\n\r\n\r\n> I think a synchronised file size cache wouldn't be enough to use this\r\n> trick outside the recovery process, because the initial value would\r\n> come from a call to lseek(), but unlike recovery, that wouldn't happen\r\n> *before* we start putting pages in the buffer pool. 
Also, if we one\r\n> day have a size-limited relcache, even recovery could get into\r\n> trouble, if it evicts the RelationData that holds the authoritative\r\n> nblocks value.\r\n\r\nThat's too bad, because we hoped we may be able to various operations during normal operation (TRUNCATE, DROP TABLE/INDEX, DROP DATABASE, etc.) An honest man can't believe the system call, that's a hell.\r\n\r\nI'm probably being silly, but can't we avoid the problem by using fstat() instead of lseek(SEEK_END)? Would they return the same value from the i-node?\r\n\r\nOr, can't we just try to do BufTableLookup() one block after what smgrnblocks() returns?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 22 Oct 2020 07:31:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 22 Oct 2020 07:31:55 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Thomas Munro <thomas.munro@gmail.com>\n> > On Thu, Oct 22, 2020 at 7:33 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > Mmm. Not exact. The requirement here is that we must be certain that\n> > > the we don't have a buffuer for blocks after the file size known to\n> > > the process. While recoverying, If the first lseek() returned smaller\n> > > size than actual, we cannot have a buffer for the blocks after the\n> > > size. After we trncated or extended the file, we are certain that we\n> > > don't have a buffer for unknown blocks.\n> > \n> > Thanks, I understand now. Something feels fragile about it, perhaps\n> > because it's not really acting as a \"cache\" anymore despite its name,\n> > but I see the logic now. It becomes the authoritative source of\n> > information, even if the kernel decides to make our file smaller\n> > asynchronously.\n> \n> Thank you Horiguchi-san, you are a savior! 
I was worried like the end of the world has come.\n> \n> \n> > I think a synchronised file size cache wouldn't be enough to use this\n> > trick outside the recovery process, because the initial value would\n> > come from a call to lseek(), but unlike recovery, that wouldn't happen\n> > *before* we start putting pages in the buffer pool. Also, if we one\n> > day have a size-limited relcache, even recovery could get into\n> > trouble, if it evicts the RelationData that holds the authoritative\n> > nblocks value.\n> \n> That's too bad, because we hoped we may be able to various operations during normal operation (TRUNCATE, DROP TABLE/INDEX, DROP DATABASE, etc.) An honest man can't believe the system call, that's a hell.\n> \n> I'm probably being silly, but can't we avoid the problem by using fstat() instead of lseek(SEEK_END)? Would they return the same value from the i-node?\n> \n> Or, can't we just try to do BufTableLookup() one block after what smgrnblocks() returns?\n\nLossy smgrrelcache or relacache is not a hard obstacle. As the same\nwith the case of !accurate, we just give up the optimized dropping if\nthe relcache doesn't give the authoritative size.\n\nBy the way, heap scan finds the size of target relation using\nsmgrnblocks(). I'm not sure why we don't miss recently-extended pages\non a heap-scan? It seems to be possible that concurrent checkpoint\nfsyncs relation files inbetween the extension and scanning and the\nscanning gets smaller size than it really is.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Oct 2020 17:50:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Oct 22, 2020 at 9:50 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> By the way, heap scan finds the size of target relation using\n> smgrnblocks(). 
I'm not sure why we don't miss recently-extended pages\n> on a heap-scan? It seems to be possible that concurrent checkpoint\n> fsyncs relation files inbetween the extension and scanning and the\n> scanning gets smaller size than it really is.\n\nYeah. That's a narrow window: fsync() returns an error after the file\nshrinks and we immediately panic. A version with a wider window: the\nkernel tries to write in the background, gets an I/O error, shrinks\nthe file, but we don't know this and we continue running until the\nnext checkpoint calls fsync(), sees the error and panics. Seq scans\nbetween those two events fail to see recently committed data at the\nend of the table.\n\n\n", "msg_date": "Thu, 22 Oct 2020 22:27:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Oct 22, 2020 at 2:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 22 Oct 2020 07:31:55 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\n> > From: Thomas Munro <thomas.munro@gmail.com>\n> > > On Thu, Oct 22, 2020 at 7:33 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > Mmm. Not exact. The requirement here is that we must be certain that\n> > > > the we don't have a buffuer for blocks after the file size known to\n> > > > the process. While recoverying, If the first lseek() returned smaller\n> > > > size than actual, we cannot have a buffer for the blocks after the\n> > > > size. After we trncated or extended the file, we are certain that we\n> > > > don't have a buffer for unknown blocks.\n> > >\n> > > Thanks, I understand now. Something feels fragile about it, perhaps\n> > > because it's not really acting as a \"cache\" anymore despite its name,\n> > > but I see the logic now. 
It becomes the authoritative source of\n> > > information, even if the kernel decides to make our file smaller\n> > > asynchronously.\n\nI understand your hesitation but I guess if we can't rely on this\ncache in recovery then probably we have a problem without this patch\nitself because the current relation extension (in ReadBuffer_common)\nrelies on the smgrnblocks. So, if the cache lies with us it will\noverwrite some existing block.\n\n> > Thank you Horiguchi-san, you are a savior! I was worried like the end of the world has come.\n> >\n> >\n> > > I think a synchronised file size cache wouldn't be enough to use this\n> > > trick outside the recovery process, because the initial value would\n> > > come from a call to lseek(), but unlike recovery, that wouldn't happen\n> > > *before* we start putting pages in the buffer pool.\n\nThis is true because the other sessions might have pulled the page to\nbuffer pool but I think if we have invalidations for\nextension/truncation of a relation then probably before relying on\nthis value we can process the invalidations to update this cache\nvalue.\n\n> > > Also, if we one\n> > > day have a size-limited relcache, even recovery could get into\n> > > trouble, if it evicts the RelationData that holds the authoritative\n> > > nblocks value.\n> >\n> > That's too bad, because we hoped we may be able to various operations during normal operation (TRUNCATE, DROP TABLE/INDEX, DROP DATABASE, etc.) An honest man can't believe the system call, that's a hell.\n> >\n> > I'm probably being silly, but can't we avoid the problem by using fstat() instead of lseek(SEEK_END)? Would they return the same value from the i-node?\n> >\n> > Or, can't we just try to do BufTableLookup() one block after what smgrnblocks() returns?\n>\n> Lossy smgrrelcache or relacache is not a hard obstacle. 
As in\n> the case of !accurate, we just give up the optimized dropping if\n> the relcache doesn't give the authoritative size.\n>\n\nI think detecting a lossy cache is the key thing; it probably won't\nbe as straightforward as it is in the recovery path.\n\n> By the way, a heap scan finds the size of the target relation using\n> smgrnblocks(). I'm not sure why we don't miss recently-extended pages\n> on a heap scan. It seems possible that a concurrent checkpoint\n> fsyncs relation files between the extension and the scan, and the\n> scan sees a smaller size than the file really has.\n>\n\nYeah, I think that would be a problem, but not as serious as in the\ncase we are trying to deal with here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Oct 2020 15:24:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Oct 22, 2020 at 8:32 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> I'm probably being silly, but can't we avoid the problem by using fstat() instead of lseek(SEEK_END)? Would they return the same value from the i-node?\n\nAmazingly, st_size can disagree with SEEK_END when using the Linux NFS\nclient, but its behaviour is worse. Here's a sequence from a Linux\nNFS client talking to a Linux NFS server with no free space. This\ntime, I also replaced the fsync() with sleep(60), just to make it\nclear that the SEEK_END offset can move at any time due to asynchronous\nactivity in kernel threads:\n\nlseek(..., SEEK_END) = 9670656\nfstat(...) = 0, st_size = 9670656\n\nwrite(...) = 8192\nlseek(..., SEEK_END) = 9678848\nfstat(...) = 0, st_size = 9670656 (*1)\n\nsleep(...) = 0\n\nlseek(..., SEEK_END) = 9670656 (*2)\nfstat(...) = 0, st_size = 9670656\n\nfsync(...) = -1\nlseek(..., SEEK_END) = 9670656\nfstat(...) = 0, st_size = 9670656\nfsync(...) 
= 0\n\nHowever, I'm not entirely sure which phenomena visible here to blame\non which subsystems, and therefore which things to expect on local\nfilesystems, or on other operating systems. I can say that with a\nFreeBSD NFS client and the same Linux NFS server, I don't see\nphenomenon *1 (unsurprising) but I do see phenomenon *2 (surprising to\nme).\n\n> Or, can't we just try to do BufTableLookup() one block after what smgrnblocks() returns?\n\nUnfortunately the problem isn't limited to one block.\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:45:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Thomas Munro <thomas.munro@gmail.com>\r\n> > I'm probably being silly, but can't we avoid the problem by using fstat()\r\n> instead of lseek(SEEK_END)? Would they return the same value from the\r\n> i-node?\r\n> \r\n> Amazingly, st_size can disagree with SEEK_END when using the Linux NFS\r\n> client, but its behaviour is worse. Here's a sequence from a Linux\r\n> NFS client talking to a Linux NFS server with no free space. This\r\n> time, I also replaced the fsync() with sleep(60), just to make it\r\n> clear that SEEK_END offset can move at any time due to asynchronous\r\n> activity in kernel threads:\r\n\r\nThank you for experimenting. That's surely amazing. So, it makes sense for commercial DBMSs and MySQL to preallocate data files... (But IIRC, MySQL has provided an option to allocate a file per table like Postgres relatively recently.)\r\n\r\nFWIW, it seems safe to use the nodelalloc mount option with ext4 to disable delayed allocation, while xfs doesn't have such an option.\r\n\r\n> > Or, can't we just try to do BufTableLookup() one block after what\r\n> smgrnblocks() returns?\r\n> \r\n> Unfortunately the problem isn't limited to one block.\r\n\r\nYou're right. 
The data file can be extended by multiple blocks between disk writes.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 23 Oct 2020 00:56:35 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi everyone,\r\n\r\nAttached are the updated set of patches (V28).\r\n 0004 - Truncate optimization is a new patch, while the rest are similar to V27.\r\nThis passes the build, regression and TAP tests.\r\n\r\nApologies for the delay.\r\nI'll post the benchmark test results on SSD soon, considering the suggested benchmark of Horiguchi-san: \r\n> Currently BUF_DROP_FULL_SCAN_THRESHOLD is set to Nbuffers / 512,\r\n> which is quite arbitrary that comes from a wild guess.\r\n> \r\n> Perhaps we need to run benchmarks that drops one relation with several\r\n> different ratios between the number of buffers to-be-dropped and Nbuffers,\r\n> and preferably both on spinning rust and SSD.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Wed, 28 Oct 2020 12:52:08 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "The patch looks almost good except for the minor ones:\r\n\r\n(1)\r\n+\tfor (i = 0; i < nnodes; i++)\r\n+\t{\r\n+\t\tRelFileNodeBackend rnode = smgr_reln[i]->smgr_rnode;\r\n+\r\n+\t\trnodes[i] = rnode;\r\n+\t}\r\n\r\nYou can write:\r\n\r\n+\tfor (i = 0; i < nnodes; i++)\r\n+\t\trnodes[i] = smgr_reln[i]->smgr_rnode;\r\n\r\n\r\n(2)\r\n+\t\tif (!accurate || j >= MAX_FORKNUM ||\r\n\r\nThe correct condition would be:\r\n\r\n+\t\tif (j <= MAX_FORKNUM ||\r\n\r\nbecause j becomes MAX_FORKNUM + 1 if accurate sizes for all forks could be obtained. 
If any fork's size is inaccurate, j is <= MAX_FORKNUM when exiting the loop, so you don't need to test for the accurate flag.\r\n\r\n\r\n(3)\r\n+\t\t{\r\n+\t\t\tgoto buffer_full_scan;\r\n+\t\t\treturn;\r\n+\t\t}\r\n\r\nThe return after goto cannot be reached, so this should just be:\r\n\r\n+\t\t\tgoto buffer_full_scan;\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 29 Oct 2020 02:08:02 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\r\n\r\nI've updated the patch 0004 (Truncate optimization) with the previous comments of\r\nTsunakawa-san already addressed in the patch. (Thank you very much for the review.) \r\nThe change here compared to the previous version is that in DropRelFileNodesAllBuffers()\r\nwe don't check for the accurate flag anymore when deciding whether to optimize or not.\r\nFor relations with blocks that do not exceed the threshold for full scan, we call\r\nDropRelFileNodeBuffers where the flag will be checked anyway. Otherwise, we proceed\r\nto the traditional buffer scan. Thoughts?\r\n\r\nI've measured recovery performance for TRUNCATE.\r\nTest case: 1 parent table with 100 child partitions. TRUNCATE each child partition (1 transaction per table).\r\nCurrently, it takes a while to recover when we have a large shared_buffers setting, but with the patch applied\r\nthe recovery time is almost constant (0.206 s below).\r\n\r\n| s_b   | master | patched | \r\n|-------|--------|---------| \r\n| 128MB | 0.105  | 0.105   | \r\n| 1GB   | 0.205  | 0.205   | \r\n| 20GB  | 2.008  | 0.206   | \r\n| 100GB | 9.315  | 0.206   |\r\n\r\nMethod of testing (assuming streaming replication is configured):\r\n1. Create 1 parent table and 100 child partitions.\r\n2. Insert data into each table. \r\n3. Pause WAL replay on standby. ( SELECT pg_wal_replay_pause(); )\r\n4. 
TRUNCATE each child partition on the primary (1 transaction per table). Stop the primary.\r\n5. Resume the WAL replay and promote standby. ( SELECT pg_wal_replay_resume(); pg_ctl promote)\r\nI have confirmed that the relations became empty on the standby.\r\n\r\nYour thoughts and feedback are very much appreciated.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Wed, 4 Nov 2020 02:58:27 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Nov 4, 2020 at 8:28 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Hi,\n>\n> I've updated the patch 0004 (Truncate optimization) with the previous comments of\n> Tsunakawa-san already addressed in the patch. (Thank you very much for the review.)\n> The change here compared to the previous version is that in DropRelFileNodesAllBuffers()\n> we don't check for the accurate flag anymore when deciding whether to optimize or not.\n> For relations with blocks that do not exceed the threshold for full scan, we call\n> DropRelFileNodeBuffers where the flag will be checked anyway. Otherwise, we proceed\n> to the traditional buffer scan. Thoughts?\n>\n\nCan we do the Truncate optimization once we decide about your other\npatch, as I see a few problems with it? If we can get the first patch\n(vacuum optimization) committed it might be a bit easier for us to get\nthe truncate optimization. 
If possible, let's focus on (auto)vacuum\noptimization first.\n\nA few comments on the patches:\n======================\nv29-0002-Add-bool-param-in-smgrnblocks-for-cached-blocks\n-----------------------------------------------------------------------------------\n1.\n-smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n+smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n {\n BlockNumber result;\n\n /*\n * For now, we only use cached values in recovery due to lack of a shared\n- * invalidation mechanism for changes in file size.\n+ * invalidation mechanism for changes in file size. The cached values\n+ * could be smaller than the actual number of existing buffers of the file.\n+ * This is caused by lseek of buggy Linux kernels that might not have\n+ * accounted for the recent write.\n */\n if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n+ {\n+ if (accurate != NULL)\n+ *accurate = true;\n+\n\nI don't understand this comment. A few emails back, I think we have\ndiscussed that the cached value can't be less than the number of buffers\nduring recovery. If that happens to be true then we have some problem.\nIf you want to explain the 'accurate' variable then you can do the same\natop the function. Would it be better to name this variable\n'cached'?\n\nv29-0003-Optimize-DropRelFileNodeBuffers-during-recovery\n----------------------------------------------------------------------------------\n2.\n+ /* Check that it is in the buffer pool. If not, do nothing. 
*/\n+ LWLockAcquire(bufPartitionLock, LW_SHARED);\n+ buf_id = BufTableLookup(&bufTag, bufHash);\n+ LWLockRelease(bufPartitionLock);\n+\n+ if (buf_id < 0)\n+ continue;\n+\n+ bufHdr = GetBufferDescriptor(buf_id);\n+\n+ buf_state = LockBufHdr(bufHdr);\n+\n+ if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n\nI think a pre-check for RelFileNode might be better before LockBufHdr\nfor the reasons mentioned in this function few lines down.\n\n3.\n-DropRelFileNodeBuffers(RelFileNodeBackend rnode, ForkNumber *forkNum,\n+DropRelFileNodeBuffers(SMgrRelation smgr_reln, ForkNumber *forkNum,\n int nforks, BlockNumber *firstDelBlock)\n {\n int i;\n int j;\n+ RelFileNodeBackend rnode;\n+ bool accurate;\n\nIt is better to initialize accurate with false. Again, is it better to\nchange this variable name as 'cached'.\n\n4.\n+ /*\n+ * Look up the buffer in the hashtable if the block size is known to\n+ * be accurate and the total blocks to be invalidated is below the\n+ * full scan threshold. Otherwise, give up the optimization.\n+ */\n+ if (accurate && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n+ {\n+ for (j = 0; j < nforks; j++)\n+ {\n+ BlockNumber curBlock;\n+\n+ for (curBlock = firstDelBlock[j]; curBlock < nForkBlocks[j]; curBlock++)\n+ {\n+ uint32 bufHash; /* hash value for tag */\n+ BufferTag bufTag; /* identity of requested block */\n+ LWLock *bufPartitionLock; /* buffer partition lock for it */\n+ int buf_id;\n+\n+ /* create a tag so we can lookup the buffer */\n+ INIT_BUFFERTAG(bufTag, rnode.node, forkNum[j], curBlock);\n+\n+ /* determine its hash code and partition lock ID */\n+ bufHash = BufTableHashCode(&bufTag);\n+ bufPartitionLock = BufMappingPartitionLock(bufHash);\n+\n+ /* Check that it is in the buffer pool. If not, do nothing. 
*/\n+ LWLockAcquire(bufPartitionLock, LW_SHARED);\n+ buf_id = BufTableLookup(&bufTag, bufHash);\n+ LWLockRelease(bufPartitionLock);\n+\n+ if (buf_id < 0)\n+ continue;\n+\n+ bufHdr = GetBufferDescriptor(buf_id);\n+\n+ buf_state = LockBufHdr(bufHdr);\n+\n+ if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n+ bufHdr->tag.forkNum == forkNum[j] &&\n+ bufHdr->tag.blockNum >= firstDelBlock[j])\n+ InvalidateBuffer(bufHdr); /* releases spinlock */\n+ else\n+ UnlockBufHdr(bufHdr, buf_state);\n+ }\n+ }\n+ return;\n+ }\n\nCan we move the code under this 'if' condition to a separate function,\nsay FindAndDropRelFileNodeBuffers or something like that?\n\nv29-0004-TRUNCATE-optimization\n------------------------------------------------\n5.\n+ for (i = 0; i < n; i++)\n+ {\n+ nforks = 0;\n+ nBlocksToInvalidate = 0;\n+\n+ for (j = 0; j <= MAX_FORKNUM; j++)\n+ {\n+ if (!smgrexists(rels[i], j))\n+ continue;\n+\n+ /* Get the number of blocks for a relation's fork */\n+ nblocks = smgrnblocks(rels[i], j, NULL);\n+\n+ nBlocksToInvalidate += nblocks;\n+\n+ forks[nforks++] = j;\n+ }\n+ if (nBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)\n+ goto buffer_full_scan;\n+\n+ DropRelFileNodeBuffers(rels[i], forks, nforks, firstDelBlocks);\n+ }\n+ pfree(nodes);\n+ pfree(rels);\n+ pfree(rnodes);\n+ return;\n\nI think this can be slower than the current Truncate. 
Say there are\nthree relations and for one of them the size is greater than\nBUF_DROP_FULL_SCAN_THRESHOLD; then you would anyway have to scan the\nentire shared buffers, so the work done in the optimized path for the other two\nrelations will add some overhead.\n\nAlso, as written, I think you need to remove the nodes for which you\nhave invalidated the buffers via the optimized path, no?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 4 Nov 2020 15:59:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hello.\n\nMany of the questions are on the code following my past suggestions.\n\nAt Wed, 4 Nov 2020 15:59:17 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Nov 4, 2020 at 8:28 AM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n> >\n> > Hi,\n> >\n> > I've updated the patch 0004 (Truncate optimization) with the previous comments of\n> > Tsunakawa-san already addressed in the patch. (Thank you very much for the review.)\n> > The change here compared to the previous version is that in DropRelFileNodesAllBuffers()\n> > we don't check for the accurate flag anymore when deciding whether to optimize or not.\n> > For relations with blocks that do not exceed the threshold for full scan, we call\n> > DropRelFileNodeBuffers where the flag will be checked anyway. Otherwise, we proceed\n> > to the traditional buffer scan. Thoughts?\n> >\n> \n> Can we do a Truncate optimization once we decide about your other\n> patch as I see a few problems with it? If we can get the first patch\n> (vacuum optimization) committed it might be a bit easier for us to get\n> the truncate optimization. 
If possible, let's focus on (auto)vacuum\n> optimization first.\n> \n> Few comments on patches:\n> ======================\n> v29-0002-Add-bool-param-in-smgrnblocks-for-cached-blocks\n> -----------------------------------------------------------------------------------\n> 1.\n> -smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n> +smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n> {\n> BlockNumber result;\n> \n> /*\n> * For now, we only use cached values in recovery due to lack of a shared\n> - * invalidation mechanism for changes in file size.\n> + * invalidation mechanism for changes in file size. The cached values\n> + * could be smaller than the actual number of existing buffers of the file.\n> + * This is caused by lseek of buggy Linux kernels that might not have\n> + * accounted for the recent write.\n> */\n> if (InRecovery && reln->smgr_cached_nblocks[forknum] != InvalidBlockNumber)\n> + {\n> + if (accurate != NULL)\n> + *accurate = true;\n> +\n> \n> I don't understand this comment. Few emails back, I think we have\n> discussed that cached value can't be less than the number of buffers\n> during recovery. If that happens to be true then we have some problem.\n> If you want to explain 'accurate' variable then you can do the same\n> atop of function. Would it be better to name this variable as\n> 'cached'?\n\n(I agree that the comment needs to be fixed.)\n\nFWIW I don't think 'cached' suggests the characteristics of the\nreturned value on its interface. It was introduced to reduce fseek()\ncalls, and after that we have found that it can be regarded as the\nauthoritative source of the file size. The \"accurate\" means that it\nis guaranteed that we don't have a buffer for the file blocks further\nthan that number. I don't come up with a more proper word than\n\"accurate\" but also I don't think \"cached\" is proper here.\n\nBy the way, if there's a case where we extend a file by more than one\nblock the cached value becomes invalid. 
I'm not sure if it actually\nhappens, but the following sequence may lead to a problem. We need\nprotection for that case.\n\nsmgrnblocks() : cached n\ntruncate to n-5 : cached n-5\nextend to m + 2 : cached invalid\n(fsync failed)\nsmgrnblocks() : returns and caches n-5\n\n\n\n> v29-0003-Optimize-DropRelFileNodeBuffers-during-recovery\n> ----------------------------------------------------------------------------------\n> 2.\n> + /* Check that it is in the buffer pool. If not, do nothing. */\n> + LWLockAcquire(bufPartitionLock, LW_SHARED);\n> + buf_id = BufTableLookup(&bufTag, bufHash);\n> + LWLockRelease(bufPartitionLock);\n> +\n> + if (buf_id < 0)\n> + continue;\n> +\n> + bufHdr = GetBufferDescriptor(buf_id);\n> +\n> + buf_state = LockBufHdr(bufHdr);\n> +\n> + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> \n> I think a pre-check for RelFileNode might be better before LockBufHdr\n> for the reasons mentioned in this function a few lines down.\n\nThe equivalent check is already done by BufTableLookup(). The last\nline in the above is not a precheck but the final check.\n\n> 3.\n> -DropRelFileNodeBuffers(RelFileNodeBackend rnode, ForkNumber *forkNum,\n> +DropRelFileNodeBuffers(SMgrRelation smgr_reln, ForkNumber *forkNum,\n> int nforks, BlockNumber *firstDelBlock)\n> {\n> int i;\n> int j;\n> + RelFileNodeBackend rnode;\n> + bool accurate;\n> \n> It is better to initialize accurate with false. Again, is it better to\n> change this variable name as 'cached'.\n\n*I* agree with the initialization.\n\n> 4.\n> + /*\n> + * Look up the buffer in the hashtable if the block size is known to\n> + * be accurate and the total blocks to be invalidated is below the\n> + * full scan threshold. 
Otherwise, give up the optimization.\n> + */\n> + if (accurate && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n> + {\n> + for (j = 0; j < nforks; j++)\n> + {\n> + BlockNumber curBlock;\n> +\n> + for (curBlock = firstDelBlock[j]; curBlock < nForkBlocks[j]; curBlock++)\n> + {\n> + uint32 bufHash; /* hash value for tag */\n> + BufferTag bufTag; /* identity of requested block */\n> + LWLock *bufPartitionLock; /* buffer partition lock for it */\n> + int buf_id;\n> +\n> + /* create a tag so we can lookup the buffer */\n> + INIT_BUFFERTAG(bufTag, rnode.node, forkNum[j], curBlock);\n> +\n> + /* determine its hash code and partition lock ID */\n> + bufHash = BufTableHashCode(&bufTag);\n> + bufPartitionLock = BufMappingPartitionLock(bufHash);\n> +\n> + /* Check that it is in the buffer pool. If not, do nothing. */\n> + LWLockAcquire(bufPartitionLock, LW_SHARED);\n> + buf_id = BufTableLookup(&bufTag, bufHash);\n> + LWLockRelease(bufPartitionLock);\n> +\n> + if (buf_id < 0)\n> + continue;\n> +\n> + bufHdr = GetBufferDescriptor(buf_id);\n> +\n> + buf_state = LockBufHdr(bufHdr);\n> +\n> + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> + bufHdr->tag.forkNum == forkNum[j] &&\n> + bufHdr->tag.blockNum >= firstDelBlock[j])\n> + InvalidateBuffer(bufHdr); /* releases spinlock */\n> + else\n> + UnlockBufHdr(bufHdr, buf_state);\n> + }\n> + }\n> + return;\n> + }\n> \n> Can we move the code under this 'if' condition to a separate function,\n> say FindAndDropRelFileNodeBuffers or something like that?\n\nThinking about the TRUNCATE optimization, it sounds reasonable to have\na separate function, which runs the optmized dropping unconditionally.\n\n> v29-0004-TRUNCATE-optimization\n> ------------------------------------------------\n> 5.\n> + for (i = 0; i < n; i++)\n> + {\n> + nforks = 0;\n> + nBlocksToInvalidate = 0;\n> +\n> + for (j = 0; j <= MAX_FORKNUM; j++)\n> + {\n> + if (!smgrexists(rels[i], j))\n> + continue;\n> +\n> + /* Get the number of blocks for a relation's 
fork */\n> + nblocks = smgrnblocks(rels[i], j, NULL);\n> +\n> + nBlocksToInvalidate += nblocks;\n> +\n> + forks[nforks++] = j;\n> + }\n> + if (nBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)\n> goto\n> + buffer_full_scan;\n> +\n> + DropRelFileNodeBuffers(rels[i], forks, nforks, firstDelBlocks); }\n> + pfree(nodes); pfree(rels); pfree(rnodes); return;\n> \n> I think this can be slower than the current Truncate. Say there are\n> three relations and for one of them the size is greater than\n> BUF_DROP_FULL_SCAN_THRESHOLD; then you would anyway have to scan the\n> entire shared buffers, so the work done in the optimized path for the other two\n> relations will add some overhead.\n\nThat's true. The criterion here is the number of blocks of all\nrelations. And even if all of the relations are smaller than the\nthreshold, we should go to the full-scan dropping if the total size\nexceeds the threshold. So we cannot reuse DropRelFileNodeBuffers() as\nis here.\n\n> Also, as written, I think you need to remove the nodes for which you\n> have invalidated the buffers via the optimized path, no?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 05 Nov 2020 10:22:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, November 5, 2020 10:22 AM, Horiguchi-san wrote:\n> Hello.\n> \n> Many of the questions are on the code following my past suggestions.\n\nYeah, I was also about to answer with the feedback you have given.\nThank you for replying and taking a look too.\n\n> At Wed, 4 Nov 2020 15:59:17 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > On Wed, Nov 4, 2020 at 8:28 AM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I've updated the patch 0004 (Truncate optimization) with the\n> > > previous comments of Tsunakawa-san already addressed in the patch.\n> > > (Thank you very 
much for the review.) The change here compared to\n> > > the previous version is that in DropRelFileNodesAllBuffers() we don't\n> check for the accurate flag anymore when deciding whether to optimize or\n> not.\n> > > For relations with blocks that do not exceed the threshold for full\n> > > scan, we call DropRelFileNodeBuffers where the flag will be checked\n> > > anyway. Otherwise, we proceed to the traditional buffer scan. Thoughts?\n> > >\n> >\n> > Can we do a Truncate optimization once we decide about your other\n> > patch as I see a few problems with it? If we can get the first patch\n> > (vacuum optimization) committed it might be a bit easier for us to get\n> > the truncate optimization. If possible, let's focus on (auto)vacuum\n> > optimization first.\n\nSure. That'd be better.\n\n> > Few comments on patches:\n> > ======================\n> > v29-0002-Add-bool-param-in-smgrnblocks-for-cached-blocks\n> > ----------------------------------------------------------------------\n> > -------------\n> > 1.\n> > -smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n> > +smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n> > {\n> > BlockNumber result;\n> >\n> > /*\n> > * For now, we only use cached values in recovery due to lack of a\n> > shared\n> > - * invalidation mechanism for changes in file size.\n> > + * invalidation mechanism for changes in file size. The cached\n> > + values\n> > + * could be smaller than the actual number of existing buffers of the file.\n> > + * This is caused by lseek of buggy Linux kernels that might not have\n> > + * accounted for the recent write.\n> > */\n> > if (InRecovery && reln->smgr_cached_nblocks[forknum] !=\n> > InvalidBlockNumber)\n> > + {\n> > + if (accurate != NULL)\n> > + *accurate = true;\n> > +\n> >\n> > I don't understand this comment. Few emails back, I think we have\n> > discussed that cached value can't be less than the number of buffers\n> > during recovery. 
If that happens to be true then we have some problem.\n> > If you want to explain 'accurate' variable then you can do the same\n> > atop of function. Would it be better to name this variable as\n> > 'cached'?\n> \n> (I agree that the comment needs to be fixed.)\n> \n> FWIW I don't think 'cached' suggests the characteristics of the returned value\n> on its interface. It was introduced to reduce fseek() calls, and after that we\n> have found that it can be regarded as the authoritative source of the file size.\n> The \"accurate\" means that it is guaranteed that we don't have a buffer for the\n> file blocks further than that number. I don't come up with a more proper\n> word than \"accurate\" but also I don't think \"cached\" is proper here.\n\nI also couldn't think of a better parameter name. Accurate seems to be better fit\nas it describes a measurement close to an accepted value.\nHow about fixing the comment like below, would this suffice?\n\n/*\n *\tsmgrnblocks() -- Calculate the number of blocks in the\n *\t\t\t\t\t supplied relation.\n *\n*\t\taccurate flag acts as an authoritative source of the file size and\n *\t\tensures that no buffers exist for blocks after the file size is known\n *\t\tto the process.\n */\nBlockNumber\nsmgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n{\n\tBlockNumber result;\n\n\t/*\n\t * For now, we only use cached values in recovery due to lack of a shared\n\t * invalidation mechanism for changes in file size. In recovery, the cached\n\t * value returned by the first lseek could be smaller than the actual number\n\t * of existing buffers of the file, which is caused by buggy Linux kernels\n\t * that might not have accounted for the recent write. 
However, we can\n\t * still rely on the cached value even if we get a bogus value from first\n\t * lseek since it is impossible to have buffer for blocks after the file size.\n\t */\n\n\n> By the way, if there's a case where we extend a file by more than one block the\n> cached value becomes invalid. I'm not sure if it actually happens, but the\n> following sequence may lead to a problem. We need a protection for that\n> case.\n> \n> smgrnblocks() : cached n\n> truncate to n-5 : cached n=5\n> extend to m + 2 : cached invalid\n> (fsync failed)\n> smgrnblocks() : returns and cached n-5\n\nI am not sure if the patch should cover this or should be a separate thread altogether since\na number of functions also rely on the smgrnblocks(). But I'll take it into consideration.\n\n\n> > v29-0003-Optimize-DropRelFileNodeBuffers-during-recovery\n> > ----------------------------------------------------------------------\n> > ------------\n> > 2.\n> > + /* Check that it is in the buffer pool. If not, do nothing. */\n> > + LWLockAcquire(bufPartitionLock, LW_SHARED); buf_id =\n> > + BufTableLookup(&bufTag, bufHash); LWLockRelease(bufPartitionLock);\n> > +\n> > + if (buf_id < 0)\n> > + continue;\n> > +\n> > + bufHdr = GetBufferDescriptor(buf_id);\n> > +\n> > + buf_state = LockBufHdr(bufHdr);\n> > +\n> > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> >\n> > I think a pre-check for RelFileNode might be better before LockBufHdr\n> > for the reasons mentioned in this function few lines down.\n> \n> The equivalent check is already done by BufTableLookup(). The last line in\n> the above is not a precheck but the final check.\n\nRight. 
So I'll retain that current code.\n\n> > 3.\n> > -DropRelFileNodeBuffers(RelFileNodeBackend rnode, ForkNumber\n> *forkNum,\n> > +DropRelFileNodeBuffers(SMgrRelation smgr_reln, ForkNumber\n> *forkNum,\n> > int nforks, BlockNumber *firstDelBlock) {\n> > int i;\n> > int j;\n> > + RelFileNodeBackend rnode;\n> > + bool accurate;\n> >\n> > It is better to initialize accurate with false. Again, is it better to\n> > change this variable name as 'cached'.\n> \n> *I* agree to initilization.\n\nUnderstood. I'll include only the initialization in the next updated patch.\n\n\n> > 4.\n> > + /*\n> > + * Look up the buffer in the hashtable if the block size is known to\n> > + * be accurate and the total blocks to be invalidated is below the\n> > + * full scan threshold. Otherwise, give up the optimization.\n> > + */\n> > + if (accurate && nBlocksToInvalidate <\n> BUF_DROP_FULL_SCAN_THRESHOLD)\n> > + { for (j = 0; j < nforks; j++) { BlockNumber curBlock;\n> > +\n> > + for (curBlock = firstDelBlock[j]; curBlock < nForkBlocks[j];\n> > + curBlock++) {\n> > + uint32 bufHash; /* hash value for tag */ BufferTag bufTag; /*\n> > + identity of requested block */\n> > + LWLock *bufPartitionLock; /* buffer partition lock for it */\n> > + int buf_id;\n> > +\n> > + /* create a tag so we can lookup the buffer */\n> > + INIT_BUFFERTAG(bufTag, rnode.node, forkNum[j], curBlock);\n> > +\n> > + /* determine its hash code and partition lock ID */ bufHash =\n> > + BufTableHashCode(&bufTag); bufPartitionLock =\n> > + BufMappingPartitionLock(bufHash);\n> > +\n> > + /* Check that it is in the buffer pool. If not, do nothing. 
*/\n> > + LWLockAcquire(bufPartitionLock, LW_SHARED); buf_id =\n> > + BufTableLookup(&bufTag, bufHash); LWLockRelease(bufPartitionLock);\n> > +\n> > + if (buf_id < 0)\n> > + continue;\n> > +\n> > + bufHdr = GetBufferDescriptor(buf_id);\n> > +\n> > + buf_state = LockBufHdr(bufHdr);\n> > +\n> > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> > + bufHdr->tag.forkNum == forkNum[j] && tag.blockNum >=\n> > + bufHdr->firstDelBlock[j])\n> > + InvalidateBuffer(bufHdr); /* releases spinlock */ else\n> > + UnlockBufHdr(bufHdr, buf_state); } } return; }\n> >\n> > Can we move the code under this 'if' condition to a separate function,\n> > say FindAndDropRelFileNodeBuffers or something like that?\n> \n> Thinking about the TRUNCATE optimization, it sounds reasonable to have a\n> separate function, which runs the optmized dropping unconditionally.\n\nHmm, sure., although only DropRelFileNodeBuffers() would call the new function.\nI guess it won't be a problem. \n\n\n> > v29-0004-TRUNCATE-optimization\n> > ------------------------------------------------\n> > 5.\n> > + for (i = 0; i < n; i++)\n> > + {\n> > + nforks = 0;\n> > + nBlocksToInvalidate = 0;\n> > +\n> > + for (j = 0; j <= MAX_FORKNUM; j++)\n> > + {\n> > + if (!smgrexists(rels[i], j))\n> > + continue;\n> > +\n> > + /* Get the number of blocks for a relation's fork */ nblocks =\n> > + smgrnblocks(rels[i], j, NULL);\n> > +\n> > + nBlocksToInvalidate += nblocks;\n> > +\n> > + forks[nforks++] = j;\n> > + }\n> > + if (nBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)\n> goto\n> > + buffer_full_scan;\n> > +\n> > + DropRelFileNodeBuffers(rels[i], forks, nforks, firstDelBlocks); }\n> > + pfree(nodes); pfree(rels); pfree(rnodes); return;\n> >\n> > I think this can be slower than the current Truncate. 
Say there are\n> > BUF_DROP_FULL_SCAN_THRESHOLD then you would anyway have to\n> scan the\n> > entire shared buffers so the work done in optimized path for other two\n> > relations will add some over head.\n>\n> That's true. The criteria here is the number of blocks of all relations. And\n> even if all of the relations is smaller than the threshold, we should go to the\n> full-scan dropping if the total size exceeds the threshold. So we cannot\n> reuse DropRelFileNodeBuffers() as is here.\n> > Also, as written, I think you need to remove the nodes for which you\n> > have invalidated the buffers via optimized path, no.\n\nRight, in the current patch it is indeed slower.\nBut the decision criteria whether to optimize or not is decided per relation,\nnot for all relations. So there is a possibility that we have already invalidated\nbuffers of the first relation, but the next relation buffers exceed the threshold that we\nneed to do the full scan. So yes that should be fixed. Remove the nodes that we\nhave already invalidated so that we don't scan them anymore when scanning NBuffers.\nI will fix in the next version.\n\nThank you for the helpful feedback. I'll upload the updated set of patches soon\nalso when we reach a consensus on the boolean parameter name too.\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Thu, 5 Nov 2020 02:56:35 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Nov 5, 2020 at 8:26 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Thursday, November 5, 2020 10:22 AM, Horiguchi-san wrote:\n> > >\n> > > Can we do a Truncate optimization once we decide about your other\n> > > patch as I see a few problems with it? If we can get the first patch\n> > > (vacuum optimization) committed it might be a bit easier for us to get\n> > > the truncate optimization. 
If possible, let's focus on (auto)vacuum\n> > > optimization first.\n>\n> Sure. That'd be better.\n>\n\nOkay, thanks.\n\n> > > Few comments on patches:\n> > > ======================\n> > > v29-0002-Add-bool-param-in-smgrnblocks-for-cached-blocks\n> > > ----------------------------------------------------------------------\n> > > -------------\n> > > 1.\n> > > -smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n> > > +smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n> > > {\n> > > BlockNumber result;\n> > >\n> > > /*\n> > > * For now, we only use cached values in recovery due to lack of a\n> > > shared\n> > > - * invalidation mechanism for changes in file size.\n> > > + * invalidation mechanism for changes in file size. The cached\n> > > + values\n> > > + * could be smaller than the actual number of existing buffers of the file.\n> > > + * This is caused by lseek of buggy Linux kernels that might not have\n> > > + * accounted for the recent write.\n> > > */\n> > > if (InRecovery && reln->smgr_cached_nblocks[forknum] !=\n> > > InvalidBlockNumber)\n> > > + {\n> > > + if (accurate != NULL)\n> > > + *accurate = true;\n> > > +\n> > >\n> > > I don't understand this comment. Few emails back, I think we have\n> > > discussed that cached value can't be less than the number of buffers\n> > > during recovery. If that happens to be true then we have some problem.\n> > > If you want to explain 'accurate' variable then you can do the same\n> > > atop of function. Would it be better to name this variable as\n> > > 'cached'?\n> >\n> > (I agree that the comment needs to be fixed.)\n> >\n> > FWIW I don't think 'cached' suggests the characteristics of the returned value\n> > on its interface. It was introduced to reduce fseek() calls, and after that we\n> > have found that it can be regarded as the authoritative source of the file size.\n> > The \"accurate\" means that it is guaranteed that we don't have a buffer for the\n> > file blocks further than that number. 
I don't come up with a more proper\n> > word than \"accurate\" but also I don't think \"cached\" is proper here.\n>\n\nSure but that is not the guarantee this API gives. It has to be\nguaranteed by the logic else-where, so not sure if it is a good idea\nto try to reflect the same here. The comments in the caller where we\nuse this should explain why it is safe to use this value.\n\n\n> I also couldn't think of a better parameter name. Accurate seems to be better fit\n> as it describes a measurement close to an accepted value.\n> How about fixing the comment like below, would this suffice?\n>\n> /*\n> * smgrnblocks() -- Calculate the number of blocks in the\n> * supplied relation.\n> *\n> * accurate flag acts as an authoritative source of the file size and\n> * ensures that no buffers exist for blocks after the file size is known\n> * to the process.\n> */\n> BlockNumber\n> smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n> {\n> BlockNumber result;\n>\n> /*\n> * For now, we only use cached values in recovery due to lack of a shared\n> * invalidation mechanism for changes in file size. In recovery, the cached\n> * value returned by the first lseek could be smaller than the actual number\n> * of existing buffers of the file, which is caused by buggy Linux kernels\n> * that might not have accounted for the recent write. However, we can\n> * still rely on the cached value even if we get a bogus value from first\n> * lseek since it is impossible to have buffer for blocks after the file size.\n> */\n>\n>\n> > By the way, if there's a case where we extend a file by more than one block the\n> > cached value becomes invalid. I'm not sure if it actually happens, but the\n> > following sequence may lead to a problem. 
We need a protection for that\n> > case.\n> >\n> > smgrnblocks() : cached n\n> > truncate to n-5 : cached n=5\n> > extend to m + 2 : cached invalid\n> > (fsync failed)\n> > smgrnblocks() : returns and cached n-5\n>\n\nI think one possible idea is to actually commit the Assert patch\n(v29-0001-Prevent-invalidating-blocks-in-smgrextend-during) to ensure\nthat it can't happen during recovery. And even if it happens why would\nthere be any buffer with the block in it left when the fsync failed?\nAnd if there is no buffer with a block which doesn't account due to\nlseek lies then there shouldn't be any problem. Do you have any other\nideas on what better can be done here?\n\n> I am not sure if the patch should cover this or should be a separate thread altogether since\n> a number of functions also rely on the smgrnblocks(). But I'll take it into consideration.\n>\n>\n> > > v29-0003-Optimize-DropRelFileNodeBuffers-during-recovery\n> > > ----------------------------------------------------------------------\n> > > ------------\n> > > 2.\n> > > + /* Check that it is in the buffer pool. If not, do nothing. */\n> > > + LWLockAcquire(bufPartitionLock, LW_SHARED); buf_id =\n> > > + BufTableLookup(&bufTag, bufHash); LWLockRelease(bufPartitionLock);\n> > > +\n> > > + if (buf_id < 0)\n> > > + continue;\n> > > +\n> > > + bufHdr = GetBufferDescriptor(buf_id);\n> > > +\n> > > + buf_state = LockBufHdr(bufHdr);\n> > > +\n> > > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> > >\n> > > I think a pre-check for RelFileNode might be better before LockBufHdr\n> > > for the reasons mentioned in this function few lines down.\n> >\n> > The equivalent check is already done by BufTableLookup(). The last line in\n> > the above is not a precheck but the final check.\n>\n\nWhich check in that API you are talking about? 
Are you telling because\nwe are trying to use a hash value corresponding to rnode.node to find\nthe block then I don't think it is equivalent because there is a\ndifference in actual values. But even if we want to rely on that, a\ncomment is required but I guess we can do the check as well because it\nshouldn't be a costly pre-check.\n\n>\n> > > 4.\n> > > + /*\n> > > + * Look up the buffer in the hashtable if the block size is known to\n> > > + * be accurate and the total blocks to be invalidated is below the\n> > > + * full scan threshold. Otherwise, give up the optimization.\n> > > + */\n> > > + if (accurate && nBlocksToInvalidate <\n> > BUF_DROP_FULL_SCAN_THRESHOLD)\n> > > + { for (j = 0; j < nforks; j++) { BlockNumber curBlock;\n> > > +\n> > > + for (curBlock = firstDelBlock[j]; curBlock < nForkBlocks[j];\n> > > + curBlock++) {\n> > > + uint32 bufHash; /* hash value for tag */ BufferTag bufTag; /*\n> > > + identity of requested block */\n> > > + LWLock *bufPartitionLock; /* buffer partition lock for it */\n> > > + int buf_id;\n> > > +\n> > > + /* create a tag so we can lookup the buffer */\n> > > + INIT_BUFFERTAG(bufTag, rnode.node, forkNum[j], curBlock);\n> > > +\n> > > + /* determine its hash code and partition lock ID */ bufHash =\n> > > + BufTableHashCode(&bufTag); bufPartitionLock =\n> > > + BufMappingPartitionLock(bufHash);\n> > > +\n> > > + /* Check that it is in the buffer pool. If not, do nothing. 
*/\n> > > + LWLockAcquire(bufPartitionLock, LW_SHARED); buf_id =\n> > > + BufTableLookup(&bufTag, bufHash); LWLockRelease(bufPartitionLock);\n> > > +\n> > > + if (buf_id < 0)\n> > > + continue;\n> > > +\n> > > + bufHdr = GetBufferDescriptor(buf_id);\n> > > +\n> > > + buf_state = LockBufHdr(bufHdr);\n> > > +\n> > > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> > > + bufHdr->tag.forkNum == forkNum[j] && tag.blockNum >=\n> > > + bufHdr->firstDelBlock[j])\n> > > + InvalidateBuffer(bufHdr); /* releases spinlock */ else\n> > > + UnlockBufHdr(bufHdr, buf_state); } } return; }\n> > >\n> > > Can we move the code under this 'if' condition to a separate function,\n> > > say FindAndDropRelFileNodeBuffers or something like that?\n> >\n> > Thinking about the TRUNCATE optimization, it sounds reasonable to have a\n> > separate function, which runs the optmized dropping unconditionally.\n>\n> Hmm, sure., although only DropRelFileNodeBuffers() would call the new function.\n> I guess it won't be a problem.\n>\n\nThat shouldn't be a problem, you can make it a static function. 
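[Editor's note: for readers following the refactoring discussion, the lookup-then-recheck pattern that would move into the proposed static helper can be sketched as a standalone toy model. This is deliberately not PostgreSQL's bufmgr code: the descriptor layout, the linear lookup() standing in for BufTableLookup(), and the absence of any locking are simplifying assumptions for illustration only.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-ins for PostgreSQL's buffer tag and buffer descriptor. */
typedef struct BufTag { int rnode; int forknum; unsigned blocknum; } BufTag;
typedef struct BufDesc { BufTag tag; bool valid; } BufDesc;

#define NBUFFERS 16
static BufDesc pool[NBUFFERS];

/* Stands in for BufTableLookup(): find a valid buffer holding this tag. */
static int lookup(const BufTag *tag)
{
    for (int i = 0; i < NBUFFERS; i++)
        if (pool[i].valid &&
            pool[i].tag.rnode == tag->rnode &&
            pool[i].tag.forknum == tag->forknum &&
            pool[i].tag.blocknum == tag->blocknum)
            return i;
    return -1;
}

/*
 * Shape of the proposed helper: probe each to-be-dropped block
 * individually instead of scanning the whole pool, then re-check the
 * tag before invalidating (done under the buffer header lock in the
 * real, concurrent code).
 */
static void find_and_drop(int rnode, int forknum,
                          unsigned firstDelBlock, unsigned nForkBlocks)
{
    for (unsigned blk = firstDelBlock; blk < nForkBlocks; blk++)
    {
        BufTag tag = { rnode, forknum, blk };
        int    id = lookup(&tag);

        if (id < 0)
            continue;       /* not in the buffer pool: nothing to do */
        if (pool[id].tag.rnode == rnode &&
            pool[id].tag.forknum == forknum &&
            pool[id].tag.blocknum >= firstDelBlock)
            pool[id].valid = false;     /* InvalidateBuffer() */
    }
}
```

The final tag check is kept in the toy to mirror the structure of the real code, where the mapping can change between the lookup and the invalidation under concurrency.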
It is\nmore from the code-readability perspective.\n\n>\n> > > v29-0004-TRUNCATE-optimization\n> > > ------------------------------------------------\n> > > 5.\n> > > + for (i = 0; i < n; i++)\n> > > + {\n> > > + nforks = 0;\n> > > + nBlocksToInvalidate = 0;\n> > > +\n> > > + for (j = 0; j <= MAX_FORKNUM; j++)\n> > > + {\n> > > + if (!smgrexists(rels[i], j))\n> > > + continue;\n> > > +\n> > > + /* Get the number of blocks for a relation's fork */ nblocks =\n> > > + smgrnblocks(rels[i], j, NULL);\n> > > +\n> > > + nBlocksToInvalidate += nblocks;\n> > > +\n> > > + forks[nforks++] = j;\n> > > + }\n> > > + if (nBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)\n> > goto\n> > > + buffer_full_scan;\n> > > +\n> > > + DropRelFileNodeBuffers(rels[i], forks, nforks, firstDelBlocks); }\n> > > + pfree(nodes); pfree(rels); pfree(rnodes); return;\n> > >\n> > > I think this can be slower than the current Truncate. Say there are\n> > > BUF_DROP_FULL_SCAN_THRESHOLD then you would anyway have to\n> > scan the\n> > > entire shared buffers so the work done in optimized path for other two\n> > > relations will add some over head.\n> >\n> > That's true. The criteria here is the number of blocks of all relations. And\n> > even if all of the relations is smaller than the threshold, we should go to the\n> > full-scan dropping if the total size exceeds the threshold. So we cannot\n> > reuse DropRelFileNodeBuffers() as is here.\n> > > Also, as written, I think you need to remove the nodes for which you\n> > > have invalidated the buffers via optimized path, no.\n>\n> Right, in the current patch it is indeed slower.\n> But the decision criteria whether to optimize or not is decided per relation,\n> not for all relations. So there is a possibility that we have already invalidated\n> buffers of the first relation, but the next relation buffers exceed the threshold that we\n> need to do the full scan. So yes that should be fixed. 
Remove the nodes that we\n> have already invalidated so that we don't scan them anymore when scanning NBuffers.\n> I will fix in the next version.\n>\n> Thank you for the helpful feedback. I'll upload the updated set of patches soon\n> also when we reach a consensus on the boolean parameter name too.\n>\n\nSure, but feel free to leave the truncate optimization patch for now,\nwe can do that as a follow-up patch once the vacuum-optimization patch\nis committed. Horiguchi-San, are you fine with this approach?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Nov 2020 11:07:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 5 Nov 2020 11:07:21 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Nov 5, 2020 at 8:26 AM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n> > > > Few comments on patches:\n> > > > ======================\n> > > > v29-0002-Add-bool-param-in-smgrnblocks-for-cached-blocks\n> > > > ----------------------------------------------------------------------\n> > > > -------------\n> > > > 1.\n> > > > -smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n> > > > +smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n> > > > {\n> > > > BlockNumber result;\n> > > >\n> > > > /*\n> > > > * For now, we only use cached values in recovery due to lack of a\n> > > > shared\n> > > > - * invalidation mechanism for changes in file size.\n> > > > + * invalidation mechanism for changes in file size. 
The cached\n> > > > + values\n> > > > + * could be smaller than the actual number of existing buffers of the file.\n> > > > + * This is caused by lseek of buggy Linux kernels that might not have\n> > > > + * accounted for the recent write.\n> > > > */\n> > > > if (InRecovery && reln->smgr_cached_nblocks[forknum] !=\n> > > > InvalidBlockNumber)\n> > > > + {\n> > > > + if (accurate != NULL)\n> > > > + *accurate = true;\n> > > > +\n> > > >\n> > > > I don't understand this comment. Few emails back, I think we have\n> > > > discussed that cached value can't be less than the number of buffers\n> > > > during recovery. If that happens to be true then we have some problem.\n> > > > If you want to explain 'accurate' variable then you can do the same\n> > > > atop of function. Would it be better to name this variable as\n> > > > 'cached'?\n> > >\n> > > (I agree that the comment needs to be fixed.)\n> > >\n> > > FWIW I don't think 'cached' suggests the characteristics of the returned value\n> > > on its interface. It was introduced to reduce fseek() calls, and after that we\n> > > have found that it can be regarded as the authoritative source of the file size.\n> > > The \"accurate\" means that it is guaranteed that we don't have a buffer for the\n> > > file blocks further than that number. I don't come up with a more proper\n> > > word than \"accurate\" but also I don't think \"cached\" is proper here.\n> >\n> \n> Sure but that is not the guarantee this API gives. It has to be\n> guaranteed by the logic else-where, so not sure if it is a good idea\n> to try to reflect the same here. The comments in the caller where we\n> use this should explain why it is safe to use this value.\n\nIsn't it already guaranteed by the bugmgr code that we don't have\nbuffers for nonexistent file blocks? What is needed here is, yeah,\nthe returned value from smgrblocks is \"reliable\". 
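[Editor's note: to make the interface under discussion concrete, here is a minimal, compilable model of the smgrnblocks() behavior being described. This is a sketch only: the real function lives in smgr.c and takes a relation and fork number, and the field names below are invented for illustration.]

```c
#include <assert.h>
#include <stdbool.h>

#define InvalidBlockNumber ((unsigned) 0xFFFFFFFF)

typedef struct ToySMgr
{
    unsigned cached_nblocks;    /* models smgr_cached_nblocks[forknum] */
    unsigned lseek_nblocks;     /* what lseek(SEEK_END) would report */
} ToySMgr;

/*
 * Toy smgrnblocks(): in recovery, a previously cached size is returned
 * and the out-parameter reports that fact; otherwise fall back to the
 * (possibly lying) lseek result and remember it.
 */
static unsigned toy_smgrnblocks(ToySMgr *reln, bool in_recovery, bool *cached)
{
    if (cached)
        *cached = false;
    if (in_recovery && reln->cached_nblocks != InvalidBlockNumber)
    {
        if (cached)
            *cached = true;
        return reln->cached_nblocks;
    }
    reln->cached_nblocks = reln->lseek_nblocks;
    return reln->cached_nblocks;
}
```

The out-parameter (whatever it ends up being named) tells the caller whether the returned size came from the cache, which is what the optimized drop path needs, or from a fresh lseek that might under-report.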
If \"reliable\" is\nstill not proper, I give up and agree to \"cached\".\n\n> > I also couldn't think of a better parameter name. Accurate seems to be better fit\n> > as it describes a measurement close to an accepted value.\n> > How about fixing the comment like below, would this suffice?\n> >\n> > /*\n> > * smgrnblocks() -- Calculate the number of blocks in the\n> > * supplied relation.\n> > *\n> > * accurate flag acts as an authoritative source of the file size and\n> > * ensures that no buffers exist for blocks after the file size is known\n> > * to the process.\n> > */\n> > BlockNumber\n> > smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n> > {\n> > BlockNumber result;\n> >\n> > /*\n> > * For now, we only use cached values in recovery due to lack of a shared\n> > * invalidation mechanism for changes in file size. In recovery, the cached\n> > * value returned by the first lseek could be smaller than the actual number\n> > * of existing buffers of the file, which is caused by buggy Linux kernels\n> > * that might not have accounted for the recent write. However, we can\n> > * still rely on the cached value even if we get a bogus value from first\n> > * lseek since it is impossible to have buffer for blocks after the file size.\n> > */\n> >\n> >\n> > > By the way, if there's a case where we extend a file by more than one block the\n> > > cached value becomes invalid. I'm not sure if it actually happens, but the\n> > > following sequence may lead to a problem. We need a protection for that\n> > > case.\n> > >\n> > > smgrnblocks() : cached n\n> > > truncate to n-5 : cached n=5\n> > > extend to m + 2 : cached invalid\n> > > (fsync failed)\n> > > smgrnblocks() : returns and cached n-5\n> >\n> \n> I think one possible idea is to actually commit the Assert patch\n> (v29-0001-Prevent-invalidating-blocks-in-smgrextend-during) to ensure\n> that it can't happen during recovery. 
And even if it happens why would\n> there be any buffer with the block in it left when the fsync failed?\n> And if there is no buffer with a block which doesn't account due to\n> lseek lies then there shouldn't be any problem. Do you have any other\n> ideas on what better can be done here?\n\nOuch! Sorry for the confusion. I confused that patch touches the\ntruncation side. Yes the 0001 does that.\n\n> > I am not sure if the patch should cover this or should be a separate thread altogether since\n> > a number of functions also rely on the smgrnblocks(). But I'll take it into consideration.\n> >\n> >\n> > > > v29-0003-Optimize-DropRelFileNodeBuffers-during-recovery\n> > > > ----------------------------------------------------------------------\n> > > > ------------\n> > > > 2.\n> > > > + /* Check that it is in the buffer pool. If not, do nothing. */\n> > > > + LWLockAcquire(bufPartitionLock, LW_SHARED); buf_id =\n> > > > + BufTableLookup(&bufTag, bufHash); LWLockRelease(bufPartitionLock);\n> > > > +\n> > > > + if (buf_id < 0)\n> > > > + continue;\n> > > > +\n> > > > + bufHdr = GetBufferDescriptor(buf_id);\n> > > > +\n> > > > + buf_state = LockBufHdr(bufHdr);\n> > > > +\n> > > > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> > > >\n> > > > I think a pre-check for RelFileNode might be better before LockBufHdr\n> > > > for the reasons mentioned in this function few lines down.\n> > >\n> > > The equivalent check is already done by BufTableLookup(). The last line in\n> > > the above is not a precheck but the final check.\n> >\n> \n> Which check in that API you are talking about? Are you telling because\n> we are trying to use a hash value corresponding to rnode.node to find\n> the block then I don't think it is equivalent because there is a\n> difference in actual values. 
But even if we want to rely on that, a\n> comment is required but I guess we can do the check as well because it\n> shouldn't be a costly pre-check.\n\nI think the only problematic case is that BufTableLookup wrongly\nmisses buffers actually to be dropped. (And the case of too-many\nfalse-positives, not critical though.) If omission is the case, we\ncannot adopt this optimization at all. And if the false-positive is\nthe case, maybe we need to adopt more restrictive prechecking, but\nRelFileNodeEquals is *not* more restrictive than BufTableLookup in the\nfirst place.\n\nWhat case do you think is problematic when considering\nBufTableLookup() as the prechecking?\n\n> > > > 4.\n> > > > + /*\n> > > > + * Look up the buffer in the hashtable if the block size is known to\n> > > > + * be accurate and the total blocks to be invalidated is below the\n> > > > + * full scan threshold. Otherwise, give up the optimization.\n> > > > + */\n> > > > + if (accurate && nBlocksToInvalidate <\n> > > BUF_DROP_FULL_SCAN_THRESHOLD)\n> > > > + { for (j = 0; j < nforks; j++) { BlockNumber curBlock;\n> > > > +\n> > > > + for (curBlock = firstDelBlock[j]; curBlock < nForkBlocks[j];\n> > > > + curBlock++) {\n> > > > + uint32 bufHash; /* hash value for tag */ BufferTag bufTag; /*\n> > > > + identity of requested block */\n> > > > + LWLock *bufPartitionLock; /* buffer partition lock for it */\n> > > > + int buf_id;\n> > > > +\n> > > > + /* create a tag so we can lookup the buffer */\n> > > > + INIT_BUFFERTAG(bufTag, rnode.node, forkNum[j], curBlock);\n> > > > +\n> > > > + /* determine its hash code and partition lock ID */ bufHash =\n> > > > + BufTableHashCode(&bufTag); bufPartitionLock =\n> > > > + BufMappingPartitionLock(bufHash);\n> > > > +\n> > > > + /* Check that it is in the buffer pool. If not, do nothing. 
*/\n> > > > + LWLockAcquire(bufPartitionLock, LW_SHARED); buf_id =\n> > > > + BufTableLookup(&bufTag, bufHash); LWLockRelease(bufPartitionLock);\n> > > > +\n> > > > + if (buf_id < 0)\n> > > > + continue;\n> > > > +\n> > > > + bufHdr = GetBufferDescriptor(buf_id);\n> > > > +\n> > > > + buf_state = LockBufHdr(bufHdr);\n> > > > +\n> > > > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> > > > + bufHdr->tag.forkNum == forkNum[j] && tag.blockNum >=\n> > > > + bufHdr->firstDelBlock[j])\n> > > > + InvalidateBuffer(bufHdr); /* releases spinlock */ else\n> > > > + UnlockBufHdr(bufHdr, buf_state); } } return; }\n> > > >\n> > > > Can we move the code under this 'if' condition to a separate function,\n> > > > say FindAndDropRelFileNodeBuffers or something like that?\n> > >\n> > > Thinking about the TRUNCATE optimization, it sounds reasonable to have a\n> > > separate function, which runs the optmized dropping unconditionally.\n> >\n> > Hmm, sure., although only DropRelFileNodeBuffers() would call the new function.\n> > I guess it won't be a problem.\n> >\n> \n> That shouldn't be a problem, you can make it a static function. It is\n> more from the code-readability perspective.\n\n> Sure, but feel free to leave the truncate optimization patch for now,\n> we can do that as a follow-up patch once the vacuum-optimization patch\n> is committed. Horiguchi-San, are you fine with this approach?\n\nOf course. 
I don't think we have to commit the two at once at all.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 05 Nov 2020 17:29:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Nov 5, 2020 at 1:59 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 5 Nov 2020 11:07:21 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Thu, Nov 5, 2020 at 8:26 AM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> > > > > Few comments on patches:\n> > > > > ======================\n> > > > > v29-0002-Add-bool-param-in-smgrnblocks-for-cached-blocks\n> > > > > ----------------------------------------------------------------------\n> > > > > -------------\n> > > > > 1.\n> > > > > -smgrnblocks(SMgrRelation reln, ForkNumber forknum)\n> > > > > +smgrnblocks(SMgrRelation reln, ForkNumber forknum, bool *accurate)\n> > > > > {\n> > > > > BlockNumber result;\n> > > > >\n> > > > > /*\n> > > > > * For now, we only use cached values in recovery due to lack of a\n> > > > > shared\n> > > > > - * invalidation mechanism for changes in file size.\n> > > > > + * invalidation mechanism for changes in file size. The cached\n> > > > > + values\n> > > > > + * could be smaller than the actual number of existing buffers of the file.\n> > > > > + * This is caused by lseek of buggy Linux kernels that might not have\n> > > > > + * accounted for the recent write.\n> > > > > */\n> > > > > if (InRecovery && reln->smgr_cached_nblocks[forknum] !=\n> > > > > InvalidBlockNumber)\n> > > > > + {\n> > > > > + if (accurate != NULL)\n> > > > > + *accurate = true;\n> > > > > +\n> > > > >\n> > > > > I don't understand this comment. Few emails back, I think we have\n> > > > > discussed that cached value can't be less than the number of buffers\n> > > > > during recovery. 
If that happens to be true then we have some problem.\n> > > > > If you want to explain 'accurate' variable then you can do the same\n> > > > > atop of function. Would it be better to name this variable as\n> > > > > 'cached'?\n> > > >\n> > > > (I agree that the comment needs to be fixed.)\n> > > >\n> > > > FWIW I don't think 'cached' suggests the characteristics of the returned value\n> > > > on its interface. It was introduced to reduce fseek() calls, and after that we\n> > > > have found that it can be regarded as the authoritative source of the file size.\n> > > > The \"accurate\" means that it is guaranteed that we don't have a buffer for the\n> > > > file blocks further than that number. I don't come up with a more proper\n> > > > word than \"accurate\" but also I don't think \"cached\" is proper here.\n> > >\n> >\n> > Sure but that is not the guarantee this API gives. It has to be\n> > guaranteed by the logic else-where, so not sure if it is a good idea\n> > to try to reflect the same here. The comments in the caller where we\n> > use this should explain why it is safe to use this value.\n>\n> Isn't it already guaranteed by the bugmgr code that we don't have\n> buffers for nonexistent file blocks? What is needed here is, yeah,\n> the returned value from smgrblocks is \"reliable\". If \"reliable\" is\n> still not proper, I give up and agree to \"cached\".\n>\n\n\nI still feel 'cached' is a better name.\n\n>\n> > > I am not sure if the patch should cover this or should be a separate thread altogether since\n> > > a number of functions also rely on the smgrnblocks(). But I'll take it into consideration.\n> > >\n> > >\n> > > > > v29-0003-Optimize-DropRelFileNodeBuffers-during-recovery\n> > > > > ----------------------------------------------------------------------\n> > > > > ------------\n> > > > > 2.\n> > > > > + /* Check that it is in the buffer pool. If not, do nothing. 
*/\n> > > > > + LWLockAcquire(bufPartitionLock, LW_SHARED); buf_id =\n> > > > > + BufTableLookup(&bufTag, bufHash); LWLockRelease(bufPartitionLock);\n> > > > > +\n> > > > > + if (buf_id < 0)\n> > > > > + continue;\n> > > > > +\n> > > > > + bufHdr = GetBufferDescriptor(buf_id);\n> > > > > +\n> > > > > + buf_state = LockBufHdr(bufHdr);\n> > > > > +\n> > > > > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> > > > >\n> > > > > I think a pre-check for RelFileNode might be better before LockBufHdr\n> > > > > for the reasons mentioned in this function few lines down.\n> > > >\n> > > > The equivalent check is already done by BufTableLookup(). The last line in\n> > > > the above is not a precheck but the final check.\n> > >\n> >\n> > Which check in that API you are talking about? Are you telling because\n> > we are trying to use a hash value corresponding to rnode.node to find\n> > the block then I don't think it is equivalent because there is a\n> > difference in actual values. But even if we want to rely on that, a\n> > comment is required but I guess we can do the check as well because it\n> > shouldn't be a costly pre-check.\n>\n> I think the only problematic case is that BufTableLookup wrongly\n> misses buffers actually to be dropped. (And the case of too-many\n> false-positives, not critical though.) If omission is the case, we\n> cannot adopt this optimization at all. 
And if the false-positive is\n> the case, maybe we need to adopt more restrictive prechecking, but\n> RelFileNodeEquals is *not* more restrictive than BufTableLookup in the\n> first place.\n>\n> What case do you think is problematic when considering\n> BufTableLookup() as the prechecking?\n>\n\nI was slightly worried about false-positives but on again thinking\nabout it, I think we don't need any additional pre-check here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Nov 2020 15:18:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Nov 5, 2020 at 10:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I still feel 'cached' is a better name.\n\nAmusingly, this thread is hitting all the hardest problems in computer\nscience according to the well known aphorism...\n\nHere's a devil's advocate position I thought about: It's OK to leave\nstray buffers (clean or dirty) in the buffer pool if files are\ntruncated underneath us by gremlins, as long as your system eventually\ncrashes before completing a checkpoint. The OID can't be recycled\nuntil after a successful checkpoint, so the stray blocks can't be\nconfused with the blocks of another relation, and weird errors are\nexpected on a system that is in serious trouble. It's actually much\nworse that we can give incorrect answers to queries when files are\ntruncated by gremlins (in the window of time before we presumably\ncrash because of EIO), because we're violating basic ACID principles\nin user-visible ways. 
In this thread, discussion has focused on\navailability (ie avoiding failures when trying to write back stray\nbuffers to a file that has been unlinked), but really a system that\ncan't see arbitrary committed transactions *shouldn't be available*.\nThis argument applies whether you think SEEK_END can only give weird\nanswers in the specific scenario I demonstrated with NFS, or whether\nyou think it's arbitrarily b0rked and reports random numbers: we\nfundamentally can't tolerate that, so why are we trying to?\n\n\n", "msg_date": "Fri, 6 Nov 2020 12:31:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, October 22, 2020 3:15 PM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> I'm not sure about the exact steps of the test, but it can be expected if we\n> have many small relations to truncate.\n> \n> Currently BUF_DROP_FULL_SCAN_THRESHOLD is set to Nbuffers / 512,\n> which is quite arbitrary that comes from a wild guess.\n> \n> Perhaps we need to run benchmarks that drops one relation with several\n> different ratios between the number of buffers to-be-dropped and Nbuffers,\n> and preferably both on spinning rust and SSD.\n\nSorry to get back to you on this just now.\nSince we're prioritizing the vacuum patch, we also need to finalize which threshold value to use.\nI proceeded testing with my latest set of patches because Amit-san's comments on the code, the ones we addressed,\ndon't really affect the performance. 
I'll post the updated patches for 0002 & 0003 after we come up with the proper\nboolean parameter name for smgrnblocks and the buffer full scan threshold value.\n\nI tested the VACUUM performance with the following thresholds:\n    NBuffers/512, NBuffers/256, NBuffers/128,\nto determine which of the ratios has the best performance in terms of speed.\n\nI tested this on my machine (CPU 4v, 8GB memory, ext4) running on SSD.\nI configured a streaming replication environment:\nshared_buffers = 100GB\nautovacuum = off\nfull_page_writes = off\ncheckpoint_timeout = 30min\n\n[Steps]\n1. Create TABLE\n2. INSERT data\n3. DELETE from TABLE\n4. Pause WAL replay on standby\n5. VACUUM. Stop the primary.\n6. Resume WAL replay and promote standby.\n\nWith 1 relation, there were no significant changes that we can observe:\n(In seconds)\n| s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 | \n|-------|--------|--------------|--------------|--------------| \n| 128MB | 0.106 | 0.105 | 0.105 | 0.105 | \n| 100GB | 0.106 | 0.105 | 0.105 | 0.105 |\n\nSo I tested with 100 tables and got more convincing measurements:\n\n| s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 | \n|-------|--------|--------------|--------------|--------------| \n| 128MB | 1.006 | 1.007 | 1.006 | 0.107 | \n| 1GB | 0.706 | 0.606 | 0.606 | 0.605 | \n| 20GB | 1.907 | 0.606 | 0.606 | 0.605 | \n| 100GB | 7.013 | 0.706 | 0.606 | 0.607 |\n\nThe threshold NBuffers/128 has the best performance for the default shared_buffers (128MB)\nat 0.107 s, and performs equally well with large shared_buffers up to 100GB.\n\nWe can use NBuffers/128 for the threshold, although I don't have a measurement for HDD yet. \nHowever, I wonder if the above method would suffice to determine the final threshold that we\ncan use. If anyone has suggestions on how we can come up with the final value, like if I need\nto modify some steps above, I'd appreciate it.\n\nRegarding the parameter name. 
Instead of accurate, we can use \"cached\" as originally intended from\nthe early versions of the patch, since it is the smgr that handles smgrnblocks to get the block\nsize from smgr_cached_nblocks. \"accurate\" may confuse us because the cached value may not\nactually be accurate.\n\nRegards,\nKirk Jamison\n\n\n\n", "msg_date": "Fri, 6 Nov 2020 03:44:50 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Nov 6, 2020 at 5:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Nov 5, 2020 at 10:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I still feel 'cached' is a better name.\n>\n> Amusingly, this thread is hitting all the hardest problems in computer\n> science according to the well known aphorism...\n>\n> Here's a devil's advocate position I thought about: It's OK to leave\n> stray buffers (clean or dirty) in the buffer pool if files are\n> truncated underneath us by gremlins, as long as your system eventually\n> crashes before completing a checkpoint. The OID can't be recycled\n> until after a successful checkpoint, so the stray blocks can't be\n> confused with the blocks of another relation, and weird errors are\n> expected on a system that is in serious trouble. It's actually much\n> worse that we can give incorrect answers to queries when files are\n> truncated by gremlins (in the window of time before we presumably\n> crash because of EIO), because we're violating basic ACID principles\n> in user-visible ways. 
In this thread, discussion has focused on\n> availability (ie avoiding failures when trying to write back stray\n> buffers to a file that has been unlinked), but really a system that\n> can't see arbitrary committed transactions *shouldn't be available*.\n> This argument applies whether you think SEEK_END can only give weird\n> answers in the specific scenario I demonstrated with NFS, or whether\n> you think it's arbitrarily b0rked and reports random numbers: we\n> fundamentally can't tolerate that, so why are we trying to?\n>\n\nIt is not very clear to me how this argument applies to the patch\nunder discussion, where we are relying on the cached value of blocks\nduring recovery. I understand your point that we might skip scanning\nthe pages and thus might not show some recently added data, but that\npoint is not linked with what we are trying to do with this patch.\nAFAIU, the theory we discussed above is that there shouldn't be any\nstray blocks in the buffers with this patch because even if\nsmgrnblocks(SEEK_END) didn't give us the right answers, we shouldn't\nhave any buffers for the blocks after the size returned by smgrnblocks\nduring recovery. I think the problem could happen if we extend the\nrelation by multiple blocks, which will invalidate the cached value\nduring recovery, and then the future calls to smgrnblocks could\nlead to problems if it lies to us, but as far as I know we don't do\nthat. 
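To make the mechanism under discussion concrete, here is a minimal standalone sketch of the interface. This is illustrative only, not the actual smgr code: the field name smgr_cached_nblocks follows the thread, while the helper name and the stand-in for the lseek(SEEK_END) result are invented for the example.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/* Simplified stand-in for SMgrRelation; only what the sketch needs. */
typedef struct SMgrRelationData
{
	BlockNumber smgr_cached_nblocks;	/* last known size, or Invalid */
	BlockNumber physical_nblocks;		/* stand-in for lseek(SEEK_END) */
} SMgrRelationData;

/*
 * Return the relation size in blocks, reporting through *cached whether
 * the answer came from the in-memory cache (the value treated as
 * trustworthy during recovery) rather than from the filesystem.
 */
static BlockNumber
smgrnblocks_sketch(SMgrRelationData *reln, bool *cached)
{
	if (reln->smgr_cached_nblocks != InvalidBlockNumber)
	{
		*cached = true;
		return reln->smgr_cached_nblocks;
	}
	*cached = false;
	reln->smgr_cached_nblocks = reln->physical_nblocks;
	return reln->physical_nblocks;
}
```

The point of the boolean is that the caller can take the optimized per-block drop path only when *cached comes back true; otherwise it has to assume the size may be wrong and fall back to scanning the whole buffer pool.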
Can you please be more specific how this patch can lead to a\nproblem?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 6 Nov 2020 09:40:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Nov 6, 2020 at 5:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Nov 6, 2020 at 5:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's a devil's advocate position I thought about: It's OK to leave\n> > stray buffers (clean or dirty) in the buffer pool if files are\n> > truncated underneath us by gremlins, as long as your system eventually\n> > crashes before completing a checkpoint. The OID can't be recycled\n> > until after a successful checkpoint, so the stray blocks can't be\n> > confused with the blocks of another relation, and weird errors are\n> > expected on a system that is in serious trouble. It's actually much\n> > worse that we can give incorrect answers to queries when files are\n> > truncated by gremlins (in the window of time before we presumably\n> > crash because of EIO), because we're violating basic ACID principles\n> > in user-visible ways. In this thread, discussion has focused on\n> > availability (ie avoiding failures when trying to write back stray\n> > buffers to a file that has been unlinked), but really a system that\n> > can't see arbitrary committed transactions *shouldn't be available*.\n> > This argument applies whether you think SEEK_END can only give weird\n> > answers in the specific scenario I demonstrated with NFS, or whether\n> > you think it's arbitrarily b0rked and reports random numbers: we\n> > fundamentally can't tolerate that, so why are we trying to?\n>\n> It is not very clear to me how this argument applies to the patch\n> in-discussion where we are relying on the cached value of blocks\n> during recovery. 
I understand your point that we might skip scanning\n> the pages and thus might not show some recently added data but that\n> point is not linked with what we are trying to do with this patch.\n\nIt's an argument for giving up the hard-to-name cache trick completely\nand going back to using unmodified smgrnblocks(), both in recovery and\nonline. If the only mechanism for unexpected file shrinkage is\nwriteback failure, then your system will be panicking soon enough\nanyway -- so is it really that bad if there are potentially some other\nweird errors logged some time before that? Maybe those errors will\neven take the system down sooner, and maybe that's appropriate? If\nthere are other mechanisms for random file shrinkage that don't imply\na panic in your near future, then we have bigger problems that can't\nbe solved by any number of bandaids, at least not without\nunderstanding the details of this hypothetical unknown failure mode.\n\nThe main argument I can think of against the idea of using plain old\nsmgrnblocks() is that the current error messages on smgrwrite()\nfailure for stray blocks would be indistinguishible from cases where\nan external actor unlinked the file. I don't mind getting an error\nthat prevents checkpointing -- your system is in big trouble! -- but\nit'd be nice to be able to detect that *we* unlinked the file,\nimplying the filesystem and bufferpool are out of sync, and spit out a\nspecial diagnostic message. I suppose if it's the checkpointer doing\nthe writing, it could check if the relfilenode is on the\nqueued-up-for-delete-after-the-checkpoint list, and if so, it could\nproduce a different error message just for this edge case.\nUnfortunately that's not a general solution, because any backend might\ntry to write a buffer out and they aren't synchronised with\ncheckpoints.\n\nI'm not sure what the best approach is. 
It'd certainly be nice to be\nable to drop small tables quickly online too, as a benefit of this\napproach.\n\n\n", "msg_date": "Fri, 6 Nov 2020 18:40:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Nov 6, 2020 at 11:10 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Nov 6, 2020 at 5:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > It is not very clear to me how this argument applies to the patch\n> > in-discussion where we are relying on the cached value of blocks\n> > during recovery. I understand your point that we might skip scanning\n> > the pages and thus might not show some recently added data but that\n> > point is not linked with what we are trying to do with this patch.\n>\n> It's an argument for giving up the hard-to-name cache trick completely\n> and going back to using unmodified smgrnblocks(), both in recovery and\n> online. If the only mechanism for unexpected file shrinkage is\n> writeback failure, then your system will be panicking soon enough\n> anyway\n>\n\nHow else (except for writeback failure due to unexpected shrinkage)\nwill the system panic? Are you saying that if users don't get some\ndata due to lseek lying to us then it will be equivalent to a panic, or\nare you indicating the scenario where ReadBuffer_common gives the error\n\"unexpected data beyond EOF ....\"?\n\n> -- so is it really that bad if there are potentially some other\n> weird errors logged some time before that? 
Maybe those errors will\n> even take the system down sooner, and maybe that's appropriate?\n>\n\nYeah, it might be appropriate to panic in such situations, but\nReadBuffer_common gives an error and asks the user to update the system.\n\n\n> If\n> there are other mechanisms for random file shrinkage that don't imply\n> a panic in your near future, then we have bigger problems that can't\n> be solved by any number of bandaids, at least not without\n> understanding the details of this hypothetical unknown failure mode.\n>\n\nI think one of the problems is returning fewer rows, and that too\nwithout any warning or error, so maybe that is a bigger problem; but we\nseem to be okay with it, as it is already a known thing, though I\nthink it is not documented anywhere.\n\n> The main argument I can think of against the idea of using plain old\n> smgrnblocks() is that the current error messages on smgrwrite()\n> failure for stray blocks would be indistinguishible from cases where\n> an external actor unlinked the file. I don't mind getting an error\n> that prevents checkpointing -- your system is in big trouble! -- but\n> it'd be nice to be able to detect that *we* unlinked the file,\n> implying the filesystem and bufferpool are out of sync, and spit out a\n> special diagnostic message. 
It'd certainly be nice to be\n> able to drop small tables quickly online too, as a benefit of this\n> approach.\n\nRight, that is why I was thinking of doing it only for recovery, where it\nis safe from the database server's perspective. OTOH, we could broadly\naccept that any time the filesystem lies to us the behavior is\nunpredictable: the system can return fewer rows than expected, or\nit could cause a panic. I think there is an argument that it might be\nbetter to error out (even with a panic) rather than silently returning\nfewer rows, but unfortunately detecting that in each and every case\ndoesn't seem feasible.\n\nOne vague idea could be to develop pg_test_seek, which could detect such\nproblems, but I am not sure we can rely on such a tool to always give us\nthe right answer. Were you able to consistently reproduce the lseek\nproblem on the system where you tried it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 6 Nov 2020 17:10:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "> From: k.jamison@fujitsu.com <k.jamison@fujitsu.com>\n> On Thursday, October 22, 2020 3:15 PM, Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I'm not sure about the exact steps of the test, but it can be expected\n> > if we have many small relations to truncate.\n> >\n> > Currently BUF_DROP_FULL_SCAN_THRESHOLD is set to Nbuffers / 512,\n> which\n> > is quite arbitrary that comes from a wild guess.\n> >\n> > Perhaps we need to run benchmarks that drops one relation with several\n> > different ratios between the number of buffers to-be-dropped and\n> > Nbuffers, and preferably both on spinning rust and SSD.\n> \n> Sorry to get back to you on this just now.\n> Since we're prioritizing the vacuum patch, we also need to finalize which\n> threshold value to use.\n> I proceeded testing with my latest set of patches because 
Amit-san's\n> comments on the code, the ones we addressed, don't really affect the\n> performance. I'll post the updated patches for 0002 & 0003 after we come up\n> with the proper boolean parameter name for smgrnblocks and the buffer full\n> scan threshold value.\n> \n> Test the VACUUM performance with the following thresholds:\n> NBuffers/512, NBuffers/256, NBuffers/128, and determine which of the\n> ratio has the best performance in terms of speed.\n> \n> I tested this on my machine (CPU 4v, 8GB memory, ext4) running on SSD.\n> Configure streaming replication environment.\n> shared_buffers = 100GB\n> autovacuum = off\n> full_page_writes = off\n> checkpoint_timeout = 30min\n> \n> [Steps]\n> 1. Create TABLE\n> 2. INSERT data\n> 3. DELETE from TABLE\n> 4. Pause WAL replay on standby\n> 5. VACUUM. Stop the primary.\n> 6. Resume WAL replay and promote standby.\n> \n> With 1 relation, there were no significant changes that we can observe:\n> (In seconds)\n> | s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |\n> |-------|--------|--------------|--------------|--------------|\n> | 128MB | 0.106 | 0.105 | 0.105 | 0.105 |\n> | 100GB | 0.106 | 0.105 | 0.105 | 0.105 |\n> \n> So I tested with 100 tables and got more convincing measurements:\n> \n> | s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |\n> |-------|--------|--------------|--------------|--------------|\n> | 128MB | 1.006 | 1.007 | 1.006 | 0.107 |\n> | 1GB | 0.706 | 0.606 | 0.606 | 0.605 |\n> | 20GB | 1.907 | 0.606 | 0.606 | 0.605 |\n> | 100GB | 7.013 | 0.706 | 0.606 | 0.607 |\n> \n> The threshold NBuffers/128 has the best performance for default\n> shared_buffers (128MB) with 0.107 s, and equally performing with large\n> shared_buffers up to 100GB.\n> \n> We can use NBuffers/128 for the threshold, although I don't have a\n> measurement for HDD yet.\n> However, I wonder if the above method would suffice to determine the final\n> threshold that we can use. 
If anyone has suggestions on how we can come\n> up with the final value, like if I need to modify some steps above, I'd\n> appreciate it.\n> \n> Regarding the parameter name. Instead of accurate, we can use \"cached\" as\n> originally intended from the early versions of the patch since it is the smgr\n> that handles smgrnblocks to get the the block size of smgr_cached_nblocks..\n> \"accurate\" may confuse us because the cached value may not be actually\n> accurate..\n\nHi, \n\nSo I proceeded to update the patches using the \"cached\" parameter and updated\nthe corresponding comments to it in 0002.\n\nI've addressed the suggestions and comments of Amit-san on 0003:\n1. For readability, I moved the code block to a new static function FindAndDropRelFileNodeBuffers()\n2. Initialize the bool cached with false.\n3. It's also decided that we don't need the extra pre-checking of RelFileNode\nwhen locking the bufhdr in FindAndDropRelFileNodeBuffers\n\nI repeated the recovery performance test for vacuum. (I made a mistake previously in NBuffers/128)\nThe 3 kinds of thresholds are almost equally performant. I chose NBuffers/256 for this patch.\n\n| s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 | \n|-------|--------|--------------|--------------|--------------| \n| 128MB | 1.006 | 1.007 | 1.007 | 1.007 | \n| 1GB | 0.706 | 0.606 | 0.606 | 0.606 | \n| 20GB | 1.907 | 0.606 | 0.606 | 0.606 | \n| 100GB | 7.013 | 0.706 | 0.606 | 0.606 |\n\nAlthough we said that we'll prioritize vacuum optimization first, I've also updated the 0004 patch\n(truncate optimization) which addresses the previous concern of slower truncate due to\nredundant lookup of already-dropped buffers. 
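For reference, the path selection these numbers probe can be sketched as follows. The shape follows the BUF_DROP_FULL_SCAN_THRESHOLD idea from earlier in the thread, with the NBuffers/256 ratio chosen above; the function name and the fixed NBuffers value are invented for illustration.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

/* Stand-in for the server's NBuffers (shared_buffers in 8kB pages). */
static int NBuffers = 16384;

/*
 * Drop buffers via per-block hash-table lookups only when the number of
 * blocks to invalidate is small relative to shared_buffers and the
 * relation size is reliably known (cached); otherwise scan the whole
 * buffer pool once.
 */
static int
use_optimized_drop(BlockNumber nblocks, int cached)
{
	BlockNumber threshold = (BlockNumber) (NBuffers / 256);

	return cached && nblocks < threshold;
}
```

The cached condition matters because the per-block path is only safe when the size cannot underestimate the number of buffered blocks, which in the patch is guaranteed only during recovery.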
In the new patch, we initially drop relation buffers\nusing the optimized DropRelFileNodeBuffers() if the buffers do not exceed the full-scan threshold,\nthen later on we drop the remaining buffers using full-scan.\n\nRegards,\nKirk Jamison", "msg_date": "Tue, 10 Nov 2020 02:49:38 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Nov 10, 2020 at 8:19 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> > From: k.jamison@fujitsu.com <k.jamison@fujitsu.com>\n> > On Thursday, October 22, 2020 3:15 PM, Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > I'm not sure about the exact steps of the test, but it can be expected\n> > > if we have many small relations to truncate.\n> > >\n> > > Currently BUF_DROP_FULL_SCAN_THRESHOLD is set to Nbuffers / 512,\n> > which\n> > > is quite arbitrary that comes from a wild guess.\n> > >\n> > > Perhaps we need to run benchmarks that drops one relation with several\n> > > different ratios between the number of buffers to-be-dropped and\n> > > Nbuffers, and preferably both on spinning rust and SSD.\n> >\n> > Sorry to get back to you on this just now.\n> > Since we're prioritizing the vacuum patch, we also need to finalize which\n> > threshold value to use.\n> > I proceeded testing with my latest set of patches because Amit-san's\n> > comments on the code, the ones we addressed, don't really affect the\n> > performance. 
I'll post the updated patches for 0002 & 0003 after we come up\n> > with the proper boolean parameter name for smgrnblocks and the buffer full\n> > scan threshold value.\n> >\n> > Test the VACUUM performance with the following thresholds:\n> > NBuffers/512, NBuffers/256, NBuffers/128, and determine which of the\n> > ratio has the best performance in terms of speed.\n> >\n> > I tested this on my machine (CPU 4v, 8GB memory, ext4) running on SSD.\n> > Configure streaming replication environment.\n> > shared_buffers = 100GB\n> > autovacuum = off\n> > full_page_writes = off\n> > checkpoint_timeout = 30min\n> >\n> > [Steps]\n> > 1. Create TABLE\n> > 2. INSERT data\n> > 3. DELETE from TABLE\n> > 4. Pause WAL replay on standby\n> > 5. VACUUM. Stop the primary.\n> > 6. Resume WAL replay and promote standby.\n> >\n> > With 1 relation, there were no significant changes that we can observe:\n> > (In seconds)\n> > | s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |\n> > |-------|--------|--------------|--------------|--------------|\n> > | 128MB | 0.106 | 0.105 | 0.105 | 0.105 |\n> > | 100GB | 0.106 | 0.105 | 0.105 | 0.105 |\n> >\n> > So I tested with 100 tables and got more convincing measurements:\n> >\n> > | s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |\n> > |-------|--------|--------------|--------------|--------------|\n> > | 128MB | 1.006 | 1.007 | 1.006 | 0.107 |\n> > | 1GB | 0.706 | 0.606 | 0.606 | 0.605 |\n> > | 20GB | 1.907 | 0.606 | 0.606 | 0.605 |\n> > | 100GB | 7.013 | 0.706 | 0.606 | 0.607 |\n> >\n> > The threshold NBuffers/128 has the best performance for default\n> > shared_buffers (128MB) with 0.107 s, and equally performing with large\n> > shared_buffers up to 100GB.\n> >\n> > We can use NBuffers/128 for the threshold, although I don't have a\n> > measurement for HDD yet.\n> > However, I wonder if the above method would suffice to determine the final\n> > threshold that we can use. 
If anyone has suggestions on how we can come\n> > up with the final value, like if I need to modify some steps above, I'd\n> > appreciate it.\n> >\n> > Regarding the parameter name. Instead of accurate, we can use \"cached\" as\n> > originally intended from the early versions of the patch since it is the smgr\n> > that handles smgrnblocks to get the the block size of smgr_cached_nblocks..\n> > \"accurate\" may confuse us because the cached value may not be actually\n> > accurate..\n>\n> Hi,\n>\n> So I proceeded to update the patches using the \"cached\" parameter and updated\n> the corresponding comments to it in 0002.\n>\n> I've addressed the suggestions and comments of Amit-san on 0003:\n> 1. For readability, I moved the code block to a new static function FindAndDropRelFileNodeBuffers()\n> 2. Initialize the bool cached with false.\n> 3. It's also decided that we don't need the extra pre-checking of RelFileNode\n> when locking the bufhdr in FindAndDropRelFileNodeBuffers\n>\n> I repeated the recovery performance test for vacuum. (I made a mistake previously in NBuffers/128)\n> The 3 kinds of thresholds are almost equally performant. I chose NBuffers/256 for this patch.\n>\n> | s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |\n> |-------|--------|--------------|--------------|--------------|\n> | 128MB | 1.006 | 1.007 | 1.007 | 1.007 |\n> | 1GB | 0.706 | 0.606 | 0.606 | 0.606 |\n> | 20GB | 1.907 | 0.606 | 0.606 | 0.606 |\n> | 100GB | 7.013 | 0.706 | 0.606 | 0.606 |\n>\n\nI think this data is not very clear. What is the unit of time? What is\nthe size of the relation used for the test? Did the test use an\noptimized path for all cases? 
If at 128MB, there is no performance\ngain, can we consider the size of shared buffers as 256MB as well for\nthe threshold?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 Nov 2020 08:33:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 10 Nov 2020 08:33:26 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Nov 10, 2020 at 8:19 AM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n> >\n> > I repeated the recovery performance test for vacuum. (I made a mistake previously in NBuffers/128)\n> > The 3 kinds of thresholds are almost equally performant. I chose NBuffers/256 for this patch.\n> >\n> > | s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |\n> > |-------|--------|--------------|--------------|--------------|\n> > | 128MB | 1.006 | 1.007 | 1.007 | 1.007 |\n> > | 1GB | 0.706 | 0.606 | 0.606 | 0.606 |\n> > | 20GB | 1.907 | 0.606 | 0.606 | 0.606 |\n> > | 100GB | 7.013 | 0.706 | 0.606 | 0.606 |\n> >\n> \n> I think this data is not very clear. What is the unit of time? What is\n> the size of the relation used for the test? Did the test use an\n> optimized path for all cases? If at 128MB, there is no performance\n> gain, can we consider the size of shared buffers as 256MB as well for\n> the threshold?\n\nIn the previous testing, it was shown as:\n\nRecovery Time (in seconds)\n| s_b | master | patched | %reg | \n|-------|--------|---------|--------| \n| 128MB | 3.043 | 2.977 | -2.22% | \n| 1GB | 3.417 | 3.41 | -0.21% | \n| 20GB | 20.597 | 2.409 | -755% | \n| 100GB | 66.862 | 2.409 | -2676% |\n\n\nSo... The numbers seems to be in seconds, but the master gets about 10\ntimes faster than this result for uncertain reasons. 
It seems that the\nresult is dominated by something other than the difference made by this\npatch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 10 Nov 2020 12:26:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Sat, Nov 7, 2020 at 12:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think one of the problems is returning fewer rows and that too\n> without any warning or error, so maybe that is a bigger problem but we\n> seem to be okay with it as that is already a known thing though I\n> think that is not documented anywhere.\n\nI'm not OK with it, and I'm not sure it's widely known or understood,\nthough I think we've made some progress in this thread. Perhaps, as a\nseparate project, we need to solve several related problems with a\nshmem table of relation sizes from not-yet-synced files so that\nsmgrnblocks() is fast and always sees all preceding smgrextend()\ncalls. If we're going to need something like that anyway, and if we\ncan come up with a simple way to detect and report this type of\nfailure in the meantime, maybe this fast DROP project should just go\nahead and use the existing smgrnblocks() function without the weird\ncaching bandaid that only works in recovery?\n\n> > The main argument I can think of against the idea of using plain old\n> > smgrnblocks() is that the current error messages on smgrwrite()\n> > failure for stray blocks would be indistinguishible from cases where\n> > an external actor unlinked the file. I don't mind getting an error\n> > that prevents checkpointing -- your system is in big trouble! -- but\n> > it'd be nice to be able to detect that *we* unlinked the file,\n> > implying the filesystem and bufferpool are out of sync, and spit out a\n> > special diagnostic message. 
I suppose if it's the checkpointer doing\n> > the writing, it could check if the relfilenode is on the\n> > queued-up-for-delete-after-the-checkpoint list, and if so, it could\n> > produce a different error message just for this edge case.\n> > Unfortunately that's not a general solution, because any backend might\n> > try to write a buffer out and they aren't synchronised with\n> > checkpoints.\n>\n> Yeah, but I am not sure if we can consider manual (external actor)\n> tinkering with the files the same as something that happened due to\n> the database server relying on the wrong information.\n\nHere's a rough idea I thought of to detect this case; I'm not sure if\nit has holes. When unlinking a relation, currently we truncate\nsegment 0 and unlink all the rest of the segments, and tell the\ncheckpointer to unlink segment 0 after the next checkpoint. What if\nwe also renamed segment 0 to \"$X.dropped\" (to be unlinked by the\ncheckpointer), and taught GetNewRelFileNode() to also skip anything\nfor which \"$X.dropped\" exists? Then mdwrite() could use\n_mdfd_getseg(EXTENSION_RETURN_NULL), and if it gets NULL (= no file),\nthen it checks if \"$X.dropped\" exists, and if so it knows that it is\ntrying to write a stray block from a dropped relation in the buffer\npool. Then we panic, or warn but drop the write. The point of the\nrenaming is that (1) mdwrite() for segment 0 will detect the missing\nfile (not just higher segments), (2) every backend can see that a\nrelation has been recently dropped, while also interlocking with the\ncheckpointer through buffer locks.\n\n> One vague idea could be to develop pg_test_seek which can detect such\n> problems but not sure if we can rely on such a tool to always give us\n> the right answer. Were you able to consistently reproduce the lseek\n> problem on the system where you have tried?\n\nYeah, I can reproduce that reliably, but it requires quite a bit of\nset-up as root so it might be tricky to package up in easy to run\nform. 
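The detection side of the renaming idea above might look roughly like this toy model. The in-memory stand-in for the filesystem and all the names are invented; none of this is actual md.c code.

```c
#include <assert.h>

typedef unsigned int Oid;

/* Toy filesystem state: which relfilenodes have a data file, and which
 * have a leftover marker (the renamed segment 0) from being dropped. */
static Oid data_files[8];
static int n_data = 0;
static Oid dropped_markers[8];
static int n_dropped = 0;

static int
oid_in(const Oid *arr, int n, Oid oid)
{
	for (int i = 0; i < n; i++)
	{
		if (arr[i] == oid)
			return 1;
	}
	return 0;
}

/* Dropping a relation removes its data file but leaves the marker
 * behind until the next checkpoint completes. */
static void
drop_relation(Oid relfilenode)
{
	for (int i = 0; i < n_data; i++)
	{
		if (data_files[i] == relfilenode)
			data_files[i--] = data_files[--n_data];
	}
	dropped_markers[n_dropped++] = relfilenode;
}

/* mdwrite()-style check: no data file but a marker present means the
 * stray block belongs to a relation we dropped ourselves, rather than
 * to a file removed behind our backs. */
static int
stray_block_from_dropped_relation(Oid relfilenode)
{
	return !oid_in(data_files, n_data, relfilenode) &&
		oid_in(dropped_markers, n_dropped, relfilenode);
}
```

In the full idea, GetNewRelFileNode() would also skip any OID whose marker still exists, and the checkpointer would remove the markers once the checkpoint completes.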
It might be quite nice to prepare an easy-to-use \"gallery of\nweird buffered I/O effects\" project, including some of the\nlocal-filesystem-with-fault-injection stuff that Craig Ringer and\nothers were testing with a couple of years ago, but maybe not in the\npg repo.\n\n\n", "msg_date": "Tue, 10 Nov 2020 17:29:47 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, November 10, 2020 12:27 PM, Horiguchi-san wrote:\n> To: amit.kapila16@gmail.com\n> Cc: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>; Tsunakawa,\n> Takayuki/綱川 貴之 <tsunakawa.takay@fujitsu.com>; tgl@sss.pgh.pa.us;\n> andres@anarazel.de; robertmhaas@gmail.com;\n> tomas.vondra@2ndquadrant.com; pgsql-hackers@postgresql.org\n> Subject: Re: [Patch] Optimize dropping of relation buffers using dlist\n> \n> At Tue, 10 Nov 2020 08:33:26 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > On Tue, Nov 10, 2020 at 8:19 AM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> > >\n> > > I repeated the recovery performance test for vacuum. (I made a\n> > > mistake previously in NBuffers/128) The 3 kinds of thresholds are almost\n> equally performant. I chose NBuffers/256 for this patch.\n> > >\n> > > | s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 |\n> > > |-------|--------|--------------|--------------|--------------|\n> > > | 128MB | 1.006 | 1.007 | 1.007 | 1.007 |\n> > > | 1GB | 0.706 | 0.606 | 0.606 | 0.606 |\n> > > | 20GB | 1.907 | 0.606 | 0.606 | 0.606 |\n> > > | 100GB | 7.013 | 0.706 | 0.606 | 0.606 |\n> > >\n> >\n> > I think this data is not very clear. What is the unit of time? What is\n> > the size of the relation used for the test? Did the test use an\n> > optimized path for all cases? 
If at 128MB, there is no performance\n> > gain, can we consider the size of shared buffers as 256MB as well for\n> > the threshold?\n> \n> In the previous testing, it was shown as:\n> \n> Recovery Time (in seconds)\n> | s_b | master | patched | %reg | \n> |-------|--------|---------|--------| \n> | 128MB | 3.043 | 2.977 | -2.22% | \n> | 1GB | 3.417 | 3.41 | -0.21% | \n> | 20GB | 20.597 | 2.409 | -755% | \n> | 100GB | 66.862 | 2.409 | -2676% |\n> \n> \n> So... The numbers seems to be in seconds, but the master gets about 10\n> times faster than this result for uncertain reasons. It seems that the result\n> contains something different from the difference by this patch as a larger\n> part.\n\nThe unit is in seconds.\nThe results that Horiguchi-san mentioned were from the old test case, in which I vacuumed a\ndatabase with 1000 relations that had been deleted.\nI used a new test case in my last results, which is why the numbers are smaller:\nVACUUM 1 parent table (350 MB) and 100 child partition tables (6 MB each)\nin separate transactions after deleting the tables. After vacuum, the
After vacuum, the\nparent table became 16kB and each child table was 2224kB.\n\nI added the test for 256MB shared_buffers, and the performance is also almost the same.\nWe gain performance benefits for the larger shared_buffers.\n\n| s_b | Master | NBuffers/512 | NBuffers/256 | NBuffers/128 | \n|--------|--------|--------------|--------------|--------------| \n| 128MB | 1.006 | 1.007 | 1.007 | 1.007 | \n| 256 MB | 1.006 | 1.006 | 0.906 | 0.906 | \n| 1GB | 0.706 | 0.606 | 0.606 | 0.606 | \n| 20GB | 1.907 | 0.606 | 0.606 | 0.606 | \n| 100GB | 7.013 | 0.706 | 0.606 | 0.606 |\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Tue, 10 Nov 2020 05:17:57 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Nov 10, 2020 at 10:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Nov 7, 2020 at 12:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think one of the problems is returning fewer rows and that too\n> > without any warning or error, so maybe that is a bigger problem but we\n> > seem to be okay with it as that is already a known thing though I\n> > think that is not documented anywhere.\n>\n> I'm not OK with it, and I'm not sure it's widely known or understood,\n>\n\nYeah, it is quite possible but may be because we don't see many field\nreports nobody thought of doing anything about it.\n\n> though I think we've made some progress in this thread. Perhaps, as a\n> separate project, we need to solve several related problems with a\n> shmem table of relation sizes from not-yet-synced files so that\n> smgrnblocks() is fast and always sees all preceding smgrextend()\n> calls. 
If we're going to need something like that anyway, and if we\n> can come up with a simple way to detect and report this type of\n> failure in the meantime, maybe this fast DROP project should just go\n> ahead and use the existing smgrnblocks() function without the weird\n> caching bandaid that only works in recovery?\n>\n\nI am not sure if it would be easy to detect all such failures, and we\nmight end up opening another can of worms for us, but if there is some\nsimpler way then sure we can consider it. OTOH, until we have a shared\ncache of relation sizes (which I think is good for multiple things) it\nseems the safe way to proceed is by relying on the cache during recovery.\nAnd, it is not that we can't change this once we have a shared\nrelation size solution.\n\n> > > The main argument I can think of against the idea of using plain old\n> > > smgrnblocks() is that the current error messages on smgrwrite()\n> > > failure for stray blocks would be indistinguishible from cases where\n> > > an external actor unlinked the file. I don't mind getting an error\n> > > that prevents checkpointing -- your system is in big trouble! -- but\n> > > it'd be nice to be able to detect that *we* unlinked the file,\n> > > implying the filesystem and bufferpool are out of sync, and spit out a\n> > > special diagnostic message. 
I suppose if it's the checkpointer doing\n> > > the writing, it could check if the relfilenode is on the\n> > > queued-up-for-delete-after-the-checkpoint list, and if so, it could\n> > > produce a different error message just for this edge case.\n> > > Unfortunately that's not a general solution, because any backend might\n> > > try to write a buffer out and they aren't synchronised with\n> > > checkpoints.\n> >\n> > Yeah, but I am not sure if we can consider manual (external actor)\n> > tinkering with the files the same as something that happened due to\n> > the database server relying on the wrong information.\n>\n> Here's a rough idea I thought of to detect this case; I'm not sure if\n> it has holes. When unlinking a relation, currently we truncate\n> segment 0 and unlink all the rest of the segments, and tell the\n> checkpointer to unlink segment 0 after the next checkpoint.\n>\n\nDo we always truncate all the blocks? What if the vacuum has cleaned\nlast N (say 100) blocks then how do we handle it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 Nov 2020 10:49:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Nov 10, 2020 at 6:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Do we always truncate all the blocks? What if the vacuum has cleaned\n> last N (say 100) blocks then how do we handle it?\n\nOh, hmm. 
Yeah, that idea only works for DROP, not for truncate last N.\n\n\n", "msg_date": "Tue, 10 Nov 2020 18:33:51 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> So I proceeded to update the patches using the \"cached\" parameter and\n> updated the corresponding comments to it in 0002.\n\nOK, I'm in favor of the name \"cached\" now, although I first agreed with Horiguchi-san in that it's better to use a name that represents the nature (accurate) of information rather than the implementation (cached). Having a second thought, since smgr is a component that manages relation files on storage (file system), lseek(SEEK_END) is the accurate value for smgr. The cached value holds a possibly stale size up to which the relation has extended.\n\n\nThe patch looks almost good except for the minor ones:\n\n(1)\n+extern BlockNumber smgrnblocks(SMgrRelation reln, ForkNumber forknum,\n+\t\t\t\t\t\t\t bool *accurate);\n\nIt's still accurate here.\n\n\n(2)\n+ *\t\tthe buffer pool is sequentially scanned. Since buffers must not be\n+ *\t\tleft behind, the latter way is executed unless the sizes of all the\n+ *\t\tinvolved forks are already cached. See smgrnblocks() for more details.\n+ *\t\tThis is only called in recovery when the block count of any fork is\n+ *\t\tcached and the total number of to-be-invalidated blocks per relation\n\ncount of any fork is\n-> counts of all forks are\n\n\n(3)\nIn 0004, I thought you would add the invalidated block counts of all relations to determine if the optimization is done, as Horiguchi-san suggested. 
But I find the current patch okay too.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Tue, 10 Nov 2020 06:09:56 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, November 10, 2020 3:10 PM, Tsunakawa-san wrote:\n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > So I proceeded to update the patches using the \"cached\" parameter and\n> > updated the corresponding comments to it in 0002.\n> \n> OK, I'm in favor of the name \"cached\" now, although I first agreed with\n> Horiguchi-san in that it's better to use a name that represents the nature\n> (accurate) of information rather than the implementation (cached). Having\n> a second thought, since smgr is a component that manages relation files on\n> storage (file system), lseek(SEEK_END) is the accurate value for smgr. The\n> cached value holds a possibly stale size up to which the relation has\n> extended.\n> \n> \n> The patch looks almost good except for the minor ones:\n\nThank you for the review!\n\n> (1)\n> +extern BlockNumber smgrnblocks(SMgrRelation reln, ForkNumber\n> forknum,\n> +\t\t\t\t\t\t\t bool *accurate);\n> \n> It's still accurate here.\n\nAlready fixed in 0002.\n\n> (2)\n> + *\t\tThis is only called in recovery when the block count of any\n> fork is\n> + *\t\tcached and the total number of to-be-invalidated blocks per\n> relation\n> \n> count of any fork is\n> -> counts of all forks are\n\nFixed in 0003.\n \n> (3)\n> In 0004, I thought you would add the invalidated block counts of all relations\n> to determine if the optimization is done, as Horiguchi-san suggested. But I\n> find the current patch okay too.\n\nYeah, I found my approach easier to implement. 
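(To make the "cached" contract being reviewed above concrete, here is a minimal, self-contained sketch. ToySMgrRelation and toy_smgrnblocks() are illustrative stand-ins for the real SMgrRelation and smgrnblocks(), not the patch code; the behavior shown is only what this thread describes, namely prefer the cached size when it exists, otherwise fall back to asking the filesystem, and report through the out-parameter which answer was given.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/* Toy stand-in for SMgrRelation: one fork with an optional cached size. */
typedef struct
{
    BlockNumber smgr_cached_nblocks;    /* InvalidBlockNumber if not cached */
    BlockNumber nblocks_on_disk;        /* what lseek(SEEK_END) would say */
} ToySMgrRelation;

/*
 * Toy smgrnblocks(): prefer the cached size when available and report
 * through *cached whether the answer came from the cache rather than
 * from the filesystem.
 */
static BlockNumber
toy_smgrnblocks(ToySMgrRelation *reln, bool *cached)
{
    if (reln->smgr_cached_nblocks != InvalidBlockNumber)
    {
        *cached = true;
        return reln->smgr_cached_nblocks;
    }
    *cached = false;
    return reln->nblocks_on_disk;       /* the "ask the filesystem" fallback */
}
```

In recovery only the startup process extends relations, which is the reason given in this thread for trusting the cached value there.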
The new change in 0004 is that\nwhen entering the optimized path we now call FindAndDropRelFileNodeBuffers()\ninstead of DropRelFileNodeBuffers().\n\nI have attached all the updated patches.\nI'd appreciate your feedback.\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 12 Nov 2020 04:00:14 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "The patch looks OK. I think as Thomas-san suggested, we can remove the modification to smgrnblocks() and don't care wheter the size is cached or not. But I think the current patch is good too, so I'd like to leave it up to a committer to decide which to choose.\n\nI measured performance in a different angle -- the time DropRelFileNodeBuffers() and DropRelFileNodeAllBuffers() took. That reveals the direct improvement and degradation.\n\nI used 1,000 tables, each of which is 1 MB. I used shared_buffers = 128 MB for the case where the traditional full buffer scan is done, and shared_buffers = 100 GB for the case where the optimization path takes effect.\n\nThe results are almost good as follows:\n\nA. UNPATCHED\n\n128 MB shared_buffers\n1. VACUUM = 0.04 seconds\n2. TRUNCATE = 0.04 seconds\n\n100 GB shared_buffers\n3. VACUUM = 69.4 seconds\n4. TRUNCATE = 69.1 seconds\n\n\nB. PATCHED\n\n128 MB shared_buffers (full scan)\n5. VACUUM = 0.04 seconds\n6. TRUNCATE = 0.07 seconds\n\n100 GB shared_buffers (optimized path)\n7. VACUUM = 0.02 seconds\n8. TRUNCATE = 0.08 seconds\n\n\nSo, I'd like to mark this as ready for committer.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 12 Nov 2020 04:13:35 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, November 12, 2020 1:14 PM, Tsunakawa-san wrote:\n> The patch looks OK. 
I think as Thomas-san suggested, we can remove the\n> modification to smgrnblocks() and don't care wheter the size is cached or not.\n> But I think the current patch is good too, so I'd like to leave it up to a\n> committer to decide which to choose.\n> I measured performance in a different angle -- the time\n> DropRelFileNodeBuffers() and DropRelFileNodesAllBuffers() took. That\n> reveals the direct improvement and degradation.\n> \n> I used 1,000 tables, each of which is 1 MB. I used shared_buffers = 128 MB\n> for the case where the traditional full buffer scan is done, and shared_buffers\n> = 100 GB for the case where the optimization path takes effect.\n> \n> The results are almost good as follows:\n> \n> A. UNPATCHED\n> \n> 128 MB shared_buffers\n> 1. VACUUM = 0.04 seconds\n> 2. TRUNCATE = 0.04 seconds\n> \n> 100 GB shared_buffers\n> 3. VACUUM = 69.4 seconds\n> 4. TRUNCATE = 69.1 seconds\n> \n> \n> B. PATCHED\n> \n> 128 MB shared_buffers (full scan)\n> 5. VACUUM = 0.04 seconds\n> 6. TRUNCATE = 0.07 seconds\n> \n> 100 GB shared_buffers (optimized path)\n> 7. VACUUM = 0.02 seconds\n> 8. TRUNCATE = 0.08 seconds\n> \n> \n> So, I'd like to mark this as ready for committer.\nI forgot to reply.\nThank you very much, Tsunakawa-san, for the testing, and to everyone\nwho has provided their reviews and insights as well.\n\nNow thinking about smgrnblocks(), Thomas Munro is currently working on implementing a\nshared SmgrRelation [1] to store sizes. However, since that is still under development and the\ndiscussion is still ongoing, I hope we can first commit this set of patches here, as they are already\nin committable form. 
I think it's alright to accept the early improvements implemented in this thread\nto the source code.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2B7Ok26MHiFWVEiAy2UMgHkrCieycQ1eFdA%3Dt2JTfUgwA%40mail.gmail.com\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Wed, 18 Nov 2020 09:04:49 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Nov 18, 2020 at 2:34 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Thursday, November 12, 2020 1:14 PM, Tsunakawa-san wrote:\n> I forgot to reply.\n> Thank you very much Tsunakawa-san for testing and to everyone\n> who has provided their reviews and insights as well.\n>\n> Now thinking about smgrnblocks(), currently Thomas Munro is also working on implementing a\n> shared SmgrRelation [1] to store sizes. However, since that is still under development and the\n> discussion is still ongoing, I hope we can first commit these set of patches here as these are already\n> in committable form. I think it's alright to accept the early improvements implemented in this thread\n> to the source code.\n>\n\nYeah, that won't be a bad idea especially because the patch being\ndiscussed in the thread you referred is still in an exploratory phase.\nI haven't tested or done a detailed review but I feel there shouldn't\nbe many problems if we agree on the approach.\n\nThomas/others, do you have objections to proceeding here? 
It shouldn't\nbe a big problem to change the code in this area even if we get the\nshared relation size stuff in.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 18 Nov 2020 17:34:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\n\nOn 2020-11-18 17:34:31 +0530, Amit Kapila wrote:\n> Yeah, that won't be a bad idea especially because the patch being\n> discussed in the thread you referred is still in an exploratory phase.\n> I haven't tested or done a detailed review but I feel there shouldn't\n> be many problems if we agree on the approach.\n> \n> Thomas/others, do you have objections to proceeding here? It shouldn't\n> be a big problem to change the code in this area even if we get the\n> shared relation size stuff in.\n\nI'm doubtful the patches as is are a good idea / address the correctness\nconcerns to a sufficient degree.\n\nOne important part of that is that the patch includes pretty much zero\nexplanations about why it is safe what it is doing. Something having\nbeing discussed deep in this thread won't help us in a few months, not\nto speak of a few years.\n\n\nThe commit message says:\n> While recovery, we can get a reliable cached value of nblocks for\n> supplied relation's fork, and it's safe because there are no other\n> processes but the startup process that changes the relation size\n> during recovery.\n\nand the code only applies the optimized scan only when cached:\n+\t/*\n+\t * Look up the buffers in the hashtable and drop them if the block size\n+\t * is already cached and the total blocks to be invalidated is below the\n+\t * full scan threshold. Otherwise, give up the optimization.\n+\t */\n+\tif (cached && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n\n\nThis seems quite narrow to me. 
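(As a concrete reading of the guard quoted just above, `if (cached && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)`, the decision it encodes can be modeled in a few self-contained lines. The NBuffers figure and the divisor below are placeholder assumptions, since the thread was still benchmarking several divisors; this sketches the shape of the check, not the patch's actual definition.)

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Placeholder sizing: 16384 buffers corresponds to 128MB of 8kB pages, and
 * the divisor is one of the candidates benchmarked in this thread.
 */
#define NBUFFERS 16384
#define BUF_DROP_FULL_SCAN_THRESHOLD (NBUFFERS / 512)

/*
 * Take the targeted per-block hash lookups only when every fork size is
 * known from the cache AND the amount of work is small; otherwise fall
 * back to sequentially scanning the whole buffer pool.
 */
static bool
use_optimized_drop(bool cached, int nblocks_to_invalidate)
{
    return cached && nblocks_to_invalidate < BUF_DROP_FULL_SCAN_THRESHOLD;
}
```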
There's plenty cases where there's no\ncached relation size in the startup process, restricting the\navailability of this optimization as written. Where do we even use\nDropRelFileNodeBuffers() in recovery? The most common path is\nDropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers(),\nwhich 3/4 doesn't address and 4/4 doesn't mention.\n\n4/4 seems to address DropRelationFiles(), but only talks about TRUNCATE?\n\nI'm also worried about the cases where this could cause buffers left in\nthe buffer pool, without a crosscheck like Thomas' patch would allow to\nadd. Obviously other processes can dirty buffers in hot_standby, so any\nleftover buffer could have bad consequences.\n\nI also don't get why 4/4 would be a good idea on its own. It uses\nBUF_DROP_FULL_SCAN_THRESHOLD to guard FindAndDropRelFileNodeBuffers() on\na per relation basis. But since DropRelFileNodesAllBuffers() can be used\nfor many relations at once, this could end up doing\nBUF_DROP_FULL_SCAN_THRESHOLD - 1 lookups a lot of times, once for each\nof nnodes relations?\n\nAlso, how is 4/4 safe - this is outside of recovery too?\n\n\nSmaller comment:\n\n+static void\n+FindAndDropRelFileNodeBuffers(RelFileNode rnode, ForkNumber *forkNum, int nforks,\n+\t\t\t\t\t\t\t BlockNumber *nForkBlocks, BlockNumber *firstDelBlock)\n...\n+\t\t\t/* Check that it is in the buffer pool. If not, do nothing. */\n+\t\t\tLWLockAcquire(bufPartitionLock, LW_SHARED);\n+\t\t\tbuf_id = BufTableLookup(&bufTag, bufHash);\n...\n+\t\t\tbufHdr = GetBufferDescriptor(buf_id);\n+\n+\t\t\tbuf_state = LockBufHdr(bufHdr);\n+\n+\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode) &&\n+\t\t\t\tbufHdr->tag.forkNum == forkNum[i] &&\n+\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[i])\n+\t\t\t\tInvalidateBuffer(bufHdr);\t/* releases spinlock */\n+\t\t\telse\n+\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\na\n\nI'm a bit confused about the check here. We hold a buffer partition\nlock, and have done a lookup in the mapping table. 
Why are we then\nrechecking the relfilenode/fork/blocknum? And why are we doing so\nholding the buffer header lock, which is essentially a spinlock, so\nshould only ever be held for very short portions?\n\nThis looks like it's copying logic from DropRelFileNodeBuffers() etc,\nbut there the situation is different: We haven't done a buffer mapping\nlookup, and we don't hold a partition lock!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 18 Nov 2020 10:13:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Nov 18, 2020 at 11:43 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-11-18 17:34:31 +0530, Amit Kapila wrote:\n> > Yeah, that won't be a bad idea especially because the patch being\n> > discussed in the thread you referred is still in an exploratory phase.\n> > I haven't tested or done a detailed review but I feel there shouldn't\n> > be many problems if we agree on the approach.\n> >\n> > Thomas/others, do you have objections to proceeding here? It shouldn't\n> > be a big problem to change the code in this area even if we get the\n> > shared relation size stuff in.\n>\n> I'm doubtful the patches as is are a good idea / address the correctness\n> concerns to a sufficient degree.\n>\n> One important part of that is that the patch includes pretty much zero\n> explanations about why it is safe what it is doing. 
Something having\n> being discussed deep in this thread won't help us in a few months, not\n> to speak of a few years.\n>\n>\n> The commit message says:\n> > While recovery, we can get a reliable cached value of nblocks for\n> > supplied relation's fork, and it's safe because there are no other\n> > processes but the startup process that changes the relation size\n> > during recovery.\n>\n> and the code only applies the optimized scan only when cached:\n> + /*\n> + * Look up the buffers in the hashtable and drop them if the block size\n> + * is already cached and the total blocks to be invalidated is below the\n> + * full scan threshold. Otherwise, give up the optimization.\n> + */\n> + if (cached && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n>\n>\n> This seems quite narrow to me. There's plenty cases where there's no\n> cached relation size in the startup process, restricting the\n> availability of this optimization as written. Where do we even use\n> DropRelFileNodeBuffers() in recovery?\n>\n\nThis will be used in the recovery of truncate done by vacuum (via\nreplay of XLOG_SMGR_TRUNCATE->smgrtruncate->DropRelFileNodeBuffers).\nAnd Kirk-San has done some testing [1][2] to show the performance\nbenefits of the same.\n\n> The most common path is\n> DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers(),\n> which 3/4 doesn't address and 4/4 doesn't mention.\n>\n> 4/4 seems to address DropRelationFiles(), but only talks about TRUNCATE?\n>\n> I'm also worried about the cases where this could cause buffers left in\n> the buffer pool, without a crosscheck like Thomas' patch would allow to\n> add. Obviously other processes can dirty buffers in hot_standby, so any\n> leftover buffer could have bad consequences.\n>\n\nThe problem can only arise if other processes extend the relation. The\nidea was that in recovery it extends relation by one process which\nhelps to maintain the cache. 
Kirk seems to have done testing to\ncross-verify it by using his first patch\n(Prevent-invalidating-blocks-in-smgrextend-during). Which other\ncrosscheck you are referring here?\n\nI agree that we can do a better job by expanding comments to clearly\nstate why it is safe.\n\n[1] - https://www.postgresql.org/message-id/OSBPR01MB23413F14ED6B2D0D007698F4EFED0%40OSBPR01MB2341.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/OSBPR01MB234176B1829AECFE9FDDFCC2EFE90%40OSBPR01MB2341.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 19 Nov 2020 11:19:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Andres Freund <andres@anarazel.de>\n> DropRelFileNodeBuffers() in recovery? The most common path is\n> DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers(),\n> which 3/4 doesn't address and 4/4 doesn't mention.\n> \n> 4/4 seems to address DropRelationFiles(), but only talks about TRUNCATE?\n\nYes. DropRelationFiles() is used in the following two paths:\n\n[Replay of TRUNCATE during recovery]\nxact_redo_commit/abort() -> DropRelationFiles()\n -> smgrdounlinkall() -> DropRelFileNodesAllBuffers()\n\n[COMMIT/ROLLBACK PREPARED]\nFinishPreparedTransaction() -> DropRelationFiles()\n -> smgrdounlinkall() -> DropRelFileNodesAllBuffers()\n\n\n\n> I also don't get why 4/4 would be a good idea on its own. It uses\n> BUF_DROP_FULL_SCAN_THRESHOLD to guard\n> FindAndDropRelFileNodeBuffers() on a per relation basis. But since\n> DropRelFileNodesAllBuffers() can be used for many relations at once, this\n> could end up doing BUF_DROP_FULL_SCAN_THRESHOLD - 1 lookups a lot of\n> times, once for each of nnodes relations?\n\nSo, the threshold value should be compared with the total number of blocks of all target relations, not each relation. 
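(A minimal sketch of that suggested fix, i.e. sum the per-relation block counts first and compare the total once against the threshold, so that n relations cannot each independently do up to threshold-minus-one lookups. The threshold value and the function shape are illustrative assumptions, not the patch code.)

```c
#include <assert.h>
#include <stdbool.h>

#define BUF_DROP_FULL_SCAN_THRESHOLD 32     /* placeholder value */

/*
 * Accumulate the to-be-invalidated block counts of ALL relations before
 * deciding, instead of applying the threshold per relation.
 */
static bool
use_optimized_drop_all(const int *nblocks_per_rel, int nrels, bool all_cached)
{
    int total = 0;

    if (!all_cached)
        return false;       /* any uncached fork size forces the full scan */

    for (int i = 0; i < nrels; i++)
    {
        total += nblocks_per_rel[i];
        if (total >= BUF_DROP_FULL_SCAN_THRESHOLD)
            return false;   /* too much work; scan the buffer pool instead */
    }
    return true;
}
```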
You seem to be right, got it.\n\n\n> Also, how is 4/4 safe - this is outside of recovery too?\n\nIt seems that DropRelFileNodesAllBuffers() should trigger the new optimization path only when InRecovery == true, because it intentionally doesn't check the \"accurate\" value returned from smgrnblocks().\n\n\n> Smaller comment:\n> \n> +static void\n> +FindAndDropRelFileNodeBuffers(RelFileNode rnode, ForkNumber *forkNum,\n> int nforks,\n> +\t\t\t\t\t\t\t BlockNumber\n> *nForkBlocks, BlockNumber *firstDelBlock)\n> ...\n> +\t\t\t/* Check that it is in the buffer pool. If not, do nothing.\n> */\n> +\t\t\tLWLockAcquire(bufPartitionLock, LW_SHARED);\n> +\t\t\tbuf_id = BufTableLookup(&bufTag, bufHash);\n> ...\n> +\t\t\tbufHdr = GetBufferDescriptor(buf_id);\n> +\n> +\t\t\tbuf_state = LockBufHdr(bufHdr);\n> +\n> +\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode) &&\n> +\t\t\t\tbufHdr->tag.forkNum == forkNum[i] &&\n> +\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[i])\n> +\t\t\t\tInvalidateBuffer(bufHdr);\t/* releases\n> spinlock */\n> +\t\t\telse\n> +\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\n> \n> I'm a bit confused about the check here. We hold a buffer partition lock, and\n> have done a lookup in the mapping table. Why are we then rechecking the\n> relfilenode/fork/blocknum? And why are we doing so holding the buffer header\n> lock, which is essentially a spinlock, so should only ever be held for very short\n> portions?\n> \n> This looks like it's copying logic from DropRelFileNodeBuffers() etc, but there\n> the situation is different: We haven't done a buffer mapping lookup, and we\n> don't hold a partition lock!\n\nThat's because the buffer partition lock is released immediately after the hash table has been looked up. 
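(The ordering described here can be pictured with a toy, single-threaded model. The real code probes the buffer mapping hash via BufTableLookup() under a mapping partition lock and then rechecks the tag under the buffer header spinlock; in this sketch the locks exist only as comments and a linear scan stands in for the hash probe, so it is illustrative only.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

typedef struct { int rnode; int forknum; BlockNumber blocknum; } ToyBufferTag;
typedef struct { ToyBufferTag tag; bool valid; } ToyBufferDesc;

#define TOY_NBUF 4
static ToyBufferDesc toy_buffers[TOY_NBUF];

/* Linear scan stands in for BufTableLookup()'s hash-table probe. */
static int
toy_buftable_lookup(ToyBufferTag tag)
{
    for (int i = 0; i < TOY_NBUF; i++)
        if (toy_buffers[i].valid &&
            toy_buffers[i].tag.rnode == tag.rnode &&
            toy_buffers[i].tag.forknum == tag.forknum &&
            toy_buffers[i].tag.blocknum == tag.blocknum)
            return i;
    return -1;
}

/*
 * Per-block drop: look up while (notionally) holding the mapping partition
 * lock, release it, then re-check the tag while (notionally) holding the
 * buffer header spinlock before invalidating.  The re-check matters because
 * once the partition lock is dropped the buffer could be recycled for a
 * different page by another backend.
 */
static bool
toy_find_and_drop(ToyBufferTag tag, BlockNumber first_del_block)
{
    int buf_id = toy_buftable_lookup(tag);  /* partition lock held here */

    /* ...partition lock released; the buffer may be recycled from here... */
    if (buf_id < 0)
        return false;                       /* not in the buffer pool */

    ToyBufferDesc *bufHdr = &toy_buffers[buf_id];   /* "LockBufHdr" */
    if (bufHdr->valid &&
        bufHdr->tag.rnode == tag.rnode &&
        bufHdr->tag.forknum == tag.forknum &&
        bufHdr->tag.blocknum >= first_del_block)
    {
        bufHdr->valid = false;              /* "InvalidateBuffer" */
        return true;
    }
    return false;                           /* tag changed: "UnlockBufHdr" */
}
```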
As an aside, InvalidateBuffer() requires the caller to hold the buffer header spinlock and doesn't hold the buffer partition lock.\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 19 Nov 2020 07:07:34 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, November 19, 2020 4:08 PM, Tsunakawa, Takayuki wrote:\n> From: Andres Freund <andres@anarazel.de>\n> > DropRelFileNodeBuffers() in recovery? The most common path is\n> > DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers(),\n> > which 3/4 doesn't address and 4/4 doesn't mention.\n> >\n> > 4/4 seems to address DropRelationFiles(), but only talks about\n> TRUNCATE?\n> \n> Yes. DropRelationFiles() is used in the following two paths:\n> \n> [Replay of TRUNCATE during recovery]\n> xact_redo_commit/abort() -> DropRelationFiles() -> smgrdounlinkall() ->\n> DropRelFileNodesAllBuffers()\n> \n> [COMMIT/ROLLBACK PREPARED]\n> FinishPreparedTransaction() -> DropRelationFiles() -> smgrdounlinkall()\n> -> DropRelFileNodesAllBuffers()\n\nYes. The concern is that it was not clear in the function descriptions and commit logs\nwhat the optimizations for the DropRelFileNodeBuffers() and DropRelFileNodesAllBuffers()\nare for. So I revised only the function description of DropRelFileNodeBuffers() and the\ncommit logs of the 0003-0004 patches. Please check if the brief descriptions would suffice.\n\n\n> > I also don't get why 4/4 would be a good idea on its own. It uses\n> > BUF_DROP_FULL_SCAN_THRESHOLD to guard\n> > FindAndDropRelFileNodeBuffers() on a per relation basis. 
But since\n> > DropRelFileNodesAllBuffers() can be used for many relations at once,\n> > this could end up doing BUF_DROP_FULL_SCAN_THRESHOLD - 1\n> lookups a lot\n> > of times, once for each of nnodes relations?\n> \n> So, the threshold value should be compared with the total number of blocks\n> of all target relations, not each relation. You seem to be right, got it.\n\nFixed this in 0004 patch. Now we compare the total number of buffers-to-be-invalidated\nFor ALL relations to the BUF_DROP_FULL_SCAN_THRESHOLD.\n\n> > Also, how is 4/4 safe - this is outside of recovery too?\n> \n> It seems that DropRelFileNodesAllBuffers() should trigger the new\n> optimization path only when InRecovery == true, because it intentionally\n> doesn't check the \"accurate\" value returned from smgrnblocks().\n\nFixed it in 0004 patch. Now we ensure that we only enter the optimization path\nIff during recovery.\n \n\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> On Wed, Nov 18, 2020 at 11:43 PM Andres Freund <andres@anarazel.de>\n> > I'm also worried about the cases where this could cause buffers left\n> > in the buffer pool, without a crosscheck like Thomas' patch would\n> > allow to add. Obviously other processes can dirty buffers in\n> > hot_standby, so any leftover buffer could have bad consequences.\n> >\n> \n> The problem can only arise if other processes extend the relation. The idea\n> was that in recovery it extends relation by one process which helps to\n> maintain the cache. Kirk seems to have done testing to cross-verify it by using\n> his first patch (Prevent-invalidating-blocks-in-smgrextend-during). Which\n> other crosscheck you are referring here?\n> \n> I agree that we can do a better job by expanding comments to clearly state\n> why it is safe.\n\nYes, basically what Amit-san also mentioned above. 
The first patch prevents that.\nAnd in the description of DropRelFileNodeBuffers in the 0003 patch, please check\nIf that would suffice.\n\n\n> > Smaller comment:\n> >\n> > +static void\n> > +FindAndDropRelFileNodeBuffers(RelFileNode rnode, ForkNumber\n> *forkNum,\n> > int nforks,\n> > +\t\t\t\t\t\t\t BlockNumber\n> > *nForkBlocks, BlockNumber *firstDelBlock) ...\n> > +\t\t\t/* Check that it is in the buffer pool. If not, do\n> nothing.\n> > */\n> > +\t\t\tLWLockAcquire(bufPartitionLock, LW_SHARED);\n> > +\t\t\tbuf_id = BufTableLookup(&bufTag, bufHash);\n> > ...\n> > +\t\t\tbufHdr = GetBufferDescriptor(buf_id);\n> > +\n> > +\t\t\tbuf_state = LockBufHdr(bufHdr);\n> > +\n> > +\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode)\n> &&\n> > +\t\t\t\tbufHdr->tag.forkNum == forkNum[i] &&\n> > +\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[i])\n> > +\t\t\t\tInvalidateBuffer(bufHdr);\t/* releases\n> > spinlock */\n> > +\t\t\telse\n> > +\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\n> >\n> > I'm a bit confused about the check here. We hold a buffer partition\n> > lock, and have done a lookup in the mapping table. Why are we then\n> > rechecking the relfilenode/fork/blocknum? And why are we doing so\n> > holding the buffer header lock, which is essentially a spinlock, so\n> > should only ever be held for very short portions?\n> >\n> > This looks like it's copying logic from DropRelFileNodeBuffers() etc,\n> > but there the situation is different: We haven't done a buffer mapping\n> > lookup, and we don't hold a partition lock!\n> \n> That's because the buffer partition lock is released immediately after the hash\n> table has been looked up. As an aside, InvalidateBuffer() requires the caller\n> to hold the buffer header spinlock and doesn't hold the buffer partition lock.\n\nYes. 
Holding the buffer header spinlock is necessary to invalidate the buffers.\nAs for buffer mapping partition lock, as mentioned by Tsunakawa-san, it is\nreleased immediately after BufTableLookup, which is similar to lookup done in\nPrefetchSharedBuffer. So I retained these changes.\n\nI have attached the updated patches. Aside from descriptions, no other major\nchanges in the patch set except 0004. Feedbacks are welcome.\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 26 Nov 2020 03:04:10 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "> From: k.jamison@fujitsu.com <k.jamison@fujitsu.com>\n> On Thursday, November 19, 2020 4:08 PM, Tsunakawa, Takayuki wrote:\n> > From: Andres Freund <andres@anarazel.de>\n> > > DropRelFileNodeBuffers() in recovery? The most common path is\n> > > DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers()\n> > > , which 3/4 doesn't address and 4/4 doesn't mention.\n> > >\n> > > 4/4 seems to address DropRelationFiles(), but only talks about\n> > TRUNCATE?\n> >\n> > Yes. DropRelationFiles() is used in the following two paths:\n> >\n> > [Replay of TRUNCATE during recovery]\n> > xact_redo_commit/abort() -> DropRelationFiles() -> smgrdounlinkall()\n> > ->\n> > DropRelFileNodesAllBuffers()\n> >\n> > [COMMIT/ROLLBACK PREPARED]\n> > FinishPreparedTransaction() -> DropRelationFiles() ->\n> > smgrdounlinkall()\n> > -> DropRelFileNodesAllBuffers()\n> \n> Yes. The concern is that it was not clear in the function descriptions and\n> commit logs what the optimizations for the DropRelFileNodeBuffers() and\n> DropRelFileNodesAllBuffers() are for. So I revised only the function\n> description of DropRelFileNodeBuffers() and the commit logs of the\n> 0003-0004 patches. Please check if the brief descriptions would suffice.\n> \n> \n> > > I also don't get why 4/4 would be a good idea on its own. 
It uses\n> > > BUF_DROP_FULL_SCAN_THRESHOLD to guard\n> > > FindAndDropRelFileNodeBuffers() on a per relation basis. But since\n> > > DropRelFileNodesAllBuffers() can be used for many relations at once,\n> > > this could end up doing BUF_DROP_FULL_SCAN_THRESHOLD - 1\n> > lookups a lot\n> > > of times, once for each of nnodes relations?\n> >\n> > So, the threshold value should be compared with the total number of\n> > blocks of all target relations, not each relation. You seem to be right, got it.\n> \n> Fixed this in 0004 patch. Now we compare the total number of\n> buffers-to-be-invalidated For ALL relations to the\n> BUF_DROP_FULL_SCAN_THRESHOLD.\n> \n> > > Also, how is 4/4 safe - this is outside of recovery too?\n> >\n> > It seems that DropRelFileNodesAllBuffers() should trigger the new\n> > optimization path only when InRecovery == true, because it\n> > intentionally doesn't check the \"accurate\" value returned from\n> smgrnblocks().\n> \n> Fixed it in 0004 patch. Now we ensure that we only enter the optimization path\n> Iff during recovery.\n> \n> \n> > From: Amit Kapila <amit.kapila16@gmail.com> On Wed, Nov 18, 2020 at\n> > 11:43 PM Andres Freund <andres@anarazel.de>\n> > > I'm also worried about the cases where this could cause buffers left\n> > > in the buffer pool, without a crosscheck like Thomas' patch would\n> > > allow to add. Obviously other processes can dirty buffers in\n> > > hot_standby, so any leftover buffer could have bad consequences.\n> > >\n> >\n> > The problem can only arise if other processes extend the relation. The\n> > idea was that in recovery it extends relation by one process which\n> > helps to maintain the cache. Kirk seems to have done testing to\n> > cross-verify it by using his first patch\n> > (Prevent-invalidating-blocks-in-smgrextend-during). 
Which other\n> crosscheck you are referring here?\n> >\n> > I agree that we can do a better job by expanding comments to clearly\n> > state why it is safe.\n> \n> Yes, basically what Amit-san also mentioned above. The first patch prevents\n> that.\n> And in the description of DropRelFileNodeBuffers in the 0003 patch, please\n> check If that would suffice.\n> \n> \n> > > Smaller comment:\n> > >\n> > > +static void\n> > > +FindAndDropRelFileNodeBuffers(RelFileNode rnode, ForkNumber\n> > *forkNum,\n> > > int nforks,\n> > > +\t\t\t\t\t\t\t BlockNumber\n> > > *nForkBlocks, BlockNumber *firstDelBlock) ...\n> > > +\t\t\t/* Check that it is in the buffer pool. If not, do\n> > nothing.\n> > > */\n> > > +\t\t\tLWLockAcquire(bufPartitionLock, LW_SHARED);\n> > > +\t\t\tbuf_id = BufTableLookup(&bufTag, bufHash);\n> > > ...\n> > > +\t\t\tbufHdr = GetBufferDescriptor(buf_id);\n> > > +\n> > > +\t\t\tbuf_state = LockBufHdr(bufHdr);\n> > > +\n> > > +\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode)\n> > &&\n> > > +\t\t\t\tbufHdr->tag.forkNum == forkNum[i] &&\n> > > +\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[i])\n> > > +\t\t\t\tInvalidateBuffer(bufHdr);\t/* releases\n> > > spinlock */\n> > > +\t\t\telse\n> > > +\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\n> > >\n> > > I'm a bit confused about the check here. We hold a buffer partition\n> > > lock, and have done a lookup in the mapping table. Why are we then\n> > > rechecking the relfilenode/fork/blocknum? And why are we doing so\n> > > holding the buffer header lock, which is essentially a spinlock, so\n> > > should only ever be held for very short portions?\n> > >\n> > > This looks like it's copying logic from DropRelFileNodeBuffers()\n> > > etc, but there the situation is different: We haven't done a buffer\n> > > mapping lookup, and we don't hold a partition lock!\n> >\n> > That's because the buffer partition lock is released immediately after\n> > the hash table has been looked up. 
As an aside, InvalidateBuffer()\n> > requires the caller to hold the buffer header spinlock and doesn't hold the\n> buffer partition lock.\n> \n> Yes. Holding the buffer header spinlock is necessary to invalidate the buffers.\n> As for buffer mapping partition lock, as mentioned by Tsunakawa-san, it is\n> released immediately after BufTableLookup, which is similar to lookup done\n> in PrefetchSharedBuffer. So I retained these changes.\n> \n> I have attached the updated patches. Aside from descriptions, no other major\n> changes in the patch set except 0004. Feedbacks are welcome.\n\nHi, \n\nGiven that I modified the 0004 patch. I repeated the recovery performance\ntests I did in [1]. But this time I used 1000 relations (1MB per relation).\nBecause of this rel size, it is expected that sequential full buffer scan is\nexecuted for 128MB shared_buffers, while the optimized process is\nimplemented for the larger shared_buffers.\n\nBelow are the results:\n\n[TRUNCATE]\n| s_b | MASTER (sec) | PATCHED (sec) | \n|--------|--------------|---------------| \n| 128 MB | 0.506 | 0.506 | \n| 1 GB | 0.906 | 0.506 | \n| 20 GB | 19.33 | 0.506 | \n| 100 GB | 74.941 | 0.506 | \n\n[VACUUM]\n| s_b | MASTER (sec) | PATCHED (sec) | \n|--------|--------------|---------------| \n| 128 MB | 1.207 | 0.737 | \n| 1 GB | 1.707 | 0.806 | \n| 20 GB | 14.325 | 0.806 | \n| 100 GB | 64.728 | 1.307 | \n\nLooking at the results for both VACUUM and TRUNCATE, we can see\nthe improvement of performance because of the optimizations.\nIn addition, there was no regression for the full scan of whole buffer\nPool (as seen in 128MB s_b).\n\nRegards,\nKirk Jamison\n\n[1] https://www.postgresql.org/message-id/OSBPR01MB234176B1829AECFE9FDDFCC2EFE90%40OSBPR01MB2341.jpnprd01.prod.outlook.com\n\n\n", "msg_date": "Thu, 26 Nov 2020 05:23:28 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { 
"msg_contents": "Hello, Kirk. Thank you for the new version.\n\nAt Thu, 26 Nov 2020 03:04:10 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in \n> On Thursday, November 19, 2020 4:08 PM, Tsunakawa, Takayuki wrote:\n> > From: Andres Freund <andres@anarazel.de>\n> > > DropRelFileNodeBuffers() in recovery? The most common path is\n> > > DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers(),\n> > > which 3/4 doesn't address and 4/4 doesn't mention.\n> > >\n> > > 4/4 seems to address DropRelationFiles(), but only talks about\n> > TRUNCATE?\n> > \n> > Yes. DropRelationFiles() is used in the following two paths:\n> > \n> > [Replay of TRUNCATE during recovery]\n> > xact_redo_commit/abort() -> DropRelationFiles() -> smgrdounlinkall() ->\n> > DropRelFileNodesAllBuffers()\n> > \n> > [COMMIT/ROLLBACK PREPARED]\n> > FinishPreparedTransaction() -> DropRelationFiles() -> smgrdounlinkall()\n> > -> DropRelFileNodesAllBuffers()\n> \n> Yes. The concern is that it was not clear in the function descriptions and commit logs\n> what the optimizations for the DropRelFileNodeBuffers() and DropRelFileNodesAllBuffers()\n> are for. So I revised only the function description of DropRelFileNodeBuffers() and the\n> commit logs of the 0003-0004 patches. Please check if the brief descriptions would suffice.\n\nI read the commit message of 3/4. (Though this is not involved\nliterally in the final commit.)\n\n> While recovery, when WAL files of XLOG_SMGR_TRUNCATE from vacuum\n> or autovacuum are replayed, the buffers are dropped when the sizes\n> of all involved forks of a relation are already \"cached\". We can get\n\nThis sentence seems missing \"dropped by (or using) what\".\n\n> a reliable size of nblocks for supplied relation's fork at that time,\n> and it's safe because DropRelFileNodeBuffers() relies on the behavior\n> that cached nblocks will not be invalidated by file extension during\n> recovery. 
Otherwise, or if not in recovery, proceed to sequential\n> search of the whole buffer pool.\n\nThis sentence seems to involve some confusion. It reads as if \"we can rely\non it because we're relying on it\". And \"the cached value won't be\ninvalidated\" doesn't explain the reason precisely. The reason I think\nis that the cached value is guaranteed to be the maximum page we have\nin shared buffers at least while in recovery, and that guarantee is held\nby not asking fseek once we cached the value.\n\n> > > I also don't get why 4/4 would be a good idea on its own. It uses\n> > > BUF_DROP_FULL_SCAN_THRESHOLD to guard\n> > > FindAndDropRelFileNodeBuffers() on a per relation basis. But since\n> > > DropRelFileNodesAllBuffers() can be used for many relations at once,\n> > > this could end up doing BUF_DROP_FULL_SCAN_THRESHOLD - 1\n> > lookups a lot\n> > > of times, once for each of nnodes relations?\n> > \n> > So, the threshold value should be compared with the total number of blocks\n> > of all target relations, not each relation. You seem to be right, got it.\n> \n> Fixed this in 0004 patch. Now we compare the total number of buffers-to-be-invalidated\n> for ALL relations to the BUF_DROP_FULL_SCAN_THRESHOLD.\n\nI didn't see the previous version, but the row of additional\npalloc/pfree's in this version looks uneasy.\n\n\n \tint\t\t\ti,\n+\t\t\t\tj,\n+\t\t\t\t*nforks,\n \t\t\t\tn = 0;\n\nPerhaps we shouldn't define variables of different types at once.\n(I'm not sure about defining multiple variables at once.)\n\n\n@@ -3110,7 +3125,10 @@ DropRelFileNodesAllBuffers(RelFileNodeBackend *rnodes, int nnodes)\n \t\t\t\tDropRelFileNodeAllLocalBuffers(rnodes[i].node);\n \t\t}\n \t\telse\n+\t\t{\n+\t\t\trels[n] = smgr_reln[i];\n \t\t\tnodes[n++] = rnodes[i].node;\n+\t\t}\n \t}\n\nWe don't need to remember nodes and rnodes here since rnodes[n] is\nrels[n]->smgr_rnode here. 
Or we don't even need to store rels since\nwe can scan the smgr_reln later again.\n\nnodes is needed in the full-scan path but it is enough to collect it\nafter finding that we do full-scan.\n\n\n \t/*\n@@ -3120,6 +3138,68 @@ DropRelFileNodesAllBuffers(RelFileNodeBackend *rnodes, int nnodes)\n \tif (n == 0)\n \t{\n \t\tpfree(nodes);\n+\t\tpfree(rels);\n+\t\tpfree(rnodes);\n+\t\treturn;\n+\t}\n+\n+\tnforks = palloc(sizeof(int) * n);\n+\tforks = palloc(sizeof(ForkNumber *) * n);\n+\tblocks = palloc(sizeof(BlockNumber *) * n);\n+\tfirstDelBlocks = palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM + 1));\n+\tfor (i = 0; i < n; i++)\n+\t{\n+\t\tforks[i] = palloc(sizeof(ForkNumber) * (MAX_FORKNUM + 1));\n+\t\tblocks[i] = palloc(sizeof(BlockNumber) * (MAX_FORKNUM + 1));\n+\t}\n\nWe can allocate the whole array at once like this.\n\n BlockNumber (*blocks)[MAX_FORKNUM+1] =\n (BlockNumber (*)[MAX_FORKNUM+1])\n\t palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM + 1))\n\nThe elements of forks[][] and blocks[][] are not initialized because\nsome of the elements may be skipped due to the absence of the\ncorresponding fork.\n\n+\t\t\tif (!smgrexists(rels[i], j))\n+\t\t\t\tcontinue;\n+\n+\t\t\t/* Get the number of blocks for a relation's fork */\n+\t\t\tblocks[i][numForks] = smgrnblocks(rels[i], j, NULL);\n\nIf we see a fork whose size is not cached, we must give up this\noptimization for all target relations.\n\n+\t\t\tnBlocksToInvalidate += blocks[i][numForks];\n+\n+\t\t\tforks[i][numForks++] = j;\n\nWe can signal to the later code the absence of a fork by setting\nInvalidBlockNumber to blocks. Thus forks[], nforks and numForks can be\nremoved.\n\n+\t/* Zero the array of blocks because these will all be dropped anyway */\n+\tMemSet(firstDelBlocks, 0, sizeof(BlockNumber) * n * (MAX_FORKNUM + 1));\n\nWe don't need to prepare nforks, forks and firstDelBlocks for all\nrelations before looping over relations. 
In other words, we can fill\nin the arrays for a relation at every iteration of relations.\n\n+\t * We enter the optimization iff we are in recovery and the number of blocks to\n\nThis comment sticks out of 80 columns. (I'm not sure whether that\nconvention is still valid.)\n\n+\tif (InRecovery && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n\nWe don't need to check InRecovery here. DropRelFileNodeBuffers doesn't\ndo that.\n\n+\t\tfor (j = 0; j < n; j++)\n+\t\t{\n+\t\t\tFindAndDropRelFileNodeBuffers(nodes[j], forks[j], nforks[j],\n\ni is not used at this nesting level so we can use i here.\n\n\n\n> > > Also, how is 4/4 safe - this is outside of recovery too?\n> > \n> > It seems that DropRelFileNodesAllBuffers() should trigger the new\n> > optimization path only when InRecovery == true, because it intentionally\n> > doesn't check the \"accurate\" value returned from smgrnblocks().\n> \n> Fixed it in 0004 patch. Now we ensure that we only enter the optimization path\n> during recovery.\n\nIf the size of any of the target relations is not cached, we give up\nthe optimization altogether even while recovering. Or am I missing\nsomething?\n\n> > From: Amit Kapila <amit.kapila16@gmail.com>\n> > On Wed, Nov 18, 2020 at 11:43 PM Andres Freund <andres@anarazel.de>\n> > > I'm also worried about the cases where this could cause buffers left\n> > > in the buffer pool, without a crosscheck like Thomas' patch would\n> > > allow to add. Obviously other processes can dirty buffers in\n> > > hot_standby, so any leftover buffer could have bad consequences.\n> > >\n> > \n> > The problem can only arise if other processes extend the relation. The idea\n> > was that in recovery it extends relation by one process which helps to\n> > maintain the cache. Kirk seems to have done testing to cross-verify it by using\n> > his first patch (Prevent-invalidating-blocks-in-smgrextend-during). 
Which\n> > other crosscheck you are referring here?\n> > \n> > I agree that we can do a better job by expanding comments to clearly state\n> > why it is safe.\n> \n> Yes, basically what Amit-san also mentioned above. The first patch prevents that.\n> And in the description of DropRelFileNodeBuffers in the 0003 patch, please check\n> If that would suffice.\n\n+ *\t\tWhile in recovery, if the expected maximum number of buffers to be\n+ *\t\tdropped is small enough and the sizes of all involved forks are\n+ *\t\talready cached, individual buffer is located by BufTableLookup().\n+ *\t\tIt is safe because cached blocks will not be invalidated by file\n+ *\t\textension during recovery. See smgrnblocks() and smgrextend() for\n+ *\t\tmore details. Otherwise, if the conditions for optimization are not\n+ *\t\tmet, the buffer pool is sequentially scanned so that no buffers are\n+ *\t\tleft behind.\n\nI'm not confident on it, but it seems somewhat obscure. How about\nsomething like this?\n\nWe mustn't leave a buffer for the relations to be dropped. We\ninvalidate buffer blocks by locating using BufTableLookup() when we\nassure that we know up to what page of every fork we possiblly have a\nbuffer for. We can know that by the \"cached\" flag returned by\nsmgrblocks. It currently gets true only while recovery. See\nsmgrnblocks() and smgrextend(). Otherwise we scan the whole buffer\npool to find buffers for the relation, which is slower when a small\npart of buffers are to be dropped.\n\n> > > Smaller comment:\n> > >\n> > > +static void\n> > > +FindAndDropRelFileNodeBuffers(RelFileNode rnode, ForkNumber\n> > *forkNum,\n> > > int nforks,\n> > > +\t\t\t\t\t\t\t BlockNumber\n> > > *nForkBlocks, BlockNumber *firstDelBlock) ...\n> > > +\t\t\t/* Check that it is in the buffer pool. 
If not, do\n> > nothing.\n> > > */\n> > > +\t\t\tLWLockAcquire(bufPartitionLock, LW_SHARED);\n> > > +\t\t\tbuf_id = BufTableLookup(&bufTag, bufHash);\n> > > ...\n> > > +\t\t\tbufHdr = GetBufferDescriptor(buf_id);\n> > > +\n> > > +\t\t\tbuf_state = LockBufHdr(bufHdr);\n> > > +\n> > > +\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode)\n> > &&\n> > > +\t\t\t\tbufHdr->tag.forkNum == forkNum[i] &&\n> > > +\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[i])\n> > > +\t\t\t\tInvalidateBuffer(bufHdr);\t/* releases\n> > > spinlock */\n> > > +\t\t\telse\n> > > +\t\t\t\tUnlockBufHdr(bufHdr, buf_state);\n> > >\n> > > I'm a bit confused about the check here. We hold a buffer partition\n> > > lock, and have done a lookup in the mapping table. Why are we then\n> > > rechecking the relfilenode/fork/blocknum? And why are we doing so\n> > > holding the buffer header lock, which is essentially a spinlock, so\n> > > should only ever be held for very short portions?\n> > >\n> > > This looks like it's copying logic from DropRelFileNodeBuffers() etc,\n> > > but there the situation is different: We haven't done a buffer mapping\n> > > lookup, and we don't hold a partition lock!\n> > \n> > That's because the buffer partition lock is released immediately after the hash\n> > table has been looked up. As an aside, InvalidateBuffer() requires the caller\n> > to hold the buffer header spinlock and doesn't hold the buffer partition lock.\n> \n> Yes. Holding the buffer header spinlock is necessary to invalidate the buffers.\n> As for buffer mapping partition lock, as mentioned by Tsunakawa-san, it is\n> released immediately after BufTableLookup, which is similar to lookup done in\n> PrefetchSharedBuffer. So I retained these changes.\n> \n> I have attached the updated patches. Aside from descriptions, no other major\n> changes in the patch set except 0004. Feedbacks are welcome.\n\nFWIW, as Tsunakawa-san mentioned, the partition lock is released\nimmediately after the look-up. 
The reason that we may release the\npartition lock immediately is that it is OK that the buffer has been\nevicted by someone to reuse it for other relations. We can detect that\ncase by rechecking the buffer tag while holding the header lock.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 26 Nov 2020 16:18:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 26 Nov 2020 16:18:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> +\t/* Zero the array of blocks because these will all be dropped anyway */\n> +\tMemSet(firstDelBlocks, 0, sizeof(BlockNumber) * n * (MAX_FORKNUM + 1));\n> \n> We don't need to prepare nforks, forks and firstDelBlocks for all\n> relations before looping over relations. In other words, we can fill\n> in the arrays for a relation at every iteration of relations.\n\nOr even we could call FindAndDropRelFileNodeBuffers() for each\nfork. It doesn't matter from the performance perspective whether the\nfunction loops over forks or the function is called for each fork.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 26 Nov 2020 16:40:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Hello, Kirk. Thank you for the new version.\n\nHi, Horiguchi-san. 
Thank you for your very helpful feedback.\nI'm updating the patches addressing those.\n\n> +\t\t\tif (!smgrexists(rels[i], j))\n> +\t\t\t\tcontinue;\n> +\n> +\t\t\t/* Get the number of blocks for a relation's fork */\n> +\t\t\tblocks[i][numForks] = smgrnblocks(rels[i], j,\n> NULL);\n> \n> If we see a fork which its size is not cached we must give up this optimization\n> for all target relations.\n\nI did not use the \"cached\" flag in DropRelFileNodesAllBuffers and use InRecovery\nwhen deciding for optimization because of the following reasons:\nXLogReadBufferExtended() calls smgrnblocks() to apply changes to relation page\ncontents. So in DropRelFileNodeBuffers(), XLogReadBufferExtended() is called\nduring VACUUM replay because VACUUM changes the page content.\nOTOH, TRUNCATE doesn't change the relation content, it just truncates relation pages\nwithout changing the page contents. So XLogReadBufferExtended() is not called, and\nthe \"cached\" flag will always return false. I tested with \"cached\" flags before, and this\nalways return false, at least in DropRelFileNodesAllBuffers. Due to this, we cannot use\nthe cached flag in DropRelFileNodesAllBuffers(). However, I think we can still rely on\nsmgrnblocks to get the file size as long as we're InRecovery. That cached nblocks is still\nguaranteed to be the maximum in the shared buffer.\nThoughts?\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Fri, 27 Nov 2020 02:19:57 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Fri, 27 Nov 2020 02:19:57 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in \n> > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > Hello, Kirk. Thank you for the new version.\n> \n> Hi, Horiguchi-san. 
Thank you for your very helpful feedback.\n> I'm updating the patches addressing those.\n> \n> > +\t\t\tif (!smgrexists(rels[i], j))\n> > +\t\t\t\tcontinue;\n> > +\n> > +\t\t\t/* Get the number of blocks for a relation's fork */\n> > +\t\t\tblocks[i][numForks] = smgrnblocks(rels[i], j,\n> > NULL);\n> > \n> > If we see a fork which its size is not cached we must give up this optimization\n> > for all target relations.\n> \n> I did not use the \"cached\" flag in DropRelFileNodesAllBuffers and use InRecovery\n> when deciding for optimization because of the following reasons:\n> XLogReadBufferExtended() calls smgrnblocks() to apply changes to relation page\n> contents. So in DropRelFileNodeBuffers(), XLogReadBufferExtended() is called\n> during VACUUM replay because VACUUM changes the page content.\n> OTOH, TRUNCATE doesn't change the relation content, it just truncates relation pages\n> without changing the page contents. So XLogReadBufferExtended() is not called, and\n> the \"cached\" flag will always return false. I tested with \"cached\" flags before, and this\n\nA bit different from the point, but if some tuples have been inserted\nto the truncated table, XLogReadBufferExtended() is called for the\ntable and the length is cached.\n\n> always return false, at least in DropRelFileNodesAllBuffers. Due to this, we cannot use\n> the cached flag in DropRelFileNodesAllBuffers(). However, I think we can still rely on\n> smgrnblocks to get the file size as long as we're InRecovery. That cached nblocks is still\n> guaranteed to be the maximum in the shared buffer.\n> Thoughts?\n\nThat means that we always think as if smgrnblocks returns \"cached\" (or\n\"safe\") value during recovery, which is out of our current\nconsensus. 
If we go on that side, we don't need to consult the\n\"cached\" returned from smgrnblocks at all and it's enough to see only\nInRecovery.\n\nI got confused.\n\nWe are relying on the \"fact\" that the first lseek() call of a\n(startup) process tells the truth. We added an assertion so that we\nmake sure that the cached value won't be cleared during recovery. A\npossible remaining danger would be closing of an smgr object of a live\nrelation just after a file extension failure. I think we are thinking\nthat that doesn't happen during recovery. Although it seems to me\ntrue, I'm not confident.\n\nIf that's true, we don't even need to look at the \"cached\" flag at all\nand always be able to rely on the returned value from smgrnblocks()\nduring recovery. Otherwise, we need to avoid the dangerous situation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 27 Nov 2020 15:06:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi!\n\nI've found this patch is marked RFC in the commitfest application. I've quickly\nchecked if it's really ready for commit. It seems there are still\nunaddressed review notes. I'm going to switch it to WFA.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 27 Nov 2020 11:30:22 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> We are relying on the \"fact\" that the first lseek() call of a\n> (startup) process tells the truth. We added an assertion so that we\n> make sure that the cached value won't be cleared during recovery. A\n> possible remaining danger would be closing of an smgr object of a live\n> relation just after a file extension failure. 
I think we are thinking\n> that that doesn't happen during recovery. Although it seems to me\n> true, I'm not confident.\n> \n> If that's true, we don't even need to look at the \"cached\" flag at all\n> and always be able to rely on the returned value from msgrnblocks()\n> during recovery. Otherwise, we need to avoid the danger situation.\n\nHmm, I've gotten to think that smgrnblocks() doesn't need the cached parameter, too. DropRel*Buffers() can just check InRecovery. Regarding the only concern about smgrclose() by startup process, I was afraid of the cache invalidation by CacheInvalidateSmgr(), but startup process doesn't receive shared inval messages. So, it doesn't call smgrclose*() due to shared cache invalidation.\n\n[InitRecoveryTransactionEnvironment()]\n /* Initialize shared invalidation management for Startup process, being\n * Initialize shared invalidation management for Startup process, being\n * careful to register ourselves as a sendOnly process so we don't need to\n * read messages, nor will we get signaled when the queue starts filling\n * up.\n */\n SharedInvalBackendInit(true);\n\n\nKirk-san,\nCan you check to see if smgrclose() and its friends are not called during recovery using the regression test?\n\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Tue, 1 Dec 2020 02:46:07 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, November 26, 2020 4:19 PM, Horiguchi-san wrote:\n> Hello, Kirk. Thank you for the new version.\n\nApologies for the delay, but attached are the updated versions to simplify the patches.\nThe changes reflected most of your comments/suggestions.\n\nSummary of changes in the latest versions.\n1. Updated the function description of DropRelFileNodeBuffers in 0003.\n2. 
Updated the commit logs of 0003 and 0004.\n3, FindAndDropRelFileNodeBuffers is now called for each relation fork,\n instead of for all involved forks.\n4. Removed the unnecessary palloc() and subscripts like forks[][],\n firstDelBlock[], nforks, as advised by Horiguchi-san. The memory\n allocation for block[][] was also simplified.\n So 0004 became simpler and more readable.\n\n\n> At Thu, 26 Nov 2020 03:04:10 +0000, \"k.jamison@fujitsu.com\"\n> <k.jamison@fujitsu.com> wrote in\n> > On Thursday, November 19, 2020 4:08 PM, Tsunakawa, Takayuki wrote:\n> > > From: Andres Freund <andres@anarazel.de>\n> > > > DropRelFileNodeBuffers() in recovery? The most common path is\n> > > > DropRelationFiles()->smgrdounlinkall()->DropRelFileNodesAllBuffers\n> > > > (), which 3/4 doesn't address and 4/4 doesn't mention.\n> > > >\n> > > > 4/4 seems to address DropRelationFiles(), but only talks about\n> > > TRUNCATE?\n> > >\n> > > Yes. DropRelationFiles() is used in the following two paths:\n> > >\n> > > [Replay of TRUNCATE during recovery]\n> > > xact_redo_commit/abort() -> DropRelationFiles() ->\n> > > smgrdounlinkall() ->\n> > > DropRelFileNodesAllBuffers()\n> > >\n> > > [COMMIT/ROLLBACK PREPARED]\n> > > FinishPreparedTransaction() -> DropRelationFiles() ->\n> > > smgrdounlinkall()\n> > > -> DropRelFileNodesAllBuffers()\n> >\n> > Yes. The concern is that it was not clear in the function descriptions\n> > and commit logs what the optimizations for the\n> > DropRelFileNodeBuffers() and DropRelFileNodesAllBuffers() are for. So\n> > I revised only the function description of DropRelFileNodeBuffers() and the\n> commit logs of the 0003-0004 patches. Please check if the brief descriptions\n> would suffice.\n> \n> I read the commit message of 3/4. 
(Though this is not involved literally in the\n> final commit.)\n> \n> > While recovery, when WAL files of XLOG_SMGR_TRUNCATE from vacuum\n> or\n> > autovacuum are replayed, the buffers are dropped when the sizes of all\n> > involved forks of a relation are already \"cached\". We can get\n> \n> This sentence seems missing \"dropped by (or using) what\".\n> \n> > a reliable size of nblocks for supplied relation's fork at that time,\n> > and it's safe because DropRelFileNodeBuffers() relies on the behavior\n> > that cached nblocks will not be invalidated by file extension during\n> > recovery. Otherwise, or if not in recovery, proceed to sequential\n> > search of the whole buffer pool.\n> \n> This sentence seems involving confusion. It reads as if \"we can rely on it\n> because we're relying on it\". And \"the cached value won't be invalidated\"\n> doesn't explain the reason precisely. The reason I think is that the cached\n> value is guaranteed to be the maximum page we have in shared buffer at least\n> while recovery, and that guarantee is holded by not asking fseek once we\n> cached the value.\n\nFixed the commit log of 0003.\n\n> > > > I also don't get why 4/4 would be a good idea on its own. It uses\n> > > > BUF_DROP_FULL_SCAN_THRESHOLD to guard\n> > > > FindAndDropRelFileNodeBuffers() on a per relation basis. But since\n> > > > DropRelFileNodesAllBuffers() can be used for many relations at\n> > > > once, this could end up doing BUF_DROP_FULL_SCAN_THRESHOLD\n> - 1\n> > > lookups a lot\n> > > > of times, once for each of nnodes relations?\n> > >\n> > > So, the threshold value should be compared with the total number of\n> > > blocks of all target relations, not each relation. You seem to be right, got\n> it.\n> >\n> > Fixed this in 0004 patch. 
Now we compare the total number of\n> > buffers-to-be-invalidated For ALL relations to the\n> BUF_DROP_FULL_SCAN_THRESHOLD.\n> \n> I didn't see the previous version, but the row of additional palloc/pfree's in\n> this version looks uneasy.\n\nFixed this too.\n \n> \tint\t\t\ti,\n> +\t\t\t\tj,\n> +\t\t\t\t*nforks,\n> \t\t\t\tn = 0;\n> \n> Perhaps I think we don't define variable in different types at once.\n> (I'm not sure about defining multple variables at once.)\n\nFixed this too.\n\n> @@ -3110,7 +3125,10 @@ DropRelFileNodesAllBuffers(RelFileNodeBackend\n> *rnodes, int nnodes)\n> \n> \tDropRelFileNodeAllLocalBuffers(rnodes[i].node);\n> \t\t}\n> \t\telse\n> +\t\t{\n> +\t\t\trels[n] = smgr_reln[i];\n> \t\t\tnodes[n++] = rnodes[i].node;\n> +\t\t}\n> \t}\n> \n> We don't need to remember nodes and rnodes here since rnodes[n] is\n> rels[n]->smgr_rnode here. Or we don't even need to store rels since we can\n> scan the smgr_reln later again.\n> \n> nodes is needed in the full-scan path but it is enough to collect it after finding\n> that we do full-scan.\n\nI followed your advice and removed the rnodes[] and rels[].\nnodes[] is allocated later at full scan path.\n\n\n> +\tnforks = palloc(sizeof(int) * n);\n> +\tforks = palloc(sizeof(ForkNumber *) * n);\n> +\tblocks = palloc(sizeof(BlockNumber *) * n);\n> +\tfirstDelBlocks = palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM\n> + 1));\n> +\tfor (i = 0; i < n; i++)\n> +\t{\n> +\t\tforks[i] = palloc(sizeof(ForkNumber) * (MAX_FORKNUM +\n> 1));\n> +\t\tblocks[i] = palloc(sizeof(BlockNumber) * (MAX_FORKNUM\n> + 1));\n> +\t}\n> \n> We can allocate the whole array at once like this.\n> \n> BlockNumber (*blocks)[MAX_FORKNUM+1] =\n> (BlockNumber (*)[MAX_FORKNUM+1])\n> \t palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM + 1))\n\nThank you for suggesting to reduce the lines for the 2d dynamic memory alloc.\nI followed this way in 0004, but it's my first time to see it written this way.\nI am very glad it works, though is it okay to write it 
this way since I cannot find\na similar code of declaring and allocating 2D arrays like this in Postgres source code?\n\n> +\t\t\tnBlocksToInvalidate += blocks[i][numForks];\n> +\n> +\t\t\tforks[i][numForks++] = j;\n> \n> We can signal to the later code the absense of a fork by setting\n> InvalidBlockNumber to blocks. Thus forks[], nforks and numForks can be\n> removed.\n\nFollowed it in 0004.\n\n> +\t/* Zero the array of blocks because these will all be dropped anyway\n> */\n> +\tMemSet(firstDelBlocks, 0, sizeof(BlockNumber) * n *\n> (MAX_FORKNUM +\n> +1));\n> \n> We don't need to prepare nforks, forks and firstDelBlocks for all relations\n> before looping over relations. In other words, we can fill in the arrays for a\n> relation at every iteration of relations.\n\nFollowed your advice. Although I now drop the buffers per fork, which now\nremoves forks[][], nforks, firstDelBlocks[].\n \n> +\t * We enter the optimization iff we are in recovery and the number of\n> +blocks to\n> \n> This comment ticks out of 80 columns. (I'm not sure whether that convention\n> is still valid..)\n\nFixed.\n \n> +\tif (InRecovery && nBlocksToInvalidate <\n> BUF_DROP_FULL_SCAN_THRESHOLD)\n> \n> We don't need to check InRecovery here. DropRelFileNodeBuffers doesn't do\n> that.\n\n\nAs for DropRelFileNodesAllBuffers use case, I used InRecovery\nso that the optimization still works.\n Horiguchi-san also wrote in another mail:\n> A bit different from the point, but if some tuples have been inserted to the\n> truncated table, XLogReadBufferExtended() is called for the table and the\n> length is cached.\nI was wrong in my previous claim that the \"cached\" value always return false.\nWhen I checked the recovery test log from recovery tap test, there was only\none example when \"cached\" became true (script below) and entered the\noptimization path. 
However, in all other cases including the TRUNCATE test case\nin my patch, the \"cached\" flag returns \"false\".\n\n\"cached\" flag became true:\n\t# in different subtransaction patterns\n\t$node->safe_psql(\n\t\t'postgres', \"\n\t\tBEGIN;\n\t\tCREATE TABLE spc_commit (id serial PRIMARY KEY, id2 int);\n\t\tINSERT INTO spc_commit VALUES (DEFAULT, generate_series(1,3000));\n\t\tTRUNCATE spc_commit;\n\t\tSAVEPOINT s; ALTER TABLE spc_commit SET TABLESPACE other; RELEASE s;\n\t\tCOPY spc_commit FROM '$copy_file' DELIMITER ',';\n\t\tCOMMIT;\");\n\t$node->stop('immediate');\n\t$node->start;\n\nSo I used the InRecovery for the optimization case of DropRelFileNodesAllBuffers.\nI retained the smgrnblocks' \"cached\" parameter as it is useful in\nDropRelFileNodeBuffers.\n\n\n> > > I agree that we can do a better job by expanding comments to clearly\n> > > state why it is safe.\n> >\n> > Yes, basically what Amit-san also mentioned above. The first patch\n> prevents that.\n> > And in the description of DropRelFileNodeBuffers in the 0003 patch,\n> > please check If that would suffice.\n> \n> + *\t\tWhile in recovery, if the expected maximum number of\n> buffers to be\n> + *\t\tdropped is small enough and the sizes of all involved forks\n> are\n> + *\t\talready cached, individual buffer is located by\n> BufTableLookup().\n> + *\t\tIt is safe because cached blocks will not be invalidated by file\n> + *\t\textension during recovery. See smgrnblocks() and\n> smgrextend() for\n> + *\t\tmore details. Otherwise, if the conditions for optimization are\n> not\n> + *\t\tmet, the buffer pool is sequentially scanned so that no\n> buffers are\n> + *\t\tleft behind.\n> \n> I'm not confident on it, but it seems somewhat obscure. How about\n> something like this?\n> \n> We mustn't leave a buffer for the relations to be dropped. We invalidate\n> buffer blocks by locating using BufTableLookup() when we assure that we\n> know up to what page of every fork we possiblly have a buffer for. 
We can\n> know that by the \"cached\" flag returned by smgrnblocks. It currently gets true\n> only while recovery. See\n> smgrnblocks() and smgrextend(). Otherwise we scan the whole buffer pool to\n> find buffers for the relation, which is slower when a small part of buffers are\n> to be dropped.\n\nFollowed your advice and modified it a bit.\n\nI have changed the status to \"Needs Review\".\nFeedback is always welcome.\n\nRegards,\nKirk Jamison", "msg_date": "Thu, 3 Dec 2020 03:49:27 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> Apologies for the delay, but attached are the updated versions to simplify the\n> patches.\n\nLooks good to me. Thanks to Horiguchi-san and Andres-san, the code became more compact and easier to read. I've marked this ready for committer.\n\n\nTo the committer:\nI don't think it's necessary to refer to COMMIT/ROLLBACK PREPARED in the following part of the 0003 commit message. They surely call DropRelFileNodesAllBuffers(), but COMMIT/ROLLBACK also call it.\n\nthe full scan threshold. This improves the DropRelationFiles()\nperformance when the TRUNCATE command truncated off any of the empty\npages at the end of relation, and when dropping relation buffers if a\ncommit/rollback transaction has been prepared in FinishPreparedTransaction().\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 3 Dec 2020 07:18:16 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hello, Kirk\n\nThanks for providing the new patches.\nI did the recovery performance test on them, and the results look good. I'd like to share them with you and everyone else. 
\n(I also record VACUUM and TRUNCATE execution time on master/primary in case you want to have a look.) \n\n1. VACUUM and Failover test results(average of 15 times) \n[VACUUM] ---execution time on master/primary\nshared_buffers master(sec) patched(sec) %reg=((patched-master)/master)\n--------------------------------------------------------------------------------------\n128M 9.440 9.483 0%\n10G 74.689 76.219 2%\n20G 152.538 138.292 -9%\n\n[Failover] ---execution time on standby\nshared_buffers master(sec) patched(sec) %reg=((patched-master)/master)\n--------------------------------------------------------------------------------------\n128M 3.629 2.961 -18%\n10G 82.443 2.627 -97%\n20G 171.388 2.607 -98%\n\n2. TRUNCATE and Failover test results(average of 15 times) \n[TRUNCATE] ---execution time on master/primary\nshared_buffers master(sec) patched(sec) %reg=((patched-master)/master)\n--------------------------------------------------------------------------------------\n128M 49.271 49.867 1%\n10G 172.437 175.197 2%\n20G 279.658 278.752 0%\n\n[Failover] ---execution time on standby\nshared_buffers master(sec) patched(sec) %reg=((patched-master)/master)\n--------------------------------------------------------------------------------------\n128M 4.877 3.989 -18%\n10G 92.680 3.975 -96%\n20G 182.035 3.962 -98% \n\n[Machine spec]\nCPU : 40 processors (Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz)\nMemory: 64G\nOS: CentOS 8\n\n[Failover test data]\nTotal table Size: 700M\nTable: 10000 tables (1000 rows per table)\n\nIf you have question on my test, please let me know.\n\nRegards,\nTang\n\n\n\n\n", "msg_date": "Fri, 4 Dec 2020 03:42:19 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Thanks for the new version.\n\nThis contains only replies. 
I'll send some further comments in another\nmail later.\n\t \nAt Thu, 3 Dec 2020 03:49:27 +0000, "k.jamison@fujitsu.com" <k.jamison@fujitsu.com> wrote in \n> On Thursday, November 26, 2020 4:19 PM, Horiguchi-san wrote:\n> > Hello, Kirk. Thank you for the new version.\n> \n> Apologies for the delay, but attached are the updated versions to simplify the patches.\n> The changes reflected most of your comments/suggestions.\n> \n> Summary of changes in the latest versions.\n> 1. Updated the function description of DropRelFileNodeBuffers in 0003.\n> 2. Updated the commit logs of 0003 and 0004.\n> 3. FindAndDropRelFileNodeBuffers is now called for each relation fork,\n> instead of for all involved forks.\n> 4. Removed the unnecessary palloc() and subscripts like forks[][],\n> firstDelBlock[], nforks, as advised by Horiguchi-san. The memory\n> allocation for block[][] was also simplified.\n> So 0004 became simpler and more readable.\n...\n> > > a reliable size of nblocks for supplied relation's fork at that time,\n> > > and it's safe because DropRelFileNodeBuffers() relies on the behavior\n> > > that cached nblocks will not be invalidated by file extension during\n> > > recovery. Otherwise, or if not in recovery, proceed to sequential\n> > > search of the whole buffer pool.\n> > \n> > This sentence seems to involve some confusion. It reads as if \"we can rely on it\n> > because we're relying on it\". And \"the cached value won't be invalidated\"\n> > doesn't explain the reason precisely. 
The reason I think is that the cached\n> > value is guaranteed to be the maximum page we have in shared buffer at least\n> > while in recovery, and that guarantee is held by not asking fseek once we\n> > cached the value.\n> \n> Fixed the commit log of 0003.\n\nThanks!\n\n...\n> > +\tnforks = palloc(sizeof(int) * n);\n> > +\tforks = palloc(sizeof(ForkNumber *) * n);\n> > +\tblocks = palloc(sizeof(BlockNumber *) * n);\n> > +\tfirstDelBlocks = palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM\n> > + 1));\n> > +\tfor (i = 0; i < n; i++)\n> > +\t{\n> > +\t\tforks[i] = palloc(sizeof(ForkNumber) * (MAX_FORKNUM +\n> > 1));\n> > +\t\tblocks[i] = palloc(sizeof(BlockNumber) * (MAX_FORKNUM\n> > + 1));\n> > +\t}\n> > \n> > We can allocate the whole array at once like this.\n> > \n> > BlockNumber (*blocks)[MAX_FORKNUM+1] =\n> > (BlockNumber (*)[MAX_FORKNUM+1])\n> > \t palloc(sizeof(BlockNumber) * n * (MAX_FORKNUM + 1))\n> \n> Thank you for suggesting how to reduce the lines for the 2D dynamic memory allocation.\n> I followed this way in 0004, but it's the first time I have seen it written this way.\n> I am very glad it works, though is it okay to write it this way since I cannot find\n> similar code declaring and allocating 2D arrays like this in the Postgres source code?\n\nActually it would be somewhat novel for a certain portion of people,\nbut it is fundamentally the same as function pointers. Hard to make\nit from scratch, but I suppose not so hard to read:)\n\nint (*func_char_to_int)(char x) = some_func;\n\nFWIW isn.c has the following part:\n\n> static bool\n> check_table(const char *(*TABLE)[2], const unsigned TABLE_index[10][2])\n\n\n> > +\t\t\tnBlocksToInvalidate += blocks[i][numForks];\n> > +\n> > +\t\t\tforks[i][numForks++] = j;\n> > \n> > We can signal to the later code the absence of a fork by setting\n> > InvalidBlockNumber to blocks. 
Thus forks[], nforks and numForks can be\n> > removed.\n> \n> Followed it in 0004.\n\nLooks fine to me, thanks.\n\n> > +\t/* Zero the array of blocks because these will all be dropped anyway\n> > */\n> > +\tMemSet(firstDelBlocks, 0, sizeof(BlockNumber) * n *\n> > (MAX_FORKNUM +\n> > +1));\n> > \n> > We don't need to prepare nforks, forks and firstDelBlocks for all relations\n> > before looping over relations. In other words, we can fill in the arrays for a\n> > relation at every iteration of relations.\n> \n> Followed your advice. Although I now drop the buffers per fork, which now\n> removes forks[][], nforks, firstDelBlocks[].\n\nThat's fine with me.\n\n> > +\t * We enter the optimization iff we are in recovery and the number of\n> > +blocks to\n> > \n> > This comment sticks out past 80 columns. (I'm not sure whether that convention\n> > is still valid..)\n> \n> Fixed.\n> \n> > +\tif (InRecovery && nBlocksToInvalidate <\n> > BUF_DROP_FULL_SCAN_THRESHOLD)\n> > \n> > We don't need to check InRecovery here. DropRelFileNodeBuffers doesn't do\n> > that.\n> \n> \n> As for DropRelFileNodesAllBuffers use case, I used InRecovery\n> so that the optimization still works.\n> Horiguchi-san also wrote in another mail:\n> > A bit different from the point, but if some tuples have been inserted to the\n> > truncated table, XLogReadBufferExtended() is called for the table and the\n> > length is cached.\n> I was wrong in my previous claim that the \"cached\" value always returns false.\n> When I checked the recovery test log from the recovery TAP test, there was only\n> one example when \"cached\" became true (script below) and entered the\n> optimization path. However, in all other cases including the TRUNCATE test case\n> in my patch, the \"cached\" flag returns \"false\".\n\nYeah, I agree that smgrnblocks returns false in the targeted cases,\nso some amendment is needed. 
We need to discuss this point further.\n\n> \"cached\" flag became true:\n> \t# in different subtransaction patterns\n> \t$node->safe_psql(\n> \t\t'postgres', \"\n> \t\tBEGIN;\n> \t\tCREATE TABLE spc_commit (id serial PRIMARY KEY, id2 int);\n> \t\tINSERT INTO spc_commit VALUES (DEFAULT, generate_series(1,3000));\n> \t\tTRUNCATE spc_commit;\n> \t\tSAVEPOINT s; ALTER TABLE spc_commit SET TABLESPACE other; RELEASE s;\n> \t\tCOPY spc_commit FROM '$copy_file' DELIMITER ',';\n> \t\tCOMMIT;\");\n> \t$node->stop('immediate');\n> \t$node->start;\n> \n> So I used InRecovery for the optimization case of DropRelFileNodesAllBuffers.\n> I retained the smgrnblocks' \"cached\" parameter as it is useful in\n> DropRelFileNodeBuffers.\n\nI think that's OK for this version of the patch.\n\n> > > > I agree that we can do a better job by expanding comments to clearly\n> > > > state why it is safe.\n> > >\n> > > Yes, basically what Amit-san also mentioned above. The first patch\n> > prevents that.\n> > > And in the description of DropRelFileNodeBuffers in the 0003 patch,\n> > > please check if that would suffice.\n> > \n> > + *\t\tWhile in recovery, if the expected maximum number of\n> > buffers to be\n> > + *\t\tdropped is small enough and the sizes of all involved forks\n> > are\n> > + *\t\talready cached, individual buffers are located by\n> > BufTableLookup().\n> > + *\t\tIt is safe because cached blocks will not be invalidated by file\n> > + *\t\textension during recovery. See smgrnblocks() and\n> > smgrextend() for\n> > + *\t\tmore details. Otherwise, if the conditions for optimization are\n> > not\n> > + *\t\tmet, the buffer pool is sequentially scanned so that no\n> > buffers are\n> > + *\t\tleft behind.\n> > \n> > I'm not confident in it, but it seems somewhat obscure. How about\n> > something like this?\n> > \n> > We mustn't leave a buffer for the relations to be dropped. 
We invalidate\n> > buffer blocks by locating using BufTableLookup() when we assure that we\n> > know up to what page of every fork we possiblly have a buffer for. We can\n> > know that by the \"cached\" flag returned by smgrblocks. It currently gets true\n> > only while recovery. See\n> > smgrnblocks() and smgrextend(). Otherwise we scan the whole buffer pool to\n> > find buffers for the relation, which is slower when a small part of buffers are\n> > to be dropped.\n> \n> Followed your advice and modified it a bit.\n> \n> I have changed the status to \"Needs Review\".\n> Feedbacks are always welcome.\n> \n> Regards,\n> Kirk Jamison\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 04 Dec 2020 14:04:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 3 Dec 2020 07:18:16 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > Apologies for the delay, but attached are the updated versions to simplify the\n> > patches.\n> \n> Looks good for me. Thanks to Horiguchi-san and Andres-san, the code bebecame further compact and easier to read. I've marked this ready for committer.\n> \n> \n> To the committer:\n> I don't think it's necessary to refer to COMMIT/ROLLBACK PREPARED in the following part of the 0003 commit message. They surely call DropRelFileNodesAllBuffers(), but COMMIT/ROLLBACK also call it.\n> \n> the full scan threshold. This improves the DropRelationFiles()\n> performance when the TRUNCATE command truncated off any of the empty\n> pages at the end of relation, and when dropping relation buffers if a\n> commit/rollback transaction has been prepared in FinishPreparedTransaction().\n\nI think whether we can use this optimization only by looking\nInRecovery is still in doubt. 
Or, if we can decide that on that\ncriterion, 0003 can also be simplified using the same assumption.\n\n\nSeparate from the maybe-remaining discussion, I have a comment on the\nrevised code in 0004.\n\n+\t\t * equal to the full scan threshold.\n+\t\t */\n+\t\tif (nBlocksToInvalidate >= BUF_DROP_FULL_SCAN_THRESHOLD)\n+\t\t{\n+\t\t\tpfree(block);\n+\t\t\tgoto buffer_full_scan;\n+\t\t}\n\nI don't particularly hate goto statements, but we can easily avoid this one\nby reversing the condition here. You might consider the length of the\nline calling \"FindAndDropRelFileNodeBuffers\" but the indentation can\nbe lowered by inverting the condition on BlockNumberIsValid.\n\n!| if (nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n | {\n | \tfor (i = 0; i < n; i++)\n | \t{\n | \t\t/*\n | \t\t * If block to drop is valid, drop the buffers of the fork.\n | \t\t * Zero the firstDelBlock because all buffers will be\n | \t\t * dropped anyway.\n | \t\t */\n | \t\tfor (j = 0; j <= MAX_FORKNUM; j++)\n | \t\t{\n!| \t\t\tif (!BlockNumberIsValid(block[i][j]))\n!| \t\t\t\tcontinue;\n | \n | \t\t\tFindAndDropRelFileNodeBuffers(smgr_reln[i]->smgr_rnode.node,\n | \t\t\t\t\t\t\t\t\t\t j, block[i][j], 0);\n | \t\t}\n | \t}\n | \tpfree(block);\n | \treturn;\n | }\n | \n | pfree(block);\n\nOr we can separate the calculation part and the execution part by\nintroducing a flag \"do_fullscan\".\n\n |\t/*\n |\t * We enter the optimization iff we are in recovery. 
Otherwise,\n |\t * we proceed to full scan of the whole buffer pool.\n |\t */\n |\tif (InRecovery)\n |\t{\n...\n!| \t\tif (nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\n!|\t\t\tdo_fullscan = false;\n!|\t}\n!|\n!|\tif (!do_fullscan)\n!|\t{\n |\t\tfor (i = 0; i < n; i++)\n |\t\t{\n |\t\t\t/*\n |\t\t\t * If block to drop is valid, drop the buffers of the fork.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 04 Dec 2020 14:28:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, December 4, 2020 12:42 PM, Tang, Haiying wrote:\n> Hello, Kirk\n> \n> Thanks for providing the new patches.\n> I did the recovery performance test on them, the results look good. I'd like to\n> share them with you and everyone else.\n> (I also record VACUUM and TRUNCATE execution time on master/primary in\n> case you want to have a look.)\n\nHi, Tang.\nThank you very much for verifying the performance using the latest set of patches.\nAlthough it's not supposed to affect the non-recovery path (execution on primary),\nIt's good to see those results too.\n\n> 1. VACUUM and Failover test results(average of 15 times) [VACUUM]\n> ---execution time on master/primary\n> shared_buffers master(sec)\n> patched(sec) %reg=((patched-master)/master)\n> -------------------------------------------------------------------------------------\n> -\n> 128M 9.440 9.483 0%\n> 10G 74.689 76.219 2%\n> 20G 152.538 138.292 -9%\n> \n> [Failover] ---execution time on standby\n> shared_buffers master(sec)\n> patched(sec) %reg=((patched-master)/master)\n> -------------------------------------------------------------------------------------\n> -\n> 128M 3.629 2.961 -18%\n> 10G 82.443 2.627 -97%\n> 20G 171.388 2.607 -98%\n> \n> 2. 
TRUNCATE and Failover test results(average of 15 times) [TRUNCATE]\n> ---execution time on master/primary\n> shared_buffers master(sec)\n> patched(sec) %reg=((patched-master)/master)\n> -------------------------------------------------------------------------------------\n> -\n> 128M 49.271 49.867 1%\n> 10G 172.437 175.197 2%\n> 20G 279.658 278.752 0%\n> \n> [Failover] ---execution time on standby\n> shared_buffers master(sec)\n> patched(sec) %reg=((patched-master)/master)\n> -------------------------------------------------------------------------------------\n> -\n> 128M 4.877 3.989 -18%\n> 10G 92.680 3.975 -96%\n> 20G 182.035 3.962 -98%\n> \n> [Machine spec]\n> CPU : 40 processors (Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz)\n> Memory: 64G\n> OS: CentOS 8\n> \n> [Failover test data]\n> Total table Size: 700M\n> Table: 10000 tables (1000 rows per table)\n> \n> If you have question on my test, please let me know.\n\nLooks great.\nThat was helpful to see if there were any performance differences than the previous\nversions' results. But I am glad it turned out great too.\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Fri, 4 Dec 2020 07:05:50 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Nov 27, 2020 at 11:36 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 27 Nov 2020 02:19:57 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in\n> > > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > > Hello, Kirk. Thank you for the new version.\n> >\n> > Hi, Horiguchi-san. 
Thank you for your very helpful feedback.\n> > I'm updating the patches addressing those.\n> >\n> > > + if (!smgrexists(rels[i], j))\n> > > + continue;\n> > > +\n> > > + /* Get the number of blocks for a relation's fork */\n> > > + blocks[i][numForks] = smgrnblocks(rels[i], j,\n> > > NULL);\n> > >\n> > > If we see a fork which its size is not cached we must give up this optimization\n> > > for all target relations.\n> >\n> > I did not use the \"cached\" flag in DropRelFileNodesAllBuffers and use InRecovery\n> > when deciding for optimization because of the following reasons:\n> > XLogReadBufferExtended() calls smgrnblocks() to apply changes to relation page\n> > contents. So in DropRelFileNodeBuffers(), XLogReadBufferExtended() is called\n> > during VACUUM replay because VACUUM changes the page content.\n> > OTOH, TRUNCATE doesn't change the relation content, it just truncates relation pages\n> > without changing the page contents. So XLogReadBufferExtended() is not called, and\n> > the \"cached\" flag will always return false. I tested with \"cached\" flags before, and this\n>\n> A bit different from the point, but if some tuples have been inserted\n> to the truncated table, XLogReadBufferExtended() is called for the\n> table and the length is cached.\n>\n> > always return false, at least in DropRelFileNodesAllBuffers. Due to this, we cannot use\n> > the cached flag in DropRelFileNodesAllBuffers(). However, I think we can still rely on\n> > smgrnblocks to get the file size as long as we're InRecovery. That cached nblocks is still\n> > guaranteed to be the maximum in the shared buffer.\n> > Thoughts?\n>\n> That means that we always think as if smgrnblocks returns \"cached\" (or\n> \"safe\") value during recovery, which is out of our current\n> consensus. 
If we go on that side, we don't need to consult the\n> \"cached\" returned from smgrnblocks at all and it's enough to see only\n> InRecovery.\n>\n> I got confused..\n>\n> We are relying on the \"fact\" that the first lseek() call of a\n> (startup) process tells the truth. We added an assertion so that we\n> make sure that the cached value won't be cleared during recovery. A\n> possible remaining danger would be closing of an smgr object of a live\n> relation just after a file extension failure. I think we are thinking\n> that that doesn't happen during recovery. Although it seems to me\n> true, I'm not confident.\n>\n\nYeah, I also think it might not be worth depending upon whether smgr\nclose has been done before or not. I feel the current idea of using\n'cached' parameter is relatively solid and we should rely on that.\nAlso, which means that in DropRelFileNodesAllBuffers() we should rely\non the same, I think doing things differently in this regard will lead\nto confusion. I agree in some cases we might not get benefits but it\nis more important to be correct and keep the code consistent to avoid\nintroducing bugs now or in the future.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 4 Dec 2020 16:57:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, December 4, 2020 8:27 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Nov 27, 2020 at 11:36 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Fri, 27 Nov 2020 02:19:57 +0000, \"k.jamison@fujitsu.com\"\r\n> > <k.jamison@fujitsu.com> wrote in\r\n> > > > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Hello, Kirk.\r\n> > > > Thank you for the new version.\r\n> > >\r\n> > > Hi, Horiguchi-san. 
Thank you for your very helpful feedback.\r\n> > > I'm updating the patches addressing those.\r\n> > >\r\n> > > > + if (!smgrexists(rels[i], j))\r\n> > > > + continue;\r\n> > > > +\r\n> > > > + /* Get the number of blocks for a relation's fork */\r\n> > > > + blocks[i][numForks] = smgrnblocks(rels[i], j,\r\n> > > > NULL);\r\n> > > >\r\n> > > > If we see a fork which its size is not cached we must give up this\r\n> > > > optimization for all target relations.\r\n> > >\r\n> > > I did not use the \"cached\" flag in DropRelFileNodesAllBuffers and\r\n> > > use InRecovery when deciding for optimization because of the following\r\n> reasons:\r\n> > > XLogReadBufferExtended() calls smgrnblocks() to apply changes to\r\n> > > relation page contents. So in DropRelFileNodeBuffers(),\r\n> > > XLogReadBufferExtended() is called during VACUUM replay because\r\n> VACUUM changes the page content.\r\n> > > OTOH, TRUNCATE doesn't change the relation content, it just\r\n> > > truncates relation pages without changing the page contents. So\r\n> > > XLogReadBufferExtended() is not called, and the \"cached\" flag will\r\n> > > always return false. I tested with \"cached\" flags before, and this\r\n> >\r\n> > A bit different from the point, but if some tuples have been inserted\r\n> > to the truncated table, XLogReadBufferExtended() is called for the\r\n> > table and the length is cached.\r\n> >\r\n> > > always return false, at least in DropRelFileNodesAllBuffers. Due to\r\n> > > this, we cannot use the cached flag in DropRelFileNodesAllBuffers().\r\n> > > However, I think we can still rely on smgrnblocks to get the file\r\n> > > size as long as we're InRecovery. 
That cached nblocks is still guaranteed\r\n> to be the maximum in the shared buffer.\r\n> > > Thoughts?\r\n> >\r\n> > That means that we always think as if smgrnblocks returns \"cached\" (or\r\n> > \"safe\") value during recovery, which is out of our current consensus.\r\n> > If we go on that side, we don't need to consult the \"cached\" returned\r\n> > from smgrnblocks at all and it's enough to see only InRecovery.\r\n> >\r\n> > I got confused..\r\n> >\r\n> > We are relying on the \"fact\" that the first lseek() call of a\r\n> > (startup) process tells the truth. We added an assertion so that we\r\n> > make sure that the cached value won't be cleared during recovery. A\r\n> > possible remaining danger would be closing of an smgr object of a live\r\n> > relation just after a file extension failure. I think we are thinking\r\n> > that that doesn't happen during recovery. Although it seems to me\r\n> > true, I'm not confident.\r\n> >\r\n> \r\n> Yeah, I also think it might not be worth depending upon whether smgr close\r\n> has been done before or not. I feel the current idea of using 'cached'\r\n> parameter is relatively solid and we should rely on that.\r\n> Also, which means that in DropRelFileNodesAllBuffers() we should rely on\r\n> the same, I think doing things differently in this regard will lead to confusion. I\r\n> agree in some cases we might not get benefits but it is more important to be\r\n> correct and keep the code consistent to avoid introducing bugs now or in the\r\n> future.\r\n> \r\nHi, \r\nI have reported before that it is not always the case that the \"cached\" flag of\r\nsmgrnblocks() returns true. So when I checked the truncate test case used in my\r\npatch, it does not enter the optimization path despite doing INSERT before\r\ntruncation of the table.\r\nThe reason is that in TRUNCATE, a new RelFileNode is assigned\r\nto the relation when creating a new file. 
In recovery, XLogReadBufferExtended()\r\nalways opens the RelFileNode and calls smgrnblocks() for that RelFileNode for the\r\nfirst time. And for recovery processing, different RelFileNodes are used for the\r\nINSERTs to the table and TRUNCATE to the same table.\r\n\r\nAs we cannot use \"cached\" flag for both DropRelFileNodeBuffers() and\r\nDropRelFileNodesAllBuffers() based from above.\r\nI am thinking that if we want consistency, correctness, and to still make use of\r\nthe optimization, we can completely drop the \"cached\" flag parameter in smgrnblocks,\r\nand use InRecovery.\r\nTsunakawa-san mentioned in [1] that it is safe because smgrclose is not called\r\nby the startup process in recovery. Shared-inval messages are not sent to startup\r\nprocess.\r\n\r\nOtherwise, we use the current patch form as it is: using \"cached\" in\r\nDropRelFileNodeBuffers() and InRecovery for DropRelFileNodesAllBuffers().\r\nHowever, that does not seem to be what is wanted in this thread.\r\n\r\nThoughts?\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n[1] https://www.postgresql.org/message-id/TYAPR01MB2990B42570A5FAC349EE983AFEF40%40TYAPR01MB2990.jpnprd01.prod.outlook.com\r\n", "msg_date": "Mon, 7 Dec 2020 07:02:35 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Mon, Dec 7, 2020 at 12:32 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Friday, December 4, 2020 8:27 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Nov 27, 2020 at 11:36 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Fri, 27 Nov 2020 02:19:57 +0000, \"k.jamison@fujitsu.com\"\n> > > <k.jamison@fujitsu.com> wrote in\n> > > > > From: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Hello, Kirk.\n> > > > > Thank you for the new version.\n> > > >\n> > > > Hi, Horiguchi-san. 
Thank you for your very helpful feedback.\n> > > > I'm updating the patches addressing those.\n> > > >\n> > > > > + if (!smgrexists(rels[i], j))\n> > > > > + continue;\n> > > > > +\n> > > > > + /* Get the number of blocks for a relation's fork */\n> > > > > + blocks[i][numForks] = smgrnblocks(rels[i], j,\n> > > > > NULL);\n> > > > >\n> > > > > If we see a fork which its size is not cached we must give up this\n> > > > > optimization for all target relations.\n> > > >\n> > > > I did not use the \"cached\" flag in DropRelFileNodesAllBuffers and\n> > > > use InRecovery when deciding for optimization because of the following\n> > reasons:\n> > > > XLogReadBufferExtended() calls smgrnblocks() to apply changes to\n> > > > relation page contents. So in DropRelFileNodeBuffers(),\n> > > > XLogReadBufferExtended() is called during VACUUM replay because\n> > VACUUM changes the page content.\n> > > > OTOH, TRUNCATE doesn't change the relation content, it just\n> > > > truncates relation pages without changing the page contents. So\n> > > > XLogReadBufferExtended() is not called, and the \"cached\" flag will\n> > > > always return false. I tested with \"cached\" flags before, and this\n> > >\n> > > A bit different from the point, but if some tuples have been inserted\n> > > to the truncated table, XLogReadBufferExtended() is called for the\n> > > table and the length is cached.\n> > >\n> > > > always return false, at least in DropRelFileNodesAllBuffers. Due to\n> > > > this, we cannot use the cached flag in DropRelFileNodesAllBuffers().\n> > > > However, I think we can still rely on smgrnblocks to get the file\n> > > > size as long as we're InRecovery. 
That cached nblocks is still guaranteed\n> > to be the maximum in the shared buffer.\n> > > > Thoughts?\n> > >\n> > > That means that we always think as if smgrnblocks returns \"cached\" (or\n> > > \"safe\") value during recovery, which is out of our current consensus.\n> > > If we go on that side, we don't need to consult the \"cached\" returned\n> > > from smgrnblocks at all and it's enough to see only InRecovery.\n> > >\n> > > I got confused..\n> > >\n> > > We are relying on the \"fact\" that the first lseek() call of a\n> > > (startup) process tells the truth. We added an assertion so that we\n> > > make sure that the cached value won't be cleared during recovery. A\n> > > possible remaining danger would be closing of an smgr object of a live\n> > > relation just after a file extension failure. I think we are thinking\n> > > that that doesn't happen during recovery. Although it seems to me\n> > > true, I'm not confident.\n> > >\n> >\n> > Yeah, I also think it might not be worth depending upon whether smgr close\n> > has been done before or not. I feel the current idea of using 'cached'\n> > parameter is relatively solid and we should rely on that.\n> > Also, which means that in DropRelFileNodesAllBuffers() we should rely on\n> > the same, I think doing things differently in this regard will lead to confusion. I\n> > agree in some cases we might not get benefits but it is more important to be\n> > correct and keep the code consistent to avoid introducing bugs now or in the\n> > future.\n> >\n> Hi,\n> I have reported before that it is not always the case that the \"cached\" flag of\n> srnblocks() return true. So when I checked the truncate test case used in my\n> patch, it does not enter the optimization path despite doing INSERT before\n> truncation of table.\n> The reason for that is because in TRUNCATE, a new RelFileNode is assigned\n> to the relation when creating a new file. 
In recovery, XLogReadBufferExtended()\n> always opens the RelFileNode and calls smgrnblocks() for that RelFileNode for the\n> first time. And for recovery processing, different RelFileNodes are used for the\n> INSERTs to the table and TRUNCATE to the same table.\n>\n\nHmm, how is it possible if Insert is done before Truncate? The insert\nshould happen in old RelFileNode only. I have verified by adding a\nbreak-in (while (1), so that it stops there) heap_xlog_insert and\nDropRelFileNodesAllBuffers(), and both get the same (old) RelFileNode.\nHow have you verified what you are saying?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Dec 2020 17:18:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Dec 7, 2020 at 12:32 PM k.jamison@fujitsu.com\n> <k.jamison@fujitsu.com> wrote:\n> >\n> > On Friday, December 4, 2020 8:27 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Hi,\n> > I have reported before that it is not always the case that the \"cached\" flag of\n> > srnblocks() return true. So when I checked the truncate test case used in my\n> > patch, it does not enter the optimization path despite doing INSERT before\n> > truncation of table.\n> > The reason for that is because in TRUNCATE, a new RelFileNode is assigned\n> > to the relation when creating a new file. In recovery, XLogReadBufferExtended()\n> > always opens the RelFileNode and calls smgrnblocks() for that RelFileNode for the\n> > first time. And for recovery processing, different RelFileNodes are used for the\n> > INSERTs to the table and TRUNCATE to the same table.\n> >\n> \n> Hmm, how is it possible if Insert is done before Truncate? The insert\n> should happen in old RelFileNode only. 
I have verified by adding a\n> break-in (while (1), so that it stops there) heap_xlog_insert and\n> DropRelFileNodesAllBuffers(), and both get the same (old) RelFileNode.\n> How have you verified what you are saying?\n\nYou might be thinking of an in-transaction sequence of\nInsert-truncate. What *I* mentioned before is truncation of a relation\nthat smgrnblocks() has already been called for. The most common way\nto make that happen is INSERTs *before* the truncating transaction\nstarts. It may be a SELECT on a hot standby. Sorry for the confusing\nexpression.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 08 Dec 2020 09:45:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> > Hmm, how is it possible if Insert is done before Truncate? The insert\n> > should happen in old RelFileNode only. I have verified by adding a\n> > break-in (while (1), so that it stops there) heap_xlog_insert and\n> > DropRelFileNodesAllBuffers(), and both get the same (old) RelFileNode.\n> > How have you verified what you are saying?\n> \n> You might be thinking of an in-transaction sequence of\n> Insert-truncate. What *I* mentioned before is truncation of a relation\n> that smgrnblocks() has already been called for. The most common way\n> to make that happen is INSERTs *before* the truncating transaction\n> starts. It may be a SELECT on a hot standby. Sorry for the confusing\n> expression.\n\nAnd, to make sure, it is a bit off from the point of the discussion, as\nI noted. 
I just meant that the proposition that \"smgrnblocks() always\nreturns false for \"cached\" when it is called in\nDropRelFileNodesAllBuffers()\" doesn't always hold.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Dec 2020 09:53:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 8, 2020 at 6:23 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > Hmm, how is it possible if Insert is done before Truncate? The insert\n> > > should happen in old RelFileNode only. I have verified by adding a\n> > > break-in (while (1), so that it stops there) heap_xlog_insert and\n> > > DropRelFileNodesAllBuffers(), and both get the same (old) RelFileNode.\n> > > How have you verified what you are saying?\n> >\n> > You might be thinking of in-transaction sequence of\n> > Inert-truncate. What *I* mention before is truncation of a relation\n> > that smgrnblocks() has already been called for. The most common way\n> > to make it happen was INSERTs *before* the truncating transaction\n> > starts.\n\nWhat I have tried is Insert and Truncate in separate transactions like below:\npostgres=# insert into mytbl values(1);\nINSERT 0 1\npostgres=# truncate mytbl;\nTRUNCATE TABLE\n\nAfter the above, I manually killed the server, and then during recovery, we\nhave called heap_xlog_insert() and DropRelFileNodesAllBuffers() and at\nboth places, RelFileNode is the same and I don't see any reason for it\nto be different.\n\n> > It may be a SELECT on a hot-standby. Sorry for the confusing\n> > expression.\n>\n> And ,to make sure, it is a bit off from the point of the discussion as\n> I noted. 
I just meant that the proposition that \"smgrnblokcs() always\n> returns false for \"cached\" when it is called in\n> DropRelFileNodesAllBuffers()\" doesn't always holds.\n>\n\nRight, I feel in some cases the 'cached' flag won't be true, for example, if we\nhad done a Checkpoint after the Insert in the above case (say, when\nthe only WAL to replay during recovery is that of the Truncate), but I think\nthat should be fine. What do you think?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Dec 2020 07:13:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "I'm out of it more than usual..\n\nAt Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> > On Mon, Dec 7, 2020 at 12:32 PM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> > >\n> > > On Friday, December 4, 2020 8:27 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Hi,\n> > > I have reported before that it is not always the case that the \"cached\" flag of\n> > > srnblocks() return true. So when I checked the truncate test case used in my\n> > > patch, it does not enter the optimization path despite doing INSERT before\n> > > truncation of table.\n> > > The reason for that is because in TRUNCATE, a new RelFileNode is assigned\n> > > to the relation when creating a new file. In recovery, XLogReadBufferExtended()\n> > > always opens the RelFileNode and calls smgrnblocks() for that RelFileNode for the\n> > > first time. And for recovery processing, different RelFileNodes are used for the\n> > > INSERTs to the table and TRUNCATE to the same table.\n> > >\n> > \n> > Hmm, how is it possible if Insert is done before Truncate? The insert\n> > should happen in old RelFileNode only. 
I have verified by adding a\n> > break-in (while (1), so that it stops there) heap_xlog_insert and\n> > DropRelFileNodesAllBuffers(), and both get the same (old) RelFileNode.\n> > How have you verified what you are saying?\n\nIt's irrelevant that the insert happens on the old relfilenode. We drop\nbuffers for the old relfilenode on truncation anyway.\n\nWhat I did is:\n\na: Create a physical replication pair.\nb: On the master, create a table. (without explicitly starting a tx)\nc: On the master, insert a tuple into the table.\nd: On the master, truncate the table.\n\nOn the standby, smgrnblocks is called for the old relfilenode of the\ntable at c, then the same function is called for the same relfilenode\nat d and the function takes the cached path.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Dec 2020 10:54:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> I'm out of it more than usual..\n>\n> At Tue, 08 Dec 2020 09:45:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Mon, 7 Dec 2020 17:18:31 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Mon, Dec 7, 2020 at 12:32 PM k.jamison@fujitsu.com\n> > > <k.jamison@fujitsu.com> wrote:\n> > > >\n> > > > On Friday, December 4, 2020 8:27 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Hi,\n> > > > I have reported before that it is not always the case that the \"cached\" flag of\n> > > > srnblocks() return true. 
So when I checked the truncate test case used in my\n> > > > patch, it does not enter the optimization path despite doing INSERT before\n> > > > truncation of table.\n> > > > The reason for that is because in TRUNCATE, a new RelFileNode is assigned\n> > > > to the relation when creating a new file. In recovery, XLogReadBufferExtended()\n> > > > always opens the RelFileNode and calls smgrnblocks() for that RelFileNode for the\n> > > > first time. And for recovery processing, different RelFileNodes are used for the\n> > > > INSERTs to the table and TRUNCATE to the same table.\n> > > >\n> > >\n> > > Hmm, how is it possible if Insert is done before Truncate? The insert\n> > > should happen in old RelFileNode only. I have verified by adding a\n> > > break-in (while (1), so that it stops there) heap_xlog_insert and\n> > > DropRelFileNodesAllBuffers(), and both get the same (old) RelFileNode.\n> > > How have you verified what you are saying?\n>\n> It's irrelvant that the insert happens on the old relfilenode.\n>\n\nI think it is relevant because it will allow the 'blocks' value to be cached.\n\n> We drop\n> buffers for the old relfilenode on truncation anyway.\n>\n> What I did is:\n>\n> a: Create a physical replication pair.\n> b: On the master, create a table. (without explicitly starting a tx)\n> c: On the master, insert a tuple into the table.\n> d: On the master truncate the table.\n>\n> On the standby, smgrnblocks is called for the old relfilenode of the\n> table at c, then the same function is called for the same relfilenode\n> at d and the function takes the cached path.\n>\n\nThis is on the lines I have tried for recovery. 
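To put the rule being converged on in concrete terms, here is a minimal, self-contained toy sketch -- this is not the actual patch code, and names such as toy_nblocks and can_use_optimized_drop are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the gate under discussion.  During recovery, the optimized
 * (targeted) buffer drop may be used only when smgrnblocks() reports a
 * cached relation size for every relation being dropped; a single
 * uncached size forces the conventional full scan of shared buffers.
 */
#define TOY_INVALID_BLOCK (-1)

typedef struct ToyRel
{
	int		cached_nblocks;	/* TOY_INVALID_BLOCK = size not cached yet */
} ToyRel;

/*
 * Stand-in for smgrnblocks(): *cached reports whether the size came from
 * the cache, the only case where it is usable in recovery without
 * consulting the file system.
 */
static int
toy_nblocks(const ToyRel *rel, bool *cached)
{
	*cached = (rel->cached_nblocks != TOY_INVALID_BLOCK);
	return *cached ? rel->cached_nblocks : 0;
}

/* Stand-in for the check being proposed for DropRelFileNodesAllBuffers(). */
static bool
can_use_optimized_drop(const ToyRel *rels, int nrels)
{
	for (int i = 0; i < nrels; i++)
	{
		bool	cached;

		(void) toy_nblocks(&rels[i], &cached);
		if (!cached)
			return false;	/* one uncached rel forces the full scan */
	}
	return true;
}
```

In the real code the cached size additionally bounds which buffers have to be visited, which is the point of the optimization.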
So, it seems we are in\nagreement that we can use the 'cached' flag in\nDropRelFileNodesAllBuffers and it will take the optimized path in many\nsuch cases, right?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Dec 2020 08:08:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 8 Dec 2020 08:08:25 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > We drop\n> > buffers for the old relfilenode on truncation anyway.\n> >\n> > What I did is:\n> >\n> > a: Create a physical replication pair.\n> > b: On the master, create a table. (without explicitly starting a tx)\n> > c: On the master, insert a tuple into the table.\n> > d: On the master truncate the table.\n> >\n> > On the standby, smgrnblocks is called for the old relfilenode of the\n> > table at c, then the same function is called for the same relfilenode\n> > at d and the function takes the cached path.\n> >\n> \n> This is on the lines I have tried for recovery. So, it seems we are in\n> agreement that we can use the 'cached' flag in\n> DropRelFileNodesAllBuffers and it will take the optimized path in many\n> such cases, right?\n\n\nMmm. There seems to be a misunderstanding. What I opposed was\nreferring only to InRecovery and ignoring the value of \"cached\".\n\nThe remaining issue is that we don't get to the optimized path when a\nstandby makes the first call to smgrnblocks() when truncating a\nrelation. 
Still we can get to the optimized path as long as any\nupdate(+insert) or select is performed earlier on the relation, so I\nthink it doesn't matter so much.\n\nBut I'm not sure what others think.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Dec 2020 14:11:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 8, 2020 at 10:41 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 8 Dec 2020 08:08:25 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > We drop\n> > > buffers for the old relfilenode on truncation anyway.\n> > >\n> > > What I did is:\n> > >\n> > > a: Create a physical replication pair.\n> > > b: On the master, create a table. (without explicitly starting a tx)\n> > > c: On the master, insert a tuple into the table.\n> > > d: On the master truncate the table.\n> > >\n> > > On the standby, smgrnblocks is called for the old relfilenode of the\n> > > table at c, then the same function is called for the same relfilenode\n> > > at d and the function takes the cached path.\n> > >\n> >\n> > This is on the lines I have tried for recovery. So, it seems we are in\n> > agreement that we can use the 'cached' flag in\n> > DropRelFileNodesAllBuffers and it will take the optimized path in many\n> > such cases, right?\n>\n>\n> Mmm. There seems to be a misunderstanding.. What I opposed to is\n> referring only to InRecovery and ignoring the value of \"cached\".\n>\n\nOkay, I think it was Kirk-San who proposed to use InRecovery and\nignore the value of \"cached\" based on the theory that even if Insert\n(or other DMLs) are done before Truncate, it won't use an optimized\npath and I don't agree with the same. 
So, I did a small test to check\nthe same and found that it should use the optimized path and the same\nis true for the experiment done by you. I am not sure why Kirk-San is\nseeing something different?\n\n> The remaining issue is we don't get to the optimized path when a\n> standby makes the first call to smgrnblocks() when truncating a\n> relation. Still we can get to the optimized path as far as any\n> update(+insert) or select is performed earlier on the relation so I\n> think it doesn't matter so match.\n>\n\n+1.\n\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Dec 2020 11:05:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, December 8, 2020 2:35 PM, Amit Kapila wrote:\r\n\r\n> On Tue, Dec 8, 2020 at 10:41 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Tue, 8 Dec 2020 08:08:25 +0530, Amit Kapila\r\n> > <amit.kapila16@gmail.com> wrote in\r\n> > > On Tue, Dec 8, 2020 at 7:24 AM Kyotaro Horiguchi\r\n> > > <horikyota.ntt@gmail.com> wrote:\r\n> > > > We drop\r\n> > > > buffers for the old relfilenode on truncation anyway.\r\n> > > >\r\n> > > > What I did is:\r\n> > > >\r\n> > > > a: Create a physical replication pair.\r\n> > > > b: On the master, create a table. (without explicitly starting a\r\n> > > > tx)\r\n> > > > c: On the master, insert a tuple into the table.\r\n> > > > d: On the master truncate the table.\r\n> > > >\r\n> > > > On the standby, smgrnblocks is called for the old relfilenode of\r\n> > > > the table at c, then the same function is called for the same\r\n> > > > relfilenode at d and the function takes the cached path.\r\n> > > >\r\n> > >\r\n> > > This is on the lines I have tried for recovery. 
So, it seems we are\r\n> > > in agreement that we can use the 'cached' flag in\r\n> > > DropRelFileNodesAllBuffers and it will take the optimized path in\r\n> > > many such cases, right?\r\n> >\r\n> >\r\n> > Mmm. There seems to be a misunderstanding.. What I opposed to is\r\n> > referring only to InRecovery and ignoring the value of \"cached\".\r\n> >\r\n> \r\n> Okay, I think it was Kirk-San who proposed to use InRecovery and ignoring\r\n> the value of \"cached\" based on the theory that even if Insert (or other DMLs)\r\n> are done before Truncate, it won't use an optimized path and I don't agree\r\n> with the same. So, I did a small test to check the same and found that it\r\n> should use the optimized path and the same is true for the experiment done\r\n> by you. I am not sure why Kirk-San is seeing something different?\r\n> \r\n> > The remaining issue is we don't get to the optimized path when a\r\n> > standby makes the first call to smgrnblocks() when truncating a\r\n> > relation. Still we can get to the optimized path as far as any\r\n> > update(+insert) or select is performed earlier on the relation so I\r\n> > think it doesn't matter so match.\r\n> >\r\n> \r\n> +1.\r\n\r\nMy question/proposal before was to either use InRecovery,\r\nor completely drop the smgrnblocks' \"cached\" flag.\r\nBut that is coming from the results of my investigation below when\r\nI used \"cached\" in DropRelFileNodesAllBuffers().\r\nThe optimization path was skipped because one of the\r\nRels' \"cached\" value was \"false\".\r\n\r\nTest Case. (shared_buffer = 1GB)\r\n0. Set physical replication to both master and standby.\r\n1. Create 1 table.\r\n2. Insert Data (1MB) to TABLE.\r\n\t16385 is the relnode for insert (both Master and Standby).\r\n\r\n3. Pause WAL on Standby.\r\n4. TRUNCATE table on Primary.\r\n nrels = 3. relNodes 16389, 16388, 16385.\r\n\r\n5. Stop Primary.\r\n\r\n6. Promote standby and resume WAL recovery. 
nrels = 3 \r\n 1st rel's check for optimization: \"cached\" is TRUE. relNode = 16389.\r\n 2nd rel's check for optimization: \"cached\" was returned FALSE by\r\n smgrnblocks(). relNode = 16388.\r\n Since one of the rels' \"cached\" is FALSE, the optimization check for the\r\n 3rd relation and the whole optimization itself is skipped.\r\n Go to full-scan path in DropRelFileNodesAllBuffers().\r\n Then smgrclose for relNodes 16389, 16388, 16385.\r\n\r\nBecause one of the rels' cached values was false, it forced the\r\nfull-scan path for TRUNCATE.\r\nIs there a possible workaround for this? \r\n\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Tue, 8 Dec 2020 06:17:52 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> Because one of the rel's cached value was false, it forced the\r\n> full-scan path for TRUNCATE.\r\n> Is there a possible workaround for this?\r\n\r\nHmm, the other two relfilenodes are for the TOAST table and index of the target table. I think the INSERT didn't access those TOAST relfilenodes because the inserted data was stored in the main storage. But TRUNCATE always truncates all the three relfilenodes. 
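To put this failure mode in miniature, a toy model (not actual PostgreSQL code; all names are invented): only the relfilenodes that redo actually touches get a cached size, so a TRUNCATE that drops all three relfilenodes can find an uncached one and lose the optimized path:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model: an INSERT whose value fits in the main fork never touches
 * the TOAST relfilenodes, so when the TRUNCATE record later drops all
 * three relfilenodes (heap, TOAST table, TOAST index), the TOAST ones
 * have no cached size and the optimized drop is skipped.
 */
enum { MAIN_HEAP, TOAST_HEAP, TOAST_INDEX, NRELNODES };

static bool size_cached[NRELNODES];

/* Redo of a buffer access caches the size of the touched relfilenode. */
static void
toy_redo_touch(int relnode)
{
	size_cached[relnode] = true;
}

/* TRUNCATE redo can use the optimized drop only if all three are cached. */
static bool
toy_truncate_can_optimize(void)
{
	for (int i = 0; i < NRELNODES; i++)
	{
		if (!size_cached[i])
			return false;
	}
	return true;
}
```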
So, the standby had not opened the relfilenode for the TOAST stuff or cached its size when replaying the TRUNCATE.\r\n\r\nI'm afraid this is more common than we can ignore and accept the slow traditional path, but I don't think of a good idea to use the cached flag.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Tue, 8 Dec 2020 06:43:31 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > Because one of the rel's cached value was false, it forced the\n> > full-scan path for TRUNCATE.\n> > Is there a possible workaround for this?\n>\n> Hmm, the other two relfilenodes are for the TOAST table and index of the target table. I think the INSERT didn't access those TOAST relfilenodes because the inserted data was stored in the main storage. But TRUNCATE always truncates all the three relfilenodes. 
So, the standby had not opened the relfilenode for the TOAST stuff or cached its size when replaying the TRUNCATE.\n>\n> I'm afraid this is more common than we can ignore and accept the slow traditional path, but I don't think of a good idea to use the cached flag.\n>\n\nI also can't think of a way to use an optimized path for such cases,\nbut I don't agree with your comment that it is common enough that we\nshould leave this optimization entirely out of the truncate path.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Dec 2020 16:28:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > > Because one of the rel's cached value was false, it forced the\n> > > full-scan path for TRUNCATE.\n> > > Is there a possible workaround for this?\n> >\n> > Hmm, the other two relfilenodes are for the TOAST table and index of the target table. I think the INSERT didn't access those TOAST relfilenodes because the inserted data was stored in the main storage. But TRUNCATE always truncates all the three relfilenodes. So, the standby had not opened the relfilenode for the TOAST stuff or cached its size when replaying the TRUNCATE.\n> >\n> > I'm afraid this is more common than we can ignore and accept the slow traditional path, but I don't think of a good idea to use the cached flag.\n> >\n> \n> I also can't think of a way to use an optimized path for such cases\n> but I don't agree with your comment on if it is common enough that we\n> leave this optimization entirely for the truncate path.\n\nMmm. 
At least btree doesn't need to call smgrnblocks except at\nexpansion, so we cannot get to the optimized path in major cases of\ntruncation involving btree (and/or maybe other indexes). TOAST\nrelations are not accessed until we insert/update/retrieve the values\nin them.\n\nAn ugly way to cope with it would be to let other smgr functions\nmanage the cached value, for example, by calling smgrnblocks while\nInRecovery. Or letting smgr remember the maximum block number ever\naccessed. But we cannot fully rely on that since smgr can be closed\nin the midst of a session and smgr doesn't offer such persistence. In the\nfirst place smgr doesn't seem to be the place to store such persistent\ninformation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Dec 2020 10:02:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > I also can't think of a way to use an optimized path for such cases\n> > but I don't agree with your comment on if it is common enough that we\n> > leave this optimization entirely for the truncate path.\n> \n> An ugly way to cope with it would be to let other smgr functions\n> manage the cached value, for example, by calling smgrnblocks while\n> InRecovery. Or letting smgr remember the maximum block number ever\n> accessed. But we cannot fully rely on that since smgr can be closed\n> midst of a session and smgr doesn't offer such persistence. 
In the\n> first place smgr doesn't seem to be the place to store such persistent\n> information.\n\nYeah, considering the future evolution of this patch to operations during normal running, I don't think that would be a good fit, either.\n\nThen, as we're currently targeting just recovery, the options we can take are below. Which would you vote for? My choice would be (3) > (2) > (1).\n\n\n(1)\nUse the cached flag in both VACUUM (0003) and TRUNCATE (0004).\nThis brings the most safety and code consistency.\nBut this would not benefit from optimization for TRUNCATE in unexpectedly many cases -- when TOAST storage exists but it's not written, or FSM/VM is not updated after checkpoint.\n\n\n(2)\nUse the cached flag in VACUUM (0003), but use InRecovery instead of the cached flag in TRUNCATE (0004).\nThis benefits from the optimization in all cases.\nBut this lacks code consistency.\nYou may be afraid of safety if the startup process smgrclose()s the relation after the shared buffer flushing hits disk full. However, the startup process doesn't smgrclose(), so it should be safe. Just in case the startup process smgrclose()s, the worst consequence is a PANIC shutdown after repeated failure of checkpoints due to lingering orphaned dirty shared buffers. 
Accept it as Thomas-san's devil's suggestion.\n\n\n(3)\nDo not use the cached flag in either VACUUM (0003) or TRUNCATE (0004).\nThis benefits from the optimization in all cases.\nThe code is consistent and smaller.\nAs for the safety, this is the same as (2), but it applies to VACUUM as well.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Wed, 9 Dec 2020 01:57:42 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, December 9, 2020 10:58 AM, Tsunakawa, Takayuki wrote: \n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila\n> > <amit.kapila16@gmail.com> wrote in\n> > I also can't think of a way to use an optimized path for such cases\n> > > but I don't agree with your comment on if it is common enough that\n> > > we leave this optimization entirely for the truncate path.\n> >\n> > An ugly way to cope with it would be to let other smgr functions\n> > manage the cached value, for example, by calling smgrnblocks while\n> > InRecovery. Or letting smgr remember the maximum block number ever\n> > accessed. But we cannot fully rely on that since smgr can be closed\n> > midst of a session and smgr doesn't offer such persistence. In the\n> > first place smgr doesn't seem to be the place to store such persistent\n> > information.\n> \n> Yeah, considering the future evolution of this patch to operations during\n> normal running, I don't think that would be a good fit, either.\n> \n> Then, the as we're currently targeting just recovery, the options we can take\n> are below. Which would vote for? 
My choice would be (3) > (2) > (1).\n> \n> \n> (1)\n> Use the cached flag in both VACUUM (0003) and TRUNCATE (0004).\n> This brings the most safety and code consistency.\n> But this would not benefit from optimization for TRUNCATE in unexpectedly\n> many cases -- when TOAST storage exists but it's not written, or FSM/VM is\n> not updated after checkpoint.\n> \n> \n> (2)\n> Use the cached flag in VACUUM (0003), but use InRecovery instead of the\n> cached flag in TRUNCATE (0004).\n> This benefits from the optimization in all cases.\n> But this lacks code consistency.\n> You may be afraid of safety if the startup process smgrclose()s the relation\n> after the shared buffer flushing hits disk full. However, startup process\n> doesn't smgrclose(), so it should be safe. Just in case the startup process\n> smgrclose()s, the worst consequence is PANIC shutdown after repeated\n> failure of checkpoints due to lingering orphaned dirty shared buffers. Accept\n> it as Thomas-san's devil's suggestion.\n> \n> \n> (3)\n> Do not use the cached flag in either VACUUM (0003) or TRUNCATE (0004).\n> This benefits from the optimization in all cases.\n> The code is consistent and smaller.\n> As for the safety, this is the same as (2), but it applies to VACUUM as well.\n\nIf we want code consistency, then we'd fall in either 1 or 3.\nAnd if we want to take the benefits of optimization for both DropRelFileNodeBuffers\nand DropRelFileNodesAllBuffers, then I'd choose 3.\nHowever, if the reviewers and committer want to make use of the \"cached\" flag,\nthen we can live with \"cached\" value in place there even if it's not common to\nget the optimization for TRUNCATE path. 
So only VACUUM would take the most\nbenefit.\nMy vote is also (3), then (2), and (1).\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Wed, 9 Dec 2020 02:08:27 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > >\n> > > From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > > > Because one of the rel's cached value was false, it forced the\n> > > > full-scan path for TRUNCATE.\n> > > > Is there a possible workaround for this?\n> > >\n> > > Hmm, the other two relfilenodes are for the TOAST table and index of the target table. I think the INSERT didn't access those TOAST relfilenodes because the inserted data was stored in the main storage. But TRUNCATE always truncates all the three relfilenodes. So, the standby had not opened the relfilenode for the TOAST stuff or cached its size when replaying the TRUNCATE.\n> > >\n> > > I'm afraid this is more common than we can ignore and accept the slow traditional path, but I don't think of a good idea to use the cached flag.\n> > >\n> >\n> > I also can't think of a way to use an optimized path for such cases\n> > but I don't agree with your comment on if it is common enough that we\n> > leave this optimization entirely for the truncate path.\n>\n> Mmm. 
At least btree doesn't need to call smgrnblocks except at\n> expansion, so we cannot get to the optimized path in major cases of\n> truncation involving btree (and/or maybe other indexes).\n>\n\nAFAICS, btree insert should call smgrnblocks via\nbtree_xlog_insert->XLogReadBufferForRedo->XLogReadBufferForRedoExtended->XLogReadBufferExtended->smgrnblocks.\nSimilarly, delete should also call smgrnblocks. Can you be a bit more\nspecific about the btree case you have in mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Dec 2020 16:27:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Wed, 9 Dec 2020 16:27:30 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 8 Dec 2020 16:28:41 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Tue, Dec 8, 2020 at 12:13 PM tsunakawa.takay@fujitsu.com\n> > > <tsunakawa.takay@fujitsu.com> wrote:\n> > > >\n> > > > From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\n> > > > > Because one of the rel's cached value was false, it forced the\n> > > > > full-scan path for TRUNCATE.\n> > > > > Is there a possible workaround for this?\n> > > >\n> > > > Hmm, the other two relfilenodes are for the TOAST table and index of the target table. I think the INSERT didn't access those TOAST relfilenodes because the inserted data was stored in the main storage. But TRUNCATE always truncates all the three relfilenodes. 
So, the standby had not opened the relfilenode for the TOAST stuff or cached its size when replaying the TRUNCATE.\n> > > >\n> > > > I'm afraid this is more common than we can ignore and accept the slow traditional path, but I don't think of a good idea to use the cached flag.\n> > > >\n> > >\n> > > I also can't think of a way to use an optimized path for such cases\n> > > but I don't agree with your comment on if it is common enough that we\n> > > leave this optimization entirely for the truncate path.\n> >\n> > Mmm. At least btree doesn't need to call smgrnblocks except at\n> > expansion, so we cannot get to the optimized path in major cases of\n> > truncation involving btree (and/or maybe other indexes).\n> >\n> \n> AFAICS, btree insert should call smgrnblocks via\n> btree_xlog_insert->XLogReadBufferForRedo->XLogReadBufferForRedoExtended->XLogReadBufferExtended->smgrnblocks.\n> Similarly delete should also call smgrnblocks. Can you be bit more\n> specific related to the btree case you have in mind?\n\nOh, sorry. I wrongly looked at the non-recovery path. smgrnblocks is\ncalled during buffer loading while in recovery. So, smgrnblocks is called\nfor indexes if any update happens on the heap relation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 10 Dec 2020 10:41:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Oh, sorry. I wrongly looked to non-recovery path. smgrnblocks is\n> called during buffer loading while recovery. So, smgrnblock is called\n> for indexes if any update happens on the heap relation.\n\nI misunderstood that you said there's no problem with the TOAST index because TRUNCATE creates the meta page, resulting in the caching of the page and size of the relation. 
Anyway, I'm relieved the concern disappeared.\n\nThen, I'd like to hear your vote in my previous mail...\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 10 Dec 2020 02:46:08 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Dec 10, 2020 at 7:11 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 9 Dec 2020 16:27:30 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi\n> > > Mmm. At least btree doesn't need to call smgrnblocks except at\n> > > expansion, so we cannot get to the optimized path in major cases of\n> > > truncation involving btree (and/or maybe other indexes).\n> > >\n> >\n> > AFAICS, btree insert should call smgrnblocks via\n> > btree_xlog_insert->XLogReadBufferForRedo->XLogReadBufferForRedoExtended->XLogReadBufferExtended->smgrnblocks.\n> > Similarly delete should also call smgrnblocks. Can you be bit more\n> > specific related to the btree case you have in mind?\n>\n> Oh, sorry. I wrongly looked to non-recovery path. smgrnblocks is\n> called during buffer loading while recovery. So, smgrnblock is called\n> for indexes if any update happens on the heap relation.\n>\n\nOkay, so this means that we can get the benefit of optimization in\nmany cases in the Truncate code path as well even if we use 'cached'\nflag? 
If so, then I would prefer to keep the code consistent for both\nvacuum and truncate recovery code path.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 10 Dec 2020 08:56:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, December 10, 2020 12:27 PM, Amit Kapila wrote: \r\n> On Thu, Dec 10, 2020 at 7:11 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Wed, 9 Dec 2020 16:27:30 +0530, Amit Kapila\r\n> > <amit.kapila16@gmail.com> wrote in\r\n> > > On Wed, Dec 9, 2020 at 6:32 AM Kyotaro Horiguchi\r\n> > > > Mmm. At least btree doesn't need to call smgrnblocks except at\r\n> > > > expansion, so we cannot get to the optimized path in major cases\r\n> > > > of truncation involving btree (and/or maybe other indexes).\r\n> > > >\r\n> > >\r\n> > > AFAICS, btree insert should call smgrnblocks via\r\n> > >\r\n> btree_xlog_insert->XLogReadBufferForRedo->XLogReadBufferForRedoExte\r\n> nded->XLogReadBufferExtended->smgrnblocks.\r\n> > > Similarly delete should also call smgrnblocks. Can you be bit more\r\n> > > specific related to the btree case you have in mind?\r\n> >\r\n> > Oh, sorry. I wrongly looked to non-recovery path. smgrnblocks is\r\n> > called during buffer loading while recovery. So, smgrnblock is called\r\n> > for indexes if any update happens on the heap relation.\r\n> >\r\n> \r\n> Okay, so this means that we can get the benefit of optimization in many cases\r\n> in the Truncate code path as well even if we use 'cached'\r\n> flag? 
If so, then I would prefer to keep the code consistent for both vacuum\r\n> and truncate recovery code path.\r\n\r\nYes, I have tested that optimization works for index relations.\r\n\r\nI have attached the V34, following the conditions that we use \"cached\" flag\r\nfor both DropRelFileNodesBuffers() and DropRelFileNodesBuffers() for\r\nconsistency.\r\nI added comment in 0004 the limitation of optimization when there are TOAST\r\nrelations that use NON-PLAIN strategy. i.e. The optimization works if the data\r\ntypes used are integers, OID, bytea, etc. But for TOAST-able data types like text,\r\nthe optimization will be skipped and force a full scan during recovery.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Thu, 10 Dec 2020 08:09:55 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> I added comment in 0004 the limitation of optimization when there are TOAST\r\n> relations that use NON-PLAIN strategy. i.e. The optimization works if the data\r\n> types used are integers, OID, bytea, etc. 
But for TOAST-able data types like text,\r\n> the optimization will be skipped and force a full scan during recovery.\r\n\r\nbytea is a TOAST-able type.\r\n\r\n\r\n+\t/*\r\n+\t * Enter the optimization if the total number of blocks to be\r\n+\t * invalidated for all relations is below the full scan threshold.\r\n+\t */\r\n+\tif (cached && nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\r\n\r\nChecking cached here doesn't seem to be necessary, because if cached is false, the control goes to the full scan path as below:\r\n\r\n+\t\t\tif (!cached)\r\n+\t\t\t\tgoto buffer_full_scan;\r\n+\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 10 Dec 2020 08:28:56 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Dec 10, 2020 at 1:40 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Yes, I have tested that optimization works for index relations.\n>\n> I have attached the V34, following the conditions that we use \"cached\" flag\n> for both DropRelFileNodesBuffers() and DropRelFileNodesBuffers() for\n> consistency.\n> I added comment in 0004 the limitation of optimization when there are TOAST\n> relations that use NON-PLAIN strategy. i.e. The optimization works if the data\n> types used are integers, OID, bytea, etc. But for TOAST-able data types like text,\n> the optimization will be skipped and force a full scan during recovery.\n>\n\nAFAIU, it won't take optimization path only when we have TOAST\nrelation but there is no insertion corresponding to it. 
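To illustrate the decision being discussed, here is a minimal standalone C sketch (the function name and threshold constant are hypothetical; the patch's real counterpart of the constant is BUF_DROP_FULL_SCAN_THRESHOLD): the optimized path may be taken only when every fork's size is known exactly from the cache and the total number of blocks to invalidate stays below the threshold; otherwise control falls through to the full buffer-pool scan.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical threshold; stands in for BUF_DROP_FULL_SCAN_THRESHOLD. */
#define FULL_SCAN_THRESHOLD_SKETCH 32

/*
 * Return true when the targeted BufMapping lookups may be used instead of
 * scanning the whole buffer pool: every fork's size must be cached (known
 * exactly), and the total number of blocks to invalidate must be small.
 */
static bool
can_use_optimized_drop(const unsigned *fork_nblocks,
                       const bool *fork_cached,
                       int nforks)
{
    unsigned total = 0;
    int i;

    for (i = 0; i < nforks; i++)
    {
        if (!fork_cached[i])
            return false;   /* e.g. a fork never touched during recovery */
        total += fork_nblocks[i];
    }
    return total < FULL_SCAN_THRESHOLD_SKETCH;
}
```

An uncached fork forces the full scan because an unknown size could leave buffers behind for a file that no longer exists, which the thread notes can end in a PANIC when the background writer or checkpointer tries to flush them.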
If so, then we\ndon't need to mention it specifically because there are other similar\ncases where the optimization won't work like when during recovery we\nhave to just perform TRUNCATE.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 10 Dec 2020 16:42:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thursday, December 10, 2020 8:12 PM, Amit Kapila wrote:\r\n> On Thu, Dec 10, 2020 at 1:40 PM k.jamison@fujitsu.com\r\n> <k.jamison@fujitsu.com> wrote:\r\n> >\r\n> > Yes, I have tested that optimization works for index relations.\r\n> >\r\n> > I have attached the V34, following the conditions that we use \"cached\"\r\n> > flag for both DropRelFileNodesBuffers() and DropRelFileNodesBuffers()\r\n> > for consistency.\r\n> > I added comment in 0004 the limitation of optimization when there are\r\n> > TOAST relations that use NON-PLAIN strategy. i.e. The optimization\r\n> > works if the data types used are integers, OID, bytea, etc. But for\r\n> > TOAST-able data types like text, the optimization will be skipped and force a\r\n> full scan during recovery.\r\n> >\r\n> \r\n> AFAIU, it won't take optimization path only when we have TOAST relation but\r\n> there is no insertion corresponding to it. If so, then we don't need to mention\r\n> it specifically because there are other similar cases where the optimization\r\n> won't work like when during recovery we have to just perform TRUNCATE.\r\n> \r\n\r\nRight, I forgot to add that there should be an update like insert to the TOAST\r\nrelation for truncate optimization to work. However, that is only limited to\r\nTOAST relations with PLAIN strategy. I have tested with text data type, with\r\nInserts before truncate, and it did not enter the optimization path. OTOH,\r\nIt worked for data type like integer. 
So should I still not include that information?\r\n\r\nAlso, I will remove the unnecessary \"cached\" from the line that Tsunakawa-san\r\nmentioned. I will wait for a few more comments before reuploading, hopefully,\r\nthe final version & including the test for truncate,\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Fri, 11 Dec 2020 00:24:45 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> On Thursday, December 10, 2020 8:12 PM, Amit Kapila wrote:\r\n> > AFAIU, it won't take optimization path only when we have TOAST relation but\r\n> > there is no insertion corresponding to it. If so, then we don't need to mention\r\n> > it specifically because there are other similar cases where the optimization\r\n> > won't work like when during recovery we have to just perform TRUNCATE.\r\n> >\r\n> \r\n> Right, I forgot to add that there should be an update like insert to the TOAST\r\n> relation for truncate optimization to work. However, that is only limited to\r\n> TOAST relations with PLAIN strategy. I have tested with text data type, with\r\n> Inserts before truncate, and it did not enter the optimization path. OTOH,\r\n> It worked for data type like integer. So should I still not include that information?\r\n\r\nWhat's valuable as a code comment to describe the remaining issue is that the reader can find clues to if this is related to the problem he/she has hit, and/or how to solve the issue. I don't think the current comment is so bad in that regard, but it seems better to add:\r\n\r\n* The condition of the issue: the table's ancillary storage (index, TOAST table, FSM, VM, etc.) 
was not updated during recovery.\r\n(As an aside, \"during recovery\" here does not mean \"after the last checkpoint\" but \"from the start of recovery\", because the standby experiences many checkpoints (the correct term is restartpoints in case of standby).)\r\n\r\n* The cause as a hint to solve the issue: The startup process does not find page modification WAL records. As a result, it won't call XLogReadBufferExtended() and smgrnblocks() called therein, so the relation/fork size is not cached.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 11 Dec 2020 01:21:19 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\r\n> What's valuable as a code comment to describe the remaining issue is that the\r\n\r\nYou can attach XXX or FIXME in front of the issue description for easier search. 
(XXX appears to be used much more often in Postgres.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 11 Dec 2020 01:26:06 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Dec 11, 2020 at 5:54 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Thursday, December 10, 2020 8:12 PM, Amit Kapila wrote:\n> > On Thu, Dec 10, 2020 at 1:40 PM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> > >\n> > > Yes, I have tested that optimization works for index relations.\n> > >\n> > > I have attached the V34, following the conditions that we use \"cached\"\n> > > flag for both DropRelFileNodesBuffers() and DropRelFileNodesBuffers()\n> > > for consistency.\n> > > I added comment in 0004 the limitation of optimization when there are\n> > > TOAST relations that use NON-PLAIN strategy. i.e. The optimization\n> > > works if the data types used are integers, OID, bytea, etc. But for\n> > > TOAST-able data types like text, the optimization will be skipped and force a\n> > full scan during recovery.\n> > >\n> >\n> > AFAIU, it won't take optimization path only when we have TOAST relation but\n> > there is no insertion corresponding to it. If so, then we don't need to mention\n> > it specifically because there are other similar cases where the optimization\n> > won't work like when during recovery we have to just perform TRUNCATE.\n> >\n>\n> Right, I forgot to add that there should be an update like insert to the TOAST\n> relation for truncate optimization to work. However, that is only limited to\n> TOAST relations with PLAIN strategy. 
I have tested with text data type, with\n> Inserts before truncate, and it did not enter the optimization path.\n>\n\nI think you are seeing because text datatype allows creating toast\nstorage and your data is small enough to be toasted.\n\n> OTOH,\n> It worked for data type like integer.\n>\n\nIt is not related to any datatype, it can happen whenever we don't\nhave any operation on any of the forks after recovery.\n\n> So should I still not include that information?\n>\n\nI think we can extend your existing comment like: \"Otherwise if the\nsize of a relation fork is not cached, we proceed to a full scan of\nthe whole buffer pool. This can happen if there is no update to a\nparticular fork during recovery.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Dec 2020 06:56:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Friday, December 11, 2020 10:27 AM, Amit Kapila wrote:\r\n> On Fri, Dec 11, 2020 at 5:54 AM k.jamison@fujitsu.com\r\n> <k.jamison@fujitsu.com> wrote:\r\n> > So should I still not include that information?\r\n> >\r\n> \r\n> I think we can extend your existing comment like: \"Otherwise if the size of a\r\n> relation fork is not cached, we proceed to a full scan of the whole buffer pool.\r\n> This can happen if there is no update to a particular fork during recovery.\"\r\n\r\nAttached are the final updated patches.\r\nI followed this advice and updated the source code comment a little bit.\r\nThere are no changes from the previous except that and the unnecessary\r\n\"cached\" condition which Tsunakawa-san mentioned.\r\n\r\nBelow is also the updated recovery performance test results for TRUNCATE.\r\n(1000 tables, 1MB per table, results measured in seconds)\r\n| s_b | Master | Patched | % Reg | \r\n|-------|--------|---------|---------| \r\n| 128MB | 0.406 | 0.406 | 0% | \r\n| 512MB | 0.506 | 0.406 
| -25% | \r\n| 1GB | 0.806 | 0.406 | -99% | \r\n| 20GB | 15.224 | 0.406 | -3650% | \r\n| 100GB | 81.506 | 0.406 | -19975% |\r\n\r\nBecause of the size of relation, it is expected to enter full-scan for\r\nthe 128MB shared_buffers setting. And there was no regression.\r\nSimilar to previous test results, the recovery time was constant\r\nfor all shared_buffers setting with the patches applied.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Mon, 14 Dec 2020 03:00:12 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> Attached are the final updated patches.\r\n\r\nLooks good, and the patch remains ready for committer. (Personally, I wanted the code comment to touch upon the TOAST and FSM/VM for the reader, because we couldn't think of those possibilities and took some time to find why the optimization path wasn't taken.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 14 Dec 2020 04:22:50 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hello Kirk,\r\n\r\nI noticed you have pushed a new version for your patch which has some changes on TRUNCATE on TOAST relation. \r\nAlthough you've done performance test for your changed part. I'd like to do a double check for your patch(hope you don't mind).\r\nBelow is the updated recovery performance test results for your new patch. All seems good.\r\n\r\n*TOAST relation with PLAIN strategy like integer : \r\n1. 
Recovery after VACUUM test results(average of 15 times)\r\nshared_buffers\t\tmaster(sec) patched(sec) %reg=((patched-master)/patched)\r\n--------------------------------------------------------------------------------------\r\n128M\t\t\t2.111 \t\t\t1.604 \t\t\t-24%\r\n10G\t\t\t57.135 \t\t\t1.878 \t\t\t-97%\r\n20G\t\t\t167.122 \t\t1.932 \t\t\t-99%\r\n\r\n2. Recovery after TRUNCATE test results(average of 15 times)\r\nshared_buffers \tmaster(sec) patched(sec) %reg=((patched-master)/patched)\r\n--------------------------------------------------------------------------------------\r\n128M\t\t\t2.326 \t\t\t1.718 \t\t\t-26%\r\n10G\t\t\t82.397 \t\t\t1.738 \t\t\t-98%\r\n20G\t\t\t169.275 \t\t1.718 \t\t\t-99%\r\n\r\n*TOAST relation with NON-PLAIN strategy like text/varchar: \r\n1. Recovery after VACUUM test results(average of 15 times)\r\nshared_buffers\t\tmaster(sec) patched(sec) %reg=((patched-master)/patched)\r\n--------------------------------------------------------------------------------------\r\n128M\t\t\t3.174 \t\t\t2.493 \t\t\t-21%\r\n10G\t\t\t72.716 \t\t\t2.246 \t\t\t-97%\r\n20G\t\t\t163.660 \t\t2.474 \t\t\t-98%\r\n\r\n2. Recovery after TRUNCATE test results(average of 15 times): Although it looks like there are some improvements after patch applied. I think that's because of the average calculation. 
TRUNCATE results should be similar between master and patched because they all do full scan.\r\nshared_buffers \tmaster(sec) patched(sec) %reg=((patched-master)/patched)\r\n--------------------------------------------------------------------------------------\r\n128M\t\t\t4.978 \t\t\t4.958 \t\t\t0%\r\n10G\t\t\t97.048 \t\t\t88.751 \t\t\t-9%\r\n20G\t\t\t183.230 \t\t173.226 \t\t-5% \r\n\r\n[Machine spec]\r\nCPU : 40 processors (Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz)\r\nMemory: 128G\r\nOS: CentOS 8\r\n\r\n[Failover test data]\r\nTotal table Size: 600M\r\nTable: 10000 tables (1000 rows per table)\r\n\r\n[Configure in postgresql.conf]\r\nautovacuum = off\r\nwal_level = replica\r\nmax_wal_senders = 5\r\nmax_locks_per_transaction = 10000\r\n\r\nIf you have any questions on my test results, please let me know.\r\n\r\nRegards\r\nTang\r\n\n\n", "msg_date": "Fri, 18 Dec 2020 07:45:33 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Nov 19, 2020 at 12:37 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Andres Freund <andres@anarazel.de>\n>\n> > Smaller comment:\n> >\n> > +static void\n> > +FindAndDropRelFileNodeBuffers(RelFileNode rnode, ForkNumber *forkNum,\n> > int nforks,\n> > + BlockNumber\n> > *nForkBlocks, BlockNumber *firstDelBlock)\n> > ...\n> > + /* Check that it is in the buffer pool. 
If not, do nothing.\n> > */\n> > + LWLockAcquire(bufPartitionLock, LW_SHARED);\n> > + buf_id = BufTableLookup(&bufTag, bufHash);\n> > ...\n> > + bufHdr = GetBufferDescriptor(buf_id);\n> > +\n> > + buf_state = LockBufHdr(bufHdr);\n> > +\n> > + if (RelFileNodeEquals(bufHdr->tag.rnode, rnode) &&\n> > + bufHdr->tag.forkNum == forkNum[i] &&\n> > + bufHdr->tag.blockNum >= firstDelBlock[i])\n> > + InvalidateBuffer(bufHdr); /* releases\n> > spinlock */\n> > + else\n> > + UnlockBufHdr(bufHdr, buf_state);\n> >\n> > I'm a bit confused about the check here. We hold a buffer partition lock, and\n> > have done a lookup in the mapping table. Why are we then rechecking the\n> > relfilenode/fork/blocknum? And why are we doing so holding the buffer header\n> > lock, which is essentially a spinlock, so should only ever be held for very short\n> > portions?\n> >\n> > This looks like it's copying logic from DropRelFileNodeBuffers() etc, but there\n> > the situation is different: We haven't done a buffer mapping lookup, and we\n> > don't hold a partition lock!\n>\n> That's because the buffer partition lock is released immediately after the hash table has been looked up. As an aside, InvalidateBuffer() requires the caller to hold the buffer header spinlock and doesn't hold the buffer partition lock.\n>\n\nThis answers the second part of the question but what about the first\npart (We hold a buffer partition lock, and have done a lookup in the\nmapping table. Why are we then rechecking the\nrelfilenode/fork/blocknum?)\n\nI think we don't need such a check, rather we can have an Assert\ncorresponding to that if-condition in the patch. 
I understand it is\nsafe to compare relfilenode/fork/blocknum but it might confuse readers\nof the code.\n\nI have started doing minor edits to the patch especially planning to\nwrite a theory why is this optimization safe and here is what I can\ncome up with: \"To remove all the pages of the specified relation forks\nfrom the buffer pool, we need to scan the entire buffer pool but we\ncan optimize it by finding the buffers from BufMapping table provided\nwe know the exact size of each fork of the relation. The exact size is\nrequired to ensure that we don't leave any buffer for the relation\nbeing dropped as otherwise the background writer or checkpointer can\nlead to a PANIC error while flushing buffers corresponding to files\nthat don't exist.\n\nTo know the exact size, we rely on the size cached for each fork by us\nduring recovery which limits the optimization to recovery and on\nstandbys but we can easily extend it once we have shared cache for\nrelation size.\n\nIn recovery, we cache the value returned by the first lseek(SEEK_END)\nand the future writes keeps the cached value up-to-date. See\nsmgrextend. It is possible that the value of the first lseek is\nsmaller than the actual number of existing blocks in the file due to\nbuggy Linux kernels that might not have accounted for the recent\nwrite. But that should be fine because there must not be any buffers\nafter that file size.\n\nXXX We would make the extra lseek call for the unoptimized paths but\nthat is okay because we do it just for the first fork and we anyway\nhave to scan the entire buffer pool the cost of which is so high that\nthe extra lseek call won't make any visible difference. 
However, we\ncan use InRecovery flag to avoid the additional cost but that doesn't\nseem worth it.\"\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Dec 2020 18:55:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> This answers the second part of the question but what about the first\r\n> part (We hold a buffer partition lock, and have done a lookup in th\r\n> mapping table. Why are we then rechecking the\r\n> relfilenode/fork/blocknum?)\r\n> \r\n> I think we don't need such a check, rather we can have an Assert\r\n> corresponding to that if-condition in the patch. I understand it is\r\n> safe to compare relfilenode/fork/blocknum but it might confuse readers\r\n> of the code.\r\n\r\nHmm, you're right. I thought someone else could steal the found buffer and use it for another block because the buffer mapping lwlock is released without pinning the buffer or acquiring the buffer header spinlock. 
However, in this case (replay of TRUNCATE during recovery), nobody steals the buffer: bgwriter or checkpointer doesn't use a buffer for a new block, and the client backend waits for AccessExclusive lock.\r\n\r\n\r\n> I have started doing minor edits to the patch especially planning to\r\n> write a theory why is this optimization safe and here is what I can\r\n> come up with:\r\n\r\nThank you, that's fluent and easier to understand.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Tue, 22 Dec 2020 01:42:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 22 Dec 2020 01:42:55 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > This answers the second part of the question but what about the first\n> > part (We hold a buffer partition lock, and have done a lookup in th\n> > mapping table. Why are we then rechecking the\n> > relfilenode/fork/blocknum?)\n> > \n> > I think we don't need such a check, rather we can have an Assert\n> > corresponding to that if-condition in the patch. I understand it is\n> > safe to compare relfilenode/fork/blocknum but it might confuse readers\n> > of the code.\n> \n> Hmm, you're right. I thought someone else could steal the found\n> buffer and use it for another block because the buffer mapping\n> lwlock is released without pinning the buffer or acquiring the\n> buffer header spinlock. However, in this case (replay of TRUNCATE\n> during recovery), nobody steals the buffer: bgwriter or checkpointer\n> doesn't use a buffer for a new block, and the client backend waits\n> for AccessExclusive lock.\n\nMmm. If that is true, doesn't the unoptimized path also need the\nrechecking?\n\nThe AEL doesn't work for a buffer block. 
No new block can be allocated\nfor the relation but still BufferAlloc can steal the block for other\nrelations since the AEL doesn't work for each buffer block. Am I\nstill missing something?\n\n\n> > I have started doing minor edits to the patch especially planning to\n> > write a theory why is this optimization safe and here is what I can\n> > come up with:\n> \n> Thank you, that's fluent and easier to understand.\n\n+1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 22 Dec 2020 11:37:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 22, 2020 at 7:13 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > This answers the second part of the question but what about the first\n> > part (We hold a buffer partition lock, and have done a lookup in the\n> > mapping table. Why are we then rechecking the\n> > relfilenode/fork/blocknum?)\n> >\n> > I think we don't need such a check, rather we can have an Assert\n> > corresponding to that if-condition in the patch. I understand it is\n> > safe to compare relfilenode/fork/blocknum but it might confuse readers\n> > of the code.\n>\n> Hmm, you're right. I thought someone else could steal the found buffer and use it for another block because the buffer mapping lwlock is released without pinning the buffer or acquiring the buffer header spinlock.\n>\n\nOkay, I see your point.\n\n> However, in this case (replay of TRUNCATE during recovery), nobody steals the buffer: bgwriter or checkpointer doesn't use a buffer for a new block, and the client backend waits for AccessExclusive lock.\n>\n>\n\nWhy would all client backends wait for AccessExclusive lock on this\nrelation?
Say, a client needs a buffer for some other relation and\nthat might evict this buffer after we release the lock on the\npartition. In StrategyGetBuffer, it is important to either have a pin\non the buffer or the buffer header itself must be locked to avoid\ngetting picked as victim buffer. Am I missing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Dec 2020 08:08:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 22 Dec 2020 08:08:10 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Dec 22, 2020 at 7:13 AM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n> >\n> > From: Amit Kapila <amit.kapila16@gmail.com>\n> > > This answers the second part of the question but what about the first\n> > > part (We hold a buffer partition lock, and have done a lookup in th\n> > > mapping table. Why are we then rechecking the\n> > > relfilenode/fork/blocknum?)\n> > >\n> > > I think we don't need such a check, rather we can have an Assert\n> > > corresponding to that if-condition in the patch. I understand it is\n> > > safe to compare relfilenode/fork/blocknum but it might confuse readers\n> > > of the code.\n> >\n> > Hmm, you're right. I thought someone else could steal the found buffer and use it for another block because the buffer mapping lwlock is released without pinning the buffer or acquiring the buffer header spinlock.\n> >\n> \n> Okay, I see your point.\n> \n> > However, in this case (replay of TRUNCATE during recovery), nobody steals the buffer: bgwriter or checkpointer doesn't use a buffer for a new block, and the client backend waits for AccessExclusive lock.\n> >\n> >\n\nI understood that you are thinking that the rechecking is useless.\n\n> Why would all client backends wait for AccessExclusive lock on this\n> relation? 
Say, a client needs a buffer for some other relation and\n> that might evict this buffer after we release the lock on the\n> partition. In StrategyGetBuffer, it is important to either have a pin\n> on the buffer or the buffer header itself must be locked to avoid\n> getting picked as victim buffer. Am I missing something?\n\nI think exactly like that. If we acquire the bufHdr lock before\nreleasing the partition lock, that steal doesn't happen but it doesn't\nseem good as a locking protocol.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 22 Dec 2020 11:42:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> Why would all client backends wait for AccessExclusive lock on this\r\n> relation? Say, a client needs a buffer for some other relation and\r\n> that might evict this buffer after we release the lock on the\r\n> partition. In StrategyGetBuffer, it is important to either have a pin\r\n> on the buffer or the buffer header itself must be locked to avoid\r\n> getting picked as victim buffer. Am I missing something?\r\n\r\nOuch, right. (The year-end business must be making me crazy...)\r\n\r\nSo, there are two choices here:\r\n\r\n1) The current patch.\r\n2) Acquire the buffer header spinlock before releasing the buffer mapping lwlock, and eliminate the buffer tag comparison as follows:\r\n\r\n BufTableLookup();\r\n LockBufHdr();\r\n LWLockRelease();\r\n InvalidateBuffer();\r\n\r\nI think both are okay. 
If I must choose either, I kind of prefer 1), because LWLockRelease() could take longer time to wake up other processes waiting on the lwlock, which is not very good to do while holding a spinlock.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Tue, 22 Dec 2020 02:48:22 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 22, 2020 at 8:18 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > Why would all client backends wait for AccessExclusive lock on this\n> > relation? Say, a client needs a buffer for some other relation and\n> > that might evict this buffer after we release the lock on the\n> > partition. In StrategyGetBuffer, it is important to either have a pin\n> > on the buffer or the buffer header itself must be locked to avoid\n> > getting picked as victim buffer. Am I missing something?\n>\n> Ouch, right. (The year-end business must be making me crazy...)\n>\n> So, there are two choices here:\n>\n> 1) The current patch.\n> 2) Acquire the buffer header spinlock before releasing the buffer mapping lwlock, and eliminate the buffer tag comparison as follows:\n>\n> BufTableLookup();\n> LockBufHdr();\n> LWLockRelease();\n> InvalidateBuffer();\n>\n> I think both are okay. If I must choose either, I kind of prefer 1), because LWLockRelease() could take longer time to wake up other processes waiting on the lwlock, which is not very good to do while holding a spinlock.\n>\n>\n\nI also prefer (1). 
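To spell out why choice 1) keeps the tag comparison, below is a standalone C sketch (simplified, hypothetical types — TagSketch, BufDescSketch, drop_if_still_ours — not the real bufmgr.c structures): once the buffer mapping partition lock is released after BufTableLookup(), the buffer can be recycled for another page, so the tag must be re-verified under the buffer header lock before InvalidateBuffer().

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified, hypothetical stand-ins for the buffer tag and descriptor. */
typedef struct TagSketch
{
    unsigned relnode;
    unsigned forknum;
    unsigned blocknum;
} TagSketch;

typedef struct BufDescSketch
{
    TagSketch tag;
    bool      valid;
} BufDescSketch;

static bool
tags_equal(TagSketch a, TagSketch b)
{
    return a.relnode == b.relnode &&
           a.forknum == b.forknum &&
           a.blocknum == b.blocknum;
}

/*
 * Called after the partition lock has been released: re-verify the tag
 * while holding the buffer header lock (elided here), invalidate only if
 * the buffer still holds the page we looked up, and report whether we
 * invalidated it.
 */
static bool
drop_if_still_ours(BufDescSketch *buf, TagSketch expect)
{
    /* LockBufHdr(buf) would be taken here */
    if (tags_equal(buf->tag, expect))
    {
        buf->valid = false;   /* InvalidateBuffer() releases the lock */
        return true;
    }
    /* UnlockBufHdr(buf) on the mismatch path */
    return false;
}
```

If another backend recycled the buffer for a different page in that window, the comparison fails and the buffer is left alone — exactly the case the recheck exists to catch.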
I will add some comments about the locking protocol\nin the next version of the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Dec 2020 08:24:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 22, 2020 at 8:12 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 22 Dec 2020 08:08:10 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n>\n> > Why would all client backends wait for AccessExclusive lock on this\n> > relation? Say, a client needs a buffer for some other relation and\n> > that might evict this buffer after we release the lock on the\n> > partition. In StrategyGetBuffer, it is important to either have a pin\n> > on the buffer or the buffer header itself must be locked to avoid\n> > getting picked as victim buffer. Am I missing something?\n>\n> I think exactly like that. If we acquire the bufHdr lock before\n> releasing the partition lock, that steal doesn't happen but it doesn't\n> seem good as a locking protocol.\n>\n\nRight, so let's keep the code as it is but I feel it is better to add\nsome comments explaining the rationale behind this code.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Dec 2020 08:27:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 22 Dec 2020 02:48:22 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > Why would all client backends wait for AccessExclusive lock on this\n> > relation? Say, a client needs a buffer for some other relation and\n> > that might evict this buffer after we release the lock on the\n> > partition. 
In StrategyGetBuffer, it is important to either have a pin\n> > on the buffer or the buffer header itself must be locked to avoid\n> > getting picked as victim buffer. Am I missing something?\n> \n> Ouch, right. (The year-end business must be making me crazy...)\n> \n> So, there are two choices here:\n> \n> 1) The current patch.\n> 2) Acquire the buffer header spinlock before releasing the buffer mapping lwlock, and eliminate the buffer tag comparison as follows:\n> \n> BufTableLookup();\n> LockBufHdr();\n> LWLockRelease();\n> InvalidateBuffer();\n> \n> I think both are okay. If I must choose either, I kind of prefer 1), because LWLockRelease() could take longer time to wake up other processes waiting on the lwlock, which is not very good to do while holding a spinlock.\n\nI like, as said before, the current patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 22 Dec 2020 12:00:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> Mmm. 
If that is true, doesn't the unoptimized path also need the\n> rechecking?\n\nYes, the traditional processing does the recheck after acquiring the buffer header spinlock.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n", "msg_date": "Tue, 22 Dec 2020 03:03:23 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Monday, December 21, 2020 10:25 PM, Amit Kapila wrote:\r\n> I have started doing minor edits to the patch especially planning to write a\r\n> theory why is this optimization safe and here is what I can come up with: \r\n> \"To\r\n> remove all the pages of the specified relation forks from the buffer pool, we\r\n> need to scan the entire buffer pool but we can optimize it by finding the\r\n> buffers from BufMapping table provided we know the exact size of each fork\r\n> of the relation. The exact size is required to ensure that we don't leave any\r\n> buffer for the relation being dropped as otherwise the background writer or\r\n> checkpointer can lead to a PANIC error while flushing buffers corresponding\r\n> to files that don't exist.\r\n> \r\n> To know the exact size, we rely on the size cached for each fork by us during\r\n> recovery which limits the optimization to recovery and on standbys but we\r\n> can easily extend it once we have shared cache for relation size.\r\n> \r\n> In recovery, we cache the value returned by the first lseek(SEEK_END) and\r\n> the future writes keeps the cached value up-to-date. See smgrextend. It is\r\n> possible that the value of the first lseek is smaller than the actual number of\r\n> existing blocks in the file due to buggy Linux kernels that might not have\r\n> accounted for the recent write. 
But that should be fine because there must\r\n> not be any buffers after that file size.\r\n> \r\n> XXX We would make the extra lseek call for the unoptimized paths but that is\r\n> okay because we do it just for the first fork and we anyway have to scan the\r\n> entire buffer pool the cost of which is so high that the extra lseek call won't\r\n> make any visible difference. However, we can use InRecovery flag to avoid the\r\n> additional cost but that doesn't seem worth it.\"\r\n> \r\n> Thoughts?\r\n\r\n+1 \r\nThank you very much for expanding the comments to carefully explain the\r\nreason on why the optimization is safe. I was also struggling to explain it completely\r\nbut your description also covers the possibility of extending the optimization in the\r\nfuture once we have shared cache for rel size. So I like this addition.\r\n\r\n(Also, it seems that we have concluded to retain the locking mechanism of the \r\nexisting patch based from the recent email exchanges. Both the traditional path and\r\nthe optimized path do the rechecking. So there seems to be no problem, I'm definitely\r\nfine with it.)\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Tue, 22 Dec 2020 06:25:25 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 22, 2020 at 8:30 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 22 Dec 2020 02:48:22 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\n> > From: Amit Kapila <amit.kapila16@gmail.com>\n> > > Why would all client backends wait for AccessExclusive lock on this\n> > > relation? Say, a client needs a buffer for some other relation and\n> > > that might evict this buffer after we release the lock on the\n> > > partition. 
In StrategyGetBuffer, it is important to either have a pin\n> > > on the buffer or the buffer header itself must be locked to avoid\n> > > getting picked as victim buffer. Am I missing something?\n> >\n> > Ouch, right. (The year-end business must be making me crazy...)\n> >\n> > So, there are two choices here:\n> >\n> > 1) The current patch.\n> > 2) Acquire the buffer header spinlock before releasing the buffer mapping lwlock, and eliminate the buffer tag comparison as follows:\n> >\n> > BufTableLookup();\n> > LockBufHdr();\n> > LWLockRelease();\n> > InvalidateBuffer();\n> >\n> > I think both are okay. If I must choose either, I kind of prefer 1), because LWLockRelease() could take longer time to wake up other processes waiting on the lwlock, which is not very good to do while holding a spinlock.\n>\n> I like, as said before, the current patch.\n>\n\nAttached, please find the updated patch with the following\nmodifications, (a) updated comments at various places especially to\ntell why this is a safe optimization, (b) merged the patch for\nextending the smgrnblocks and vacuum optimization patch, (c) made\nminor cosmetic changes and ran pgindent, and (d) updated commit\nmessage. BTW, this optimization will help not only vacuum but also\ntruncate when it is done in the same transaction in which the relation\nis created. I would like to see certain tests to ensure that the\nvalue we choose for BUF_DROP_FULL_SCAN_THRESHOLD is correct. I see\nthat some testing has been done earlier [1] for this threshold but I\nam not still able to conclude. The criteria to find the right\nthreshold should be what is the maximum size of relation to be\ntruncated above which we don't get benefit with this optimization.\n\nOne idea could be to remove \"nBlocksToInvalidate <\nBUF_DROP_FULL_SCAN_THRESHOLD\" part of check \"if (cached &&\nnBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\" so that it always\nuse optimized path for the tests. 
Then use the relation size as\nNBuffers/128, NBuffers/256, NBuffers/512 for different values of\nshared buffers as 128MB, 1GB, 20GB, 100GB.\n\nApart from tests, do let me know if you are happy with the changes in\nthe patch? Next, I'll look into DropRelFileNodesAllBuffers()\noptimization patch.\n\n[1] - https://www.postgresql.org/message-id/OSBPR01MB234176B1829AECFE9FDDFCC2EFE90%40OSBPR01MB2341.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 22 Dec 2020 14:55:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 22, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Apart from tests, do let me know if you are happy with the changes in\n> the patch? Next, I'll look into DropRelFileNodesAllBuffers()\n> optimization patch.\n>\n\nReview of v35-0004-Optimize-DropRelFileNodesAllBuffers-in-recovery [1]\n========================================================\n1.\nDropRelFileNodesAllBuffers()\n{\n..\n+buffer_full_scan:\n+ pfree(block);\n+ nodes = palloc(sizeof(RelFileNode) * n); /* non-local relations */\n+ for (i = 0; i < n; i++)\n+ nodes[i] = smgr_reln[i]->smgr_rnode.node;\n+\n..\n}\n\nHow is it correct to assign nodes array directly from smgr_reln? There\nis no one-to-one correspondence. 
If you see the code before patch, the\npassed array can have a mix of temp and non-temp relation information.\n\n2.\n+ for (i = 0; i < n; i++)\n {\n- pfree(nodes);\n+ for (j = 0; j <= MAX_FORKNUM; j++)\n+ {\n+ /*\n+ * Assign InvalidblockNumber to a block if a relation\n+ * fork does not exist, so that we can skip it later\n+ * when dropping the relation buffers.\n+ */\n+ if (!smgrexists(smgr_reln[i], j))\n+ {\n+ block[i][j] = InvalidBlockNumber;\n+ continue;\n+ }\n+\n+ /* Get the number of blocks for a relation's fork */\n+ block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);\n\nSimilar to above, how can we assume smgr_reln array has all non-local\nrelations? Have we tried the case with mix of temp and non-temp\nrelations?\n\nIn this code, I am slightly worried about the additional cost of each\ntime checking smgrexists. Consider a case where there are many\nrelations and only one or few of them have not cached the information,\nin such a case we will pay the cost of smgrexists for many relations\nwithout even going to the optimized path. Can we avoid that in some\nway or at least reduce its usage to only when it is required? One idea\ncould be that we first check if the nblocks information is cached and\nif so then we don't need to call smgrnblocks, otherwise, check if it\nexists. For this, we need an API like smgrnblocks_cached, something we\ndiscussed earlier but preferred the current API. 
Do you have any\nbetter ideas?\n\n\n[1] - https://www.postgresql.org/message-id/OSBPR01MB2341882F416A282C3F7D769DEFC70%40OSBPR01MB2341.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Dec 2020 17:41:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, December 22, 2020 6:25 PM, Amit Kapila wrote: \r\n> Attached, please find the updated patch with the following modifications, (a)\r\n> updated comments at various places especially to tell why this is a safe\r\n> optimization, (b) merged the patch for extending the smgrnblocks and\r\n> vacuum optimization patch, (c) made minor cosmetic changes and ran\r\n> pgindent, and (d) updated commit message. BTW, this optimization will help\r\n> not only vacuum but also truncate when it is done in the same transaction in\r\n> which the relation is created. I would like to see certain tests to ensure that\r\n> the value we choose for BUF_DROP_FULL_SCAN_THRESHOLD is correct. I\r\n> see that some testing has been done earlier [1] for this threshold but I am not\r\n> still able to conclude. The criteria to find the right threshold should be what is\r\n> the maximum size of relation to be truncated above which we don't get\r\n> benefit with this optimization.\r\n> \r\n> One idea could be to remove \"nBlocksToInvalidate <\r\n> BUF_DROP_FULL_SCAN_THRESHOLD\" part of check \"if (cached &&\r\n> nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\" so that it\r\n> always use optimized path for the tests. Then use the relation size as\r\n> NBuffers/128, NBuffers/256, NBuffers/512 for different values of shared\r\n> buffers as 128MB, 1GB, 20GB, 100GB.\r\n\r\nAlright. 
I will also repeat the tests with the different threshold settings, \r\nand thank you for the tip.\r\n\r\n> Apart from tests, do let me know if you are happy with the changes in the\r\n> patch? Next, I'll look into DropRelFileNodesAllBuffers() optimization patch.\r\n\r\nThank you, Amit.\r\nThat looks more neat, combining the previous patches 0002-0003, so I am +1\r\nwith the changes because of the clearer explanations for the threshold and\r\noptimization path in DropRelFileNodeBuffers. Thanks for cleaning my patch sets.\r\nHope we don't forget the 0001 patch's assertion in smgrextend() to ensure that we\r\ndo it safely too and that we are not InRecovery.\r\n\r\n> [1] -\r\n> https://www.postgresql.org/message-id/OSBPR01MB234176B1829AECFE9\r\n> FDDFCC2EFE90%40OSBPR01MB2341.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Wed, 23 Dec 2020 01:00:35 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Dec 23, 2020 at 6:30 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Tuesday, December 22, 2020 6:25 PM, Amit Kapila wrote:\n>\n> > Apart from tests, do let me know if you are happy with the changes in the\n> > patch? Next, I'll look into DropRelFileNodesAllBuffers() optimization patch.\n>\n> Thank you, Amit.\n> That looks more neat, combining the previous patches 0002-0003, so I am +1\n> with the changes because of the clearer explanations for the threshold and\n> optimization path in DropRelFileNodeBuffers. 
Thanks for cleaning my patch sets.\n> Hope we don't forget the 0001 patch's assertion in smgrextend() to ensure that we\n> do it safely too and that we are not InRecovery.\n>\n\nI think the 0001 is mostly for test purposes but we will see once the\nmain patches are ready.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Dec 2020 07:11:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tue, Dec 22, 2020 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 22, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Apart from tests, do let me know if you are happy with the changes in\n> > the patch? Next, I'll look into DropRelFileNodesAllBuffers()\n> > optimization patch.\n> >\n>\n> Review of v35-0004-Optimize-DropRelFileNodesAllBuffers-in-recovery [1]\n> ========================================================\n>\n> In this code, I am slightly worried about the additional cost of each\n> time checking smgrexists. Consider a case where there are many\n> relations and only one or few of them have not cached the information,\n> in such a case we will pay the cost of smgrexists for many relations\n> without even going to the optimized path. Can we avoid that in some\n> way or at least reduce its usage to only when it is required? One idea\n> could be that we first check if the nblocks information is cached and\n> if so then we don't need to call smgrnblocks, otherwise, check if it\n> exists. For this, we need an API like smgrnblocks_cahced, something we\n> discussed earlier but preferred the current API. Do you have any\n> better ideas?\n>\n\nOne more idea which is not better than what I mentioned above is that\nwe completely avoid calling smgrexists and rely on smgrnblocks. It\nwill throw an error in case the particular fork doesn't exist and we\ncan use try .. catch to handle it. 
I just mentioned it as it came\nacross my mind but I don't think it is better than the previous one.\n\nOne more thing about patch:\n+ /* Get the number of blocks for a relation's fork */\n+ block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);\n+\n+ if (!cached)\n+ goto buffer_full_scan;\n\nWhy do we need to use goto here? We can simply break from the loop and\nthen check if (cached && nBlocksToInvalidate <\nBUF_DROP_FULL_SCAN_THRESHOLD). I think we should try to avoid goto if\npossible without much complexity.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Dec 2020 09:27:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> + /* Get the number of blocks for a relation's fork */\r\n> + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);\r\n> +\r\n> + if (!cached)\r\n> + goto buffer_full_scan;\r\n> \r\n> Why do we need to use goto here? We can simply break from the loop and\r\n> then check if (cached && nBlocksToInvalidate <\r\n> BUF_DROP_FULL_SCAN_THRESHOLD). I think we should try to avoid goto if\r\n> possible without much complexity.\r\n\r\nThat's because two for loops are nested -- breaking there only exits the inner loop. (I thought the same as you at first... 
And I understand some people have alergy to goto, I think modest use of goto makes the code readable.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n", "msg_date": "Wed, 23 Dec 2020 04:22:19 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Wed, 23 Dec 2020 04:22:19 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > + /* Get the number of blocks for a relation's fork */\n> > + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);\n> > +\n> > + if (!cached)\n> > + goto buffer_full_scan;\n> > \n> > Why do we need to use goto here? We can simply break from the loop and\n> > then check if (cached && nBlocksToInvalidate <\n> > BUF_DROP_FULL_SCAN_THRESHOLD). I think we should try to avoid goto if\n> > possible without much complexity.\n> \n> That's because two for loops are nested -- breaking there only exits the inner loop. (I thought the same as you at first... And I understand some people have alergy to goto, I think modest use of goto makes the code readable.)\n\nI don't strongly oppose to goto's but in this case the outer loop can\nbreak on the same condition with the inner loop, since cached is true\nwhenever the inner loop runs to the end. 
It is needed to initialize\nthe variable cache with true, instead of false, though.\n\nThe same pattern is seen in the tree.\n\nRegards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 23 Dec 2020 14:12:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi,\nIt is possible to come out of the nested loop without goto.\n\n+ bool cached = true;\n...\n+ * to that fork during recovery.\n+ */\n+ for (i = 0; i < n && cached; i++)\n...\n+ if (!cached)\n+. break;\n\nHere I changed the initial value for cached to true so that we enter the\nouter loop.\nIn place of the original goto, we break out of inner loop and exit outer\nloop.\n\nCheers\n\nOn Tue, Dec 22, 2020 at 8:22 PM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > + /* Get the number of blocks for a relation's fork */\n> > + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);\n> > +\n> > + if (!cached)\n> > + goto buffer_full_scan;\n> >\n> > Why do we need to use goto here? We can simply break from the loop and\n> > then check if (cached && nBlocksToInvalidate <\n> > BUF_DROP_FULL_SCAN_THRESHOLD). I think we should try to avoid goto if\n> > possible without much complexity.\n>\n> That's because two for loops are nested -- breaking there only exits the\n> inner loop. (I thought the same as you at first... And I understand some\n> people have alergy to goto, I think modest use of goto makes the code\n> readable.)\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n>\n>\n>\n", "msg_date": "Tue, 22 Dec 2020 21:51:59 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Tuesday, December 22, 2020 9:11 PM, Amit Kapila wrote:\r\n> On Tue, Dec 22, 2020 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > Next, I'll look into DropRelFileNodesAllBuffers()\r\n> > optimization patch.\r\n> >\r\n> \r\n> Review of v35-0004-Optimize-DropRelFileNodesAllBuffers-in-recovery [1]\r\n> =================================================\r\n> =======\r\n> 1.\r\n> DropRelFileNodesAllBuffers()\r\n> {\r\n> ..\r\n> +buffer_full_scan:\r\n> + pfree(block);\r\n> + nodes = palloc(sizeof(RelFileNode) * n); /* non-local relations */\r\n> +for (i = 0; i < n; i++) nodes[i] = smgr_reln[i]->smgr_rnode.node;\r\n> +\r\n> ..\r\n> }\r\n> \r\n> How is it correct to assign nodes array directly from smgr_reln? There is no\r\n> one-to-one correspondence. 
If you see the code before patch, the passed\r\n> array can have mixed of temp and non-temp relation information.\r\n\r\nYou are right. I mistakenly removed the array node that should have been\r\nallocated for non-local relations. So I fixed that by doing:\r\n\r\n\tSMgrRelation\t*rels;\r\n\r\n\trels = palloc(sizeof(SMgrRelation) * nnodes);\t/* non-local relations */\r\n\r\n\t/* If it's a local relation, it's localbuf.c's problem. */\r\n\tfor (i = 0; i < nnodes; i++)\r\n\t{\r\n\t\tif (RelFileNodeBackendIsTemp(smgr_reln[i]->smgr_rnode))\r\n\t\t{\r\n\t\t\tif (smgr_reln[i]->smgr_rnode.backend == MyBackendId)\r\n\t\t\t\tDropRelFileNodeAllLocalBuffers(smgr_reln[i]->smgr_rnode.node);\r\n\t\t}\r\n\t\telse\r\n\t\t\trels[n++] = smgr_reln[i];\r\n\t}\r\n...\r\n\tif (n == 0)\r\n\t{\r\n\t\tpfree(rels);\r\n\t\treturn;\r\n\t}\r\n...\r\n//traditional path:\r\n\r\n\tpfree(block);\r\n\tnodes = palloc(sizeof(RelFileNode) * n); /* non-local relations */\r\n\tfor (i = 0; i < n; i++)\r\n\t\tnodes[i] = rels[i]->smgr_rnode.node;\r\n\r\n> 2.\r\n> + for (i = 0; i < n; i++)\r\n> {\r\n> - pfree(nodes);\r\n> + for (j = 0; j <= MAX_FORKNUM; j++)\r\n> + {\r\n> + /*\r\n> + * Assign InvalidblockNumber to a block if a relation\r\n> + * fork does not exist, so that we can skip it later\r\n> + * when dropping the relation buffers.\r\n> + */\r\n> + if (!smgrexists(smgr_reln[i], j))\r\n> + {\r\n> + block[i][j] = InvalidBlockNumber;\r\n> + continue;\r\n> + }\r\n> +\r\n> + /* Get the number of blocks for a relation's fork */ block[i][j] =\r\n> + smgrnblocks(smgr_reln[i], j, &cached);\r\n> \r\n> Similar to above, how can we assume smgr_reln array has all non-local\r\n> relations? Have we tried the case with mix of temp and non-temp relations?\r\n\r\nSimilar to above reply.\r\n\r\n> In this code, I am slightly worried about the additional cost of each time\r\n> checking smgrexists. 
Consider a case where there are many relations and only\r\n> one or few of them have not cached the information, in such a case we will\r\n> pay the cost of smgrexists for many relations without even going to the\r\n> optimized path. Can we avoid that in some way or at least reduce its usage to\r\n> only when it is required? One idea could be that we first check if the nblocks\r\n> information is cached and if so then we don't need to call smgrnblocks,\r\n> otherwise, check if it exists. For this, we need an API like\r\n> smgrnblocks_cahced, something we discussed earlier but preferred the\r\n> current API. Do you have any better ideas?\r\n\r\nRight. I understand the point that let's say there are 100 relations, and\r\nthe first 99 non-local relations happen to enter the optimization path, but the last\r\none does not, calling smgrexist() would be too costly and waste of time in that case.\r\nThe only solution I could think of as you mentioned is to reintroduce the new API\r\nwhich we discussed before: smgrnblocks_cached().\r\nIt's possible that we call smgrexist() only if smgrnblocks_cached() returns\r\nInvalidBlockNumber.\r\nSo if everyone agrees, we can add that API smgrnblocks_cached() which will\r\nInclude the \"cached\" flag parameter, and remove the additional flag modifications\r\nfrom smgrnblocks(). In this case, both DropRelFileNodeBuffers() and\r\nDropRelFileNodesAllBuffers() will use the new API.\r\n\r\nThoughts?\r\n\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Wed, 23 Dec 2020 07:37:38 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Dec 23, 2020 at 1:07 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Tuesday, December 22, 2020 9:11 PM, Amit Kapila wrote:\n>\n> > In this code, I am slightly worried about the additional cost of each time\n> > checking smgrexists. 
Consider a case where there are many relations and only\n> > one or few of them have not cached the information, in such a case we will\n> > pay the cost of smgrexists for many relations without even going to the\n> > optimized path. Can we avoid that in some way or at least reduce its usage to\n> > only when it is required? One idea could be that we first check if the nblocks\n> > information is cached and if so then we don't need to call smgrnblocks,\n> > otherwise, check if it exists. For this, we need an API like\n> > smgrnblocks_cahced, something we discussed earlier but preferred the\n> > current API. Do you have any better ideas?\n>\n> Right. I understand the point that let's say there are 100 relations, and\n> the first 99 non-local relations happen to enter the optimization path, but the last\n> one does not, calling smgrexist() would be too costly and waste of time in that case.\n> The only solution I could think of as you mentioned is to reintroduce the new API\n> which we discussed before: smgrnblocks_cached().\n> It's possible that we call smgrexist() only if smgrnblocks_cached() returns\n> InvalidBlockNumber.\n> So if everyone agrees, we can add that API smgrnblocks_cached() which will\n> Include the \"cached\" flag parameter, and remove the additional flag modifications\n> from smgrnblocks(). 
In this case, both DropRelFileNodeBuffers() and\n> DropRelFileNodesAllBuffers() will use the new API.\n>\n\nYeah, let's do it that way unless anyone has a better idea to suggest.\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Dec 2020 14:21:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Dec 23, 2020 at 10:42 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 23 Dec 2020 04:22:19 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in\n> > From: Amit Kapila <amit.kapila16@gmail.com>\n> > > + /* Get the number of blocks for a relation's fork */\n> > > + block[i][j] = smgrnblocks(smgr_reln[i], j, &cached);\n> > > +\n> > > + if (!cached)\n> > > + goto buffer_full_scan;\n> > >\n> > > Why do we need to use goto here? We can simply break from the loop and\n> > > then check if (cached && nBlocksToInvalidate <\n> > > BUF_DROP_FULL_SCAN_THRESHOLD). I think we should try to avoid goto if\n> > > possible without much complexity.\n> >\n> > That's because two for loops are nested -- breaking there only exits the inner loop. (I thought the same as you at first... And I understand some people have alergy to goto, I think modest use of goto makes the code readable.)\n>\n> I don't strongly oppose to goto's but in this case the outer loop can\n> break on the same condition with the inner loop, since cached is true\n> whenever the inner loop runs to the end. It is needed to initialize\n> the variable cache with true, instead of false, though.\n>\n\n+1. 
I think it is better to avoid goto here as it can be done without\nintroducing any complexity or making code any less readable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 23 Dec 2020 14:27:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, December 23, 2020 5:57 PM (GMT+9), Amit Kapila wrote:\r\n> >\r\n> > At Wed, 23 Dec 2020 04:22:19 +0000, \"tsunakawa.takay@fujitsu.com\"\r\n> > <tsunakawa.takay@fujitsu.com> wrote in\r\n> > > From: Amit Kapila <amit.kapila16@gmail.com>\r\n> > > > + /* Get the number of blocks for a relation's fork */ block[i][j]\r\n> > > > + = smgrnblocks(smgr_reln[i], j, &cached);\r\n> > > > +\r\n> > > > + if (!cached)\r\n> > > > + goto buffer_full_scan;\r\n> > > >\r\n> > > > Why do we need to use goto here? We can simply break from the loop\r\n> > > > and then check if (cached && nBlocksToInvalidate <\r\n> > > > BUF_DROP_FULL_SCAN_THRESHOLD). I think we should try to avoid\r\n> goto\r\n> > > > if possible without much complexity.\r\n> > >\r\n> > > That's because two for loops are nested -- breaking there only exits\r\n> > > the inner loop. (I thought the same as you at first... And I\r\n> > > understand some people have alergy to goto, I think modest use of\r\n> > > goto makes the code readable.)\r\n> >\r\n> > I don't strongly oppose to goto's but in this case the outer loop can\r\n> > break on the same condition with the inner loop, since cached is true\r\n> > whenever the inner loop runs to the end. It is needed to initialize\r\n> > the variable cache with true, instead of false, though.\r\n> >\r\n> \r\n> +1. I think it is better to avoid goto here as it can be done without\r\n> introducing any complexity or making code any less readable.\r\n\r\nI also do not mind, so I have removed the goto and followed the advice\r\nof all reviewers. 
It works fine in the latest attached patch 0003.\r\n\r\nAttached herewith are the sets of patches. 0002 & 0003 have the following\r\nchanges:\r\n\r\n1. I have removed the modifications in smgrnblocks(). So the modifications of \r\nother functions that use smgrnblocks() in the previous patch versions were\r\nalso reverted.\r\n2. Introduced a new API smgrnblocks_cached() instead which returns either\r\na cached size for the specified fork or an InvalidBlockNumber.\r\nSince InvalidBlockNumber is used, I think it is logical not to use the additional\r\nboolean parameter \"cached\" in the function as it will be redundant.\r\nAlthough in 0003, I only used the \"cached\" as a Boolean variable to do the trick\r\nof not using goto.\r\nThis function is called both in DropRelFileNodeBuffers() and DropRelFileNodesAllBuffers().\r\n3. Modified some minor comments from the patch and commit logs.\r\n\r\nIt compiles. Passes the regression tests too.\r\nYour feedback is definitely welcome.\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Wed, 23 Dec 2020 12:57:24 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>\r\n> It compiles. Passes the regression tests too.\r\n> Your feedback is definitely welcome.\r\n\r\nThe code looks correct and has become further compact. 
Remains ready for committer.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 23 Dec 2020 13:58:42 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Amit, Kirk\r\n\r\n>One idea could be to remove \"nBlocksToInvalidate <\r\n>BUF_DROP_FULL_SCAN_THRESHOLD\" part of check \"if (cached &&\r\n>nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\" so that it always\r\n>use optimized path for the tests. Then use the relation size as\r\n>NBuffers/128, NBuffers/256, NBuffers/512 for different values of\r\n>shared buffers as 128MB, 1GB, 20GB, 100GB.\r\n\r\nI followed your idea: I removed the check and used a different relation size for each shared_buffers setting of 128M, 1G, 20G, and 50G (my environment can't support 100G, so I chose 50G).\r\nAccording to the results, all three relation sizes get the optimization, even NBuffers/128, when shared_buffers > 128M.\r\nIMHO, NBuffers/128 is the maximum relation size among the three for which we get the optimization. Please let me know if I made something wrong. 
\r\n\r\nRecovery after vacuum test results as below ' Optimized percentage' and ' Optimization details(unit: second)' shows:\r\n(512),(256),(128): means relation size is NBuffers/512, NBuffers/256, NBuffers/128\r\n%reg: means (patched(512)- master(512))/ master(512)\r\n\r\nOptimized percentage:\r\nshared_buffers\t%reg(512)\t%reg(256)\t%reg(128)\r\n-----------------------------------------------------------------\r\n128M\t\t0%\t\t-1%\t\t-1%\r\n1G \t\t-65%\t\t-49%\t\t-62%\r\n20G \t\t-98%\t\t-98%\t\t-98%\r\n50G \t\t-99%\t\t-99%\t\t-99%\r\n\r\nOptimization details(unit: second):\r\nshared_buffers\tmaster(512)\tpatched(512)\tmaster(256)\tpatched(256)\tmaster(128)\tpatched(128)\r\n-----------------------------------------------------------------------------------------------------------------------------\r\n128M\t\t0.108\t\t0.108\t\t0.109\t\t0.108\t\t0.109\t\t0.108\r\n1G\t\t0.310 \t\t0.107 \t\t0.410 \t\t0.208 \t\t0.811 \t\t0.309\r\n20G \t\t94.493 \t\t1.511 \t\t188.777 \t3.014 \t\t380.633 \t6.020\r\n50G\t\t537.978\t\t3.815\t\t867.453\t\t7.524\t\t1559.076\t15.541\r\n\r\nTest prepare:\r\nBelow is test table amount for different shared buffers. 
Each table size is 8k, so I use table amount = NBuffers/(512 or 256 or 128):\r\nshared_buffers\tNBuffers\tNBuffers/512\tNBuffers/256\tNBuffers/128\r\n-------------------------------------------------------------------------------------------\r\n128M\t\t16384\t\t32\t\t64\t\t128\r\n1G\t\t131072\t\t256\t\t512\t\t1024\r\n20G\t\t2621440\t 5120\t\t10240\t\t20480\r\n50G\t\t6553600\t 12800\t\t25600\t\t51200\r\n\r\nBesides, I also did single table performance test.\r\nStill, NBuffers/128 is the max relation size which we can get optimization.\r\n\r\nOptimized percentage:\r\nshared_buffers\t%reg(512)\t%reg(256)\t%reg(128)\r\n-----------------------------------------------------------------\r\n128M\t\t0%\t\t0%\t\t-1%\r\n1G \t\t0%\t\t1%\t\t0%\r\n20G \t\t0%\t\t-24%\t\t-25%\r\n50G \t\t0%\t\t-24%\t\t-20%\r\n\r\nOptimization details(unit: second):\r\nshared_buffers\tmaster(512)\tpatched(512)\tmaster(256)\tpatched(256)\tmaster(128)\tpatched(128)\r\n-----------------------------------------------------------------------------------------------------------------------------\r\n128M\t\t0.107\t\t0.107\t\t0.108\t\t0.108\t\t0.108\t\t0.107\r\n1G\t\t0.108 \t\t0.108 \t\t0.107 \t\t0.108 \t\t0.108 \t\t0.108\r\n20G\t\t0.208 \t\t0.208 \t\t0.409 \t\t0.309 \t\t0.409 \t\t0.308\r\n50G\t\t0.309 \t\t0.308 \t\t0.408 \t\t0.309 \t\t0.509 \t\t0.408\r\n\r\nAny question on my test results is welcome.\r\n\r\nRegards,\r\nTang\r\n\n\n", "msg_date": "Thu, 24 Dec 2020 09:01:37 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, Dec 24, 2020 at 2:31 PM Tang, Haiying\n<tanghy.fnst@cn.fujitsu.com> wrote:\n>\n> Hi Amit, Kirk\n>\n> >One idea could be to remove \"nBlocksToInvalidate <\n> >BUF_DROP_FULL_SCAN_THRESHOLD\" part of check \"if (cached &&\n> >nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\" so that it always\n> >use optimized path for the tests. 
Then use the relation size as\n> >NBuffers/128, NBuffers/256, NBuffers/512 for different values of\n> >shared buffers as 128MB, 1GB, 20GB, 100GB.\n>\n> I followed your idea to remove check and use different relation size for different shared buffers as 128M,1G,20G,50G(my environment can't support 100G, so I choose 50G).\n> According to results, all three thresholds can get optimized, even NBuffers/128 when shared_buffers > 128M.\n> IMHO, I think NBuffers/128 is the maximum relation size we can get optimization in the three thresholds, Please let me know if I made something wrong.\n>\n\nBut how can we conclude NBuffers/128 is the maximum relation size?\nBecause the maximum size would be where the performance is worse than\nthe master, no? I guess we need to try by NBuffers/64, NBuffers/32,\n.... till we get the threshold where master performs better.\n\n> Recovery after vacuum test results as below ' Optimized percentage' and ' Optimization details(unit: second)' shows:\n> (512),(256),(128): means relation size is NBuffers/512, NBuffers/256, NBuffers/128\n> %reg: means (patched(512)- master(512))/ master(512)\n>\n> Optimized percentage:\n> shared_buffers %reg(512) %reg(256) %reg(128)\n> -----------------------------------------------------------------\n> 128M 0% -1% -1%\n> 1G -65% -49% -62%\n> 20G -98% -98% -98%\n> 50G -99% -99% -99%\n>\n> Optimization details(unit: second):\n> shared_buffers master(512) patched(512) master(256) patched(256) master(128) patched(128)\n> -----------------------------------------------------------------------------------------------------------------------------\n> 128M 0.108 0.108 0.109 0.108 0.109 0.108\n> 1G 0.310 0.107 0.410 0.208 0.811 0.309\n> 20G 94.493 1.511 188.777 3.014 380.633 6.020\n> 50G 537.978 3.815 867.453 7.524 1559.076 15.541\n>\n\nI think we should find a better way to display these numbers because\nin cases like where master takes 537.978s and patch takes 3.815s, it\nis clear that patch has reduced the time by more 
than 100 times\nwhereas in your table it shows 99%.\n\n> Test prepare:\n> Below is test table amount for different shared buffers. Each table size is 8k,\n>\n\nTable size should be more than 8k to get all this data because 8k\nmeans just one block. I guess either it is a typo or some other\nmistake.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 24 Dec 2020 17:41:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, December 24, 2020 6:02 PM JST, Tang, Haiying wrote:\r\n> Hi Amit, Kirk\r\n> \r\n> >One idea could be to remove \"nBlocksToInvalidate <\r\n> >BUF_DROP_FULL_SCAN_THRESHOLD\" part of check \"if (cached &&\r\n> >nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\" so that it\r\n> always\r\n> >use optimized path for the tests. Then use the relation size as\r\n> >NBuffers/128, NBuffers/256, NBuffers/512 for different values of shared\r\n> >buffers as 128MB, 1GB, 20GB, 100GB.\r\n> \r\n> I followed your idea to remove check and use different relation size for\r\n> different shared buffers as 128M,1G,20G,50G(my environment can't support\r\n> 100G, so I choose 50G).\r\n> According to results, all three thresholds can get optimized, even\r\n> NBuffers/128 when shared_buffers > 128M.\r\n> IMHO, I think NBuffers/128 is the maximum relation size we can get\r\n> optimization in the three thresholds, Please let me know if I made something\r\n> wrong.\r\n \r\n\r\nHello Tang,\r\nThank you very much again for testing. Perhaps there is a confusing part in the\r\npresented table where you indicated master(512), master(256), master(128). 
\r\nBecause the master is not supposed to use the BUF_DROP_FULL_SCAN_THRESHOLD\r\nand just execute the existing default full scan of NBuffers.\r\nOr I may have misunderstood something?\r\n\r\n> Recovery after vacuum test results as below ' Optimized percentage' and '\r\n> Optimization details(unit: second)' shows:\r\n> (512),(256),(128): means relation size is NBuffers/512, NBuffers/256,\r\n> NBuffers/128\r\n> %reg: means (patched(512)- master(512))/ master(512)\r\n> \r\n> Optimized percentage:\r\n> shared_buffers%reg(512)%reg(256)%reg(128)\r\n> -----------------------------------------------------------------\r\n> 128M0%-1%-1%\r\n> 1G -65%-49%-62%\r\n> 20G -98%-98%-98%\r\n> 50G -99%-99%-99%\r\n> \r\n> Optimization details(unit: second):\r\n> shared_buffersmaster(512)patched(512)master(256)patched(256)master(12\r\n> 8)patched(128)\r\n> -------------------------------------------------------------------------------------\r\n> ----------------------------------------\r\n> 128M0.1080.1080.1090.1080.1090.108\r\n> 1G0.310 0.107 0.410 0.208 0.811 0.309\r\n> 20G 94.493 1.511 188.777 3.014 380.633 6.020\r\n> 50G537.9783.815867.4537.5241559.07615.541\r\n> \r\n> Test prepare:\r\n> Below is test table amount for different shared buffers. 
Each table size is 8k,\r\n> so I use table amount = NBuffers/(512 or 256 or 128):\r\n> shared_buffersNBuffersNBuffers/512NBuffers/256NBuffers/128\r\n> -------------------------------------------------------------------------------------\r\n> ------\r\n> 128M163843264128\r\n> 1G1310722565121024\r\n> 20G2621440 51201024020480\r\n> 50G6553600 128002560051200\r\n> \r\n> Besides, I also did single table performance test.\r\n> Still, NBuffers/128 is the max relation size which we can get optimization.\r\n> \r\n> Optimized percentage:\r\n> shared_buffers%reg(512)%reg(256)%reg(128)\r\n> -----------------------------------------------------------------\r\n> 128M0%0%-1%\r\n> 1G 0%1%0%\r\n> 20G 0%-24%-25%\r\n> 50G 0%-24%-20%\r\n> \r\n> Optimization details(unit: second):\r\n> shared_buffersmaster(512)patched(512)master(256)patched(256)master(12\r\n> 8)patched(128)\r\n> -------------------------------------------------------------------------------------\r\n> ----------------------------------------\r\n> 128M0.1070.1070.1080.1080.1080.107\r\n> 1G0.108 0.108 0.107 0.108 0.108 0.108\r\n> 20G0.208 0.208 0.409 0.309 0.409 0.308\r\n> 50G0.309 0.308 0.408 0.309 0.509 0.408\r\n\r\nI will also post results from my machine in the next email.\r\nAdding what Amit mentioned that we should also test for NBuffers/64, etc.\r\nuntil we determine which of the threshold performs worse than master.\r\n\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Thu, 24 Dec 2020 13:29:53 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Dec 23, 2020 at 6:27 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n>\n> It compiles. Passes the regression tests too.\n> Your feedbacks are definitely welcome.\n>\n\nThanks, the patches look good to me now. 
I have slightly edited the\npatches for comments, commit messages, and removed the duplicate\ncode/check in smgrnblocks. I have changed the order of patches (moved\nAssert related patch to last because as mentioned earlier, I am not\nsure if we want to commit it.). We might still have to change the scan\nthreshold value based on your and Tang-San's results.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 24 Dec 2020 19:37:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Kirk,\r\n\r\n\r\n>Perhaps there is a confusing part in the presented table where you indicated master(512), master(256), master(128). \r\n>Because the master is not supposed to use the BUF_DROP_FULL_SCAN_THRESHOLD and just execute the existing default full scan of NBuffers.\r\n>Or I may have misunderstood something?\r\n\r\nSorry for your confusion, I didn't make it clear. I didn't use BUF_DROP_FULL_SCAN_THRESHOLD for master. \r\nMaster(512) means the test table amount in master is same with patched(512), so does master(256) and master(128).\r\nI meant to mark 512/256/128 to distinguish results in master for the three threshold(applied in patches) .\r\n\r\nRegards\r\nTang\r\n\n\n", "msg_date": "Fri, 25 Dec 2020 02:23:21 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Amit,\r\n\r\n>But how can we conclude NBuffers/128 is the maximum relation size?\r\n>Because the maximum size would be where the performance is worse than \r\n>the master, no? I guess we need to try by NBuffers/64, NBuffers/32, \r\n>.... 
till we get the threshold where master performs better.\r\n\r\nYou are right, we should keep on testing until no optimization.\r\n\r\n>I think we should find a better way to display these numbers because in \r\n>cases like where master takes 537.978s and patch takes 3.815s\r\n\r\nYeah, I think we can change the %reg formula from (patched- master)/ master to (patched- master)/ patched.\r\n\r\n>Table size should be more than 8k to get all this data because 8k means \r\n>just one block. I guess either it is a typo or some other mistake.\r\n\r\n8k here is the relation size, not data size. \r\nFor example, when I tested recovery performance of 400M relation size, I used 51200 tables(8k per table).\r\nPlease let me know if you think this is not appropriate.\r\n\r\nRegards\r\nTang\r\n\r\n-----Original Message-----\r\nFrom: Amit Kapila <amit.kapila16@gmail.com> \r\nSent: Thursday, December 24, 2020 9:11 PM\r\nTo: Tang, Haiying/唐 海英 <tanghy.fnst@cn.fujitsu.com>\r\nCc: Tsunakawa, Takayuki/綱川 貴之 <tsunakawa.takay@fujitsu.com>; Jamison, Kirk/ジャミソン カーク <k.jamison@fujitsu.com>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; Andres Freund <andres@anarazel.de>; Tom Lane <tgl@sss.pgh.pa.us>; Thomas Munro <thomas.munro@gmail.com>; Robert Haas <robertmhaas@gmail.com>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; pgsql-hackers <pgsql-hackers@postgresql.org>\r\nSubject: Re: [Patch] Optimize dropping of relation buffers using dlist\r\n\r\nOn Thu, Dec 24, 2020 at 2:31 PM Tang, Haiying <tanghy.fnst@cn.fujitsu.com> wrote:\r\n>\r\n> Hi Amit, Kirk\r\n>\r\n> >One idea could be to remove \"nBlocksToInvalidate < \r\n> >BUF_DROP_FULL_SCAN_THRESHOLD\" part of check \"if (cached && \r\n> >nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\" so that it \r\n> >always use optimized path for the tests. 
Then use the relation size \r\n> >as NBuffers/128, NBuffers/256, NBuffers/512 for different values of \r\n> >shared buffers as 128MB, 1GB, 20GB, 100GB.\r\n>\r\n> I followed your idea to remove check and use different relation size for different shared buffers as 128M,1G,20G,50G(my environment can't support 100G, so I choose 50G).\r\n> According to results, all three thresholds can get optimized, even NBuffers/128 when shared_buffers > 128M.\r\n> IMHO, I think NBuffers/128 is the maximum relation size we can get optimization in the three thresholds, Please let me know if I made something wrong.\r\n>\r\n\r\nBut how can we conclude NBuffers/128 is the maximum relation size?\r\nBecause the maximum size would be where the performance is worse than the master, no? I guess we need to try by NBuffers/64, NBuffers/32, .... till we get the threshold where master performs better.\r\n\r\n> Recovery after vacuum test results as below ' Optimized percentage' and ' Optimization details(unit: second)' shows:\r\n> (512),(256),(128): means relation size is NBuffers/512, NBuffers/256, \r\n> NBuffers/128\r\n> %reg: means (patched(512)- master(512))/ master(512)\r\n>\r\n> Optimized percentage:\r\n> shared_buffers %reg(512) %reg(256) %reg(128)\r\n> -----------------------------------------------------------------\r\n> 128M 0% -1% -1%\r\n> 1G -65% -49% -62%\r\n> 20G -98% -98% -98%\r\n> 50G -99% -99% -99%\r\n>\r\n> Optimization details(unit: second):\r\n> shared_buffers master(512) patched(512) master(256) patched(256) master(128) patched(128)\r\n> -----------------------------------------------------------------------------------------------------------------------------\r\n> 128M 0.108 0.108 0.109 0.108 0.109 0.108\r\n> 1G 0.310 0.107 0.410 0.208 0.811 0.309\r\n> 20G 94.493 1.511 188.777 3.014 380.633 6.020\r\n> 50G 537.978 3.815 867.453 7.524 1559.076 15.541\r\n>\r\n\r\nI think we should find a better way to display these numbers because in cases like where master takes 537.978s and 
patch takes 3.815s, it is clear that patch has reduced the time by more than 100 times whereas in your table it shows 99%.\r\n\r\n> Test prepare:\r\n> Below is test table amount for different shared buffers. Each table \r\n> size is 8k,\r\n>\r\n\r\nTable size should be more than 8k to get all this data because 8k means just one block. I guess either it is a typo or some other mistake.\r\n\r\n--\r\nWith Regards,\r\nAmit Kapila.\r\n\r\n\r\n\n\n", "msg_date": "Fri, 25 Dec 2020 03:58:11 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Dec 25, 2020 at 9:28 AM Tang, Haiying\n<tanghy.fnst@cn.fujitsu.com> wrote:\n>\n> Hi Amit,\n>\n> >But how can we conclude NBuffers/128 is the maximum relation size?\n> >Because the maximum size would be where the performance is worse than\n> >the master, no? I guess we need to try by NBuffers/64, NBuffers/32,\n> >.... till we get the threshold where master performs better.\n>\n> You are right, we should keep on testing until no optimization.\n>\n> >I think we should find a better way to display these numbers because in\n> >cases like where master takes 537.978s and patch takes 3.815s\n>\n> Yeah, I think we can change the %reg formula from (patched- master)/ master to (patched- master)/ patched.\n>\n> >Table size should be more than 8k to get all this data because 8k means\n> >just one block. I guess either it is a typo or some other mistake.\n>\n> 8k here is the relation size, not data size.\n> For example, when I tested recovery performance of 400M relation size, I used 51200 tables(8k per table).\n> Please let me know if you think this is not appropriate.\n>\n\nI think one table with a varying amount of data is sufficient for the\nvacuum test. I think with more number of tables there is a greater\nchance of variation. 
We have previously used multiple tables in one of\nthe tests because of the Truncate operation (which uses\nDropRelFileNodesAllBuffers that takes multiple relations as input) and\nthat is not true for Vacuum operation which I suppose you are testing\nhere.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 25 Dec 2020 10:01:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Amit,\r\n\r\n>I think one table with a varying amount of data is sufficient for the vacuum test. \r\n>I think with more number of tables there is a greater chance of variation. \r\n>We have previously used multiple tables in one of the tests because of the \r\n>Truncate operation (which uses DropRelFileNodesAllBuffers that takes multiple relations as input) \r\n>and that is not true for Vacuum operation which I suppose you are testing here.\r\n\r\nThanks for your advice and kindly explanation. I'll continue the threshold test with one single table.\r\n\r\nRegards,\r\nTang\r\n\n\n", "msg_date": "Fri, 25 Dec 2020 05:41:09 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Amit,\r\n\r\n>I think one table with a varying amount of data is sufficient for the vacuum test. \r\n>I think with more number of tables there is a greater chance of variation. 
\r\n>We have previously used multiple tables in one of the tests because of \r\n>the Truncate operation (which uses DropRelFileNodesAllBuffers that \r\n>takes multiple relations as input) and that is not true for Vacuum operation which I suppose you are testing here.\r\n\r\nI retested performance on single table for several times, the table size is varying with the BUF_DROP_FULL_SCAN_THRESHOLD for different shared buffers.\r\nWhen shared_buffers is below 20G, there were no significant changes between master(HEAD) and patched.\r\nAnd according to the results compared between 20G and 100G, we can get optimization up to NBuffers/128, but there is no benefit from NBuffers/256.\r\nI've tested many times, most times the same results came out, I don't know why. But If I used 5 tables(each table size is set as BUF_DROP_FULL_SCAN_THRESHOLD), then we can get benefit from it(NBuffers/256).\r\n\r\nHere is my test results for single table. If you have any question or suggestion, kindly let me know.\r\n\r\n%reg= (patched- master(HEAD))/ patched\r\nOptimized percentage:\r\nshared_buffers\t%reg(NBuffers/512)\t%reg(NBuffers/256)\t%reg(NBuffers/128)\t%reg(NBuffers/64)\t%reg(NBuffers/32)\t%reg(NBuffers/16)\t%reg(NBuffers/8)\t%reg(NBuffers/4)\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n128M\t\t\t0%\t\t\t0%\t\t\t-1%\t\t\t0%\t\t\t1%\t\t\t0%\t\t\t0%\t\t\t0%\r\n1G\t\t\t-1%\t\t\t0%\t\t\t-1%\t\t\t0%\t\t\t0%\t\t\t0%\t\t\t0%\t\t\t0%\r\n20G\t\t\t0%\t\t\t0%\t\t\t-33%\t\t\t0%\t\t\t0%\t\t\t-13%\t\t\t0%\t\t\t0%\r\n100G\t\t\t-32%\t\t\t0%\t\t\t-49%\t\t\t0%\t\t\t10%\t\t\t30%\t\t\t0%\t\t\t6%\r\n\r\nResult details(unit: second):\r\npatched\t 
(sec)\t\t\t\t\t\r\nshared_buffers\tNBuffers/512\tNBuffers/256\tNBuffers/128\tNBuffers/64\tNBuffers/32\tNBuffers/16\tNBuffers/8\tNBuffers/4\r\n128M\t\t0.107\t\t0.107\t\t0.107\t\t0.107\t\t0.108\t\t0.107\t\t0.108\t\t0.208\r\n1G\t\t0.107\t\t0.107\t\t0.107 \t\t0.108 \t\t0.208 \t\t0.208 \t\t0.308 \t\t0.409 \r\n20G\t\t0.208 \t\t0.308 \t\t0.308 \t\t0.409 \t\t0.609 \t\t0.808 \t\t1.511 \t\t2.713 \r\n100G\t\t0.309 \t\t0.408 \t\t0.609 \t\t1.010 \t\t2.011 \t\t5.017 \t\t6.620 \t\t13.931\r\n\r\nmaster(HEAD) (sec)\t\t\t\t\t\r\nshared_buffers\tNBuffers/512\tNBuffers/256\tNBuffers/128\tNBuffers/64\tNBuffers/32\tNBuffers/16\tNBuffers/8\tNBuffers/4\r\n128M\t\t0.107\t\t0.107\t\t0.108\t\t0.107\t\t0.107\t\t0.107\t\t0.108\t\t0.208\r\n1G\t\t0.108 \t\t0.107 \t\t0.108 \t\t0.108 \t\t0.208 \t\t0.207 \t\t0.308 \t\t0.409 \r\n20G\t\t0.208 \t\t0.309 \t\t0.409 \t\t0.409 \t\t0.609 \t\t0.910 \t\t1.511 \t\t2.712 \r\n100G\t\t0.408 \t\t0.408 \t\t0.909 \t\t1.010 \t\t1.811 \t\t3.515 \t\t6.619 \t\t13.032\r\n\r\nRegards\r\nTang\r\n\r\n\n\n", "msg_date": "Mon, 28 Dec 2020 08:15:15 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Amit,\r\n\r\nIn last mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e6257182564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),\r\nI've sent you the performance test results(run only 1 time) on single table. Here is my the retested results(average by 15 times) which I think is more accurate.\r\n\r\nIn terms of 20G and 100G, the optimization on 100G is linear, but 20G is nonlinear(also include test results on shared buffers of 50G/60G), so it's a little difficult to decide the threshold from the two for me. \r\nIf just consider 100G, I think NBuffers/32 is the optimized max relation size. But I don't know how to judge for 20G. 
If you have any suggestion, kindly let me know.\r\n\r\n#%reg\t\t\t128M\t1G\t20G\t100G\r\n---------------------------------------------------------------\r\n%reg(NBuffers/512)\t 0%\t-1%\t-5%\t-26%\r\n%reg(NBuffers/256)\t 0%\t 0%\t 5%\t-20%\r\n%reg(NBuffers/128)\t-1%\t-1%\t-10%\t-16%\r\n%reg(NBuffers/64)\t-1%\t 0%\t 0%\t -8%\t\r\n%reg(NBuffers/32)\t 0%\t 0%\t-2%\t-4%\r\n%reg(NBuffers/16)\t 0%\t 0%\t-6%\t 4%\r\n%reg(NBuffers/8)\t 1%\t 0%\t 2%\t-2%\r\n%reg(NBuffers/4)\t 0%\t 0%\t 2%\t 2%\r\n\r\nOptimization details(unit: second):\r\npatched\t (sec)\t\t\t\t\t\r\nshared_buffers\tNBuffers/512\tNBuffers/256\tNBuffers/128\tNBuffers/64\tNBuffers/32\tNBuffers/16\tNBuffers/8\tNBuffers/4\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------\r\n128M\t\t0.107\t\t0.107\t\t0.107\t\t0.107\t\t0.107\t\t0.107\t\t0.108\t\t0.208\r\n1G\t\t0.107\t\t0.108\t\t0.107 \t\t0.108 \t\t0.208 \t\t0.208 \t\t0.308 \t\t0.409 \r\n20G\t\t0.199 \t\t0.299 \t\t0.317 \t\t0.408 \t\t0.591 \t\t0.900 \t\t1.561 \t\t2.866 \r\n100G\t\t0.318 \t\t0.381 \t\t0.645 \t\t0.992 \t\t1.913 \t\t3.640 \t\t6.615 \t\t13.389\r\n\r\nmaster(HEAD) (sec)\t\t\t\t\t\r\nshared_buffers\tNBuffers/512\tNBuffers/256\tNBuffers/128\tNBuffers/64\tNBuffers/32\tNBuffers/16\tNBuffers/8\tNBuffers/4\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------\r\n128M\t\t0.107\t\t0.107\t\t0.108\t\t0.108\t\t0.107\t\t0.107\t\t0.107\t\t0.208\r\n1G\t\t0.108 \t\t0.108 \t\t0.108 \t\t0.108 \t\t0.208 \t\t0.207 \t\t0.308 \t\t0.409 \r\n20G\t\t0.208 \t\t0.283 \t\t0.350 \t\t0.408 \t\t0.601 \t\t0.955 \t\t1.529 \t\t2.806 \r\n100G\t\t0.400 \t\t0.459 \t\t0.751 \t\t1.068 \t\t1.984 \t\t3.506 \t\t6.735 \t\t13.101\r\n\r\nRegards\r\nTang\r\n\n\n", "msg_date": "Wed, 30 Dec 2020 05:57:52 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": 
false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Dec 30, 2020 at 11:28 AM Tang, Haiying\n<tanghy.fnst@cn.fujitsu.com> wrote:\n>\n> Hi Amit,\n>\n> In last mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e6257182564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),\n> I've sent you the performance test results(run only 1 time) on single table. Here is my the retested results(average by 15 times) which I think is more accurate.\n>\n> In terms of 20G and 100G, the optimization on 100G is linear, but 20G is nonlinear(also include test results on shared buffers of 50G/60G), so it's a little difficult to decide the threshold from the two for me.\n> If just consider 100G, I think NBuffers/32 is the optimized max relation size. But I don't know how to judge for 20G. If you have any suggestion, kindly let me know.\n>\n\nConsidering these results NBuffers/64 seems a good threshold as beyond\nthat there is no big advantage. BTW, it is not clear why the advantage\nfor single table is not as big as multiple tables with the Truncate\ncommand. 
Can you share your exact test steps for any one of the tests?\nAlso, did you change autovacumm = off for these tests, if not then the\nresults might not be reliable because before you run the test via\nVacuum command autovacuum would have done that work?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 30 Dec 2020 17:28:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wednesday, December 30, 2020 8:58 PM, Amit Kapila wrote:\r\n> On Wed, Dec 30, 2020 at 11:28 AM Tang, Haiying\r\n> <tanghy.fnst@cn.fujitsu.com> wrote:\r\n> >\r\n> > Hi Amit,\r\n> >\r\n> > In last\r\n> >\r\n> mail(https://www.postgresql.org/message-id/66851e198f6b41eda59e625718\r\n> 2\r\n> > 564b6%40G08CNEXMBPEKD05.g08.fujitsu.local),\r\n> > I've sent you the performance test results(run only 1 time) on single table.\r\n> Here is my the retested results(average by 15 times) which I think is more\r\n> accurate.\r\n> >\r\n> > In terms of 20G and 100G, the optimization on 100G is linear, but 20G is\r\n> nonlinear(also include test results on shared buffers of 50G/60G), so it's a\r\n> little difficult to decide the threshold from the two for me.\r\n> > If just consider 100G, I think NBuffers/32 is the optimized max relation size.\r\n> But I don't know how to judge for 20G. If you have any suggestion, kindly let\r\n> me know.\r\n> >\r\n> \r\n> Considering these results NBuffers/64 seems a good threshold as beyond\r\n> that there is no big advantage. BTW, it is not clear why the advantage for\r\n> single table is not as big as multiple tables with the Truncate command. Can\r\n> you share your exact test steps for any one of the tests?\r\n> Also, did you change autovacumm = off for these tests, if not then the results\r\n> might not be reliable because before you run the test via Vacuum command\r\n> autovacuum would have done that work?\r\n\r\nHappy new year. 
The V38 LGTM.\r\nApologies for a bit of delay on posting the test results, but since it's the\r\nstart of commitfest, here it goes and the results were interesting.\r\n\r\nI executed a VACUUM test using the same approach that Tsunakawa-san did in [1],\r\nbut only this time, the total time that DropRelFileNodeBuffers() took.\r\nI used only a single relation, tried with various sizes using the values of threshold:\r\nNBuffers/512..NBuffers/1, as advised by Amit.\r\n\r\nExample of relation sizes for NBuffers/512.\r\n100GB shared_buffers: 200 MB \r\n20GB shared_buffers: 40 MB\r\n1GB shared_buffers: 2 MB\r\n128MB shared_buffers: 0.25 MB\r\n\r\nThe regression, which means the patch performs worse than master, only happens\r\nfor relation size NBuffers/2 and below for all shared_buffers. The fastest\r\nperformance on a single relation was using the relation size NBuffers/512.\r\n\r\n[VACUUM Recovery Performance on Single Relation]\r\nLegend: P_XXX (Patch, NBuffers/XXX relation size),\r\n M_XXX (Master, Nbuffers/XXX relation size)\r\nUnit: seconds\r\n\r\n| Rel Size | 100 GB s_b | 20 GB s_b | 1 GB s_b | 128 MB s_b | \r\n|----------|------------|------------|------------|------------| \r\n| P_512 | 0.012594 | 0.001989 | 0.000081 | 0.000012 | \r\n| M_512 | 0.208757 | 0.046212 | 0.002013 | 0.000295 | \r\n| P_256 | 0.026311 | 0.004416 | 0.000129 | 0.000021 | \r\n| M_256 | 0.241017 | 0.047234 | 0.002363 | 0.000298 | \r\n| P_128 | 0.044684 | 0.009784 | 0.000290 | 0.000042 | \r\n| M_128 | 0.253588 | 0.047952 | 0.002454 | 0.000319 | \r\n| P_64 | 0.065806 | 0.017444 | 0.000521 | 0.000075 | \r\n| M_64 | 0.268311 | 0.050361 | 0.002730 | 0.000339 | \r\n| P_32 | 0.121441 | 0.033431 | 0.001646 | 0.000112 | \r\n| M_32 | 0.285254 | 0.061486 | 0.003640 | 0.000364 | \r\n| P_16 | 0.255503 | 0.065492 | 0.001663 | 0.000144 | \r\n| M_16 | 0.377013 | 0.081613 | 0.003731 | 0.000372 | \r\n| P_8 | 0.560616 | 0.109509 | 0.005954 | 0.000465 | \r\n| M_8 | 0.692596 | 0.112178 | 0.006667 | 0.000553 | 
\r\n| P_4 | 1.109437 | 0.162924 | 0.011229 | 0.000861 | \r\n| M_4 | 1.162125 | 0.178764 | 0.011635 | 0.000935 | \r\n| P_2 | 2.202231 | 0.317832 | 0.020783 | 0.002646 | \r\n| M_2 | 1.583959 | 0.306269 | 0.015705 | 0.002021 | \r\n| P_1 | 3.080032 | 0.632747 | 0.032183 | 0.002660 | \r\n| M_1 | 2.705485 | 0.543970 | 0.030658 | 0.001941 | \r\n\r\n%reg = (Patched - Master)/Patched\r\n\r\n| %reg_relsize | 100 GB s_b | 20 GB s_b | 1 GB s_b | 128 MB s_b | \r\n|--------------|------------|------------|------------|------------| \r\n| %reg_512 | -1557.587% | -2223.006% | -2385.185% | -2354.167% | \r\n| %reg_256 | -816.041% | -969.691% | -1731.783% | -1319.048% | \r\n| %reg_128 | -467.514% | -390.123% | -747.008% | -658.333% | \r\n| %reg_64 | -307.727% | -188.704% | -423.992% | -352.000% | \r\n| %reg_32 | -134.891% | -83.920% | -121.097% | -225.970% | \r\n| %reg_16 | -47.557% | -24.614% | -124.279% | -157.390% | \r\n| %reg_8 | -23.542% | -2.437% | -11.967% | -19.010% | \r\n| %reg_4 | -4.749% | -9.722% | -3.608% | -8.595% | \r\n| %reg_2 | 28.075% | 3.638% | 24.436% | 23.615% | \r\n| %reg_1 | 12.160% | 14.030% | 4.739% | 27.010% | \r\n\r\nSince our goal is to get the approximate threshold where the cost of finding the buffers to be invalidated becomes higher in the optimized path than in the traditional path:\r\nA. Traditional Path\r\n 1. For each buffer in shared_buffers, compare the relfilenode.\r\n 2. LockBufHdr()\r\n 3. Compare block number, InvalidateBuffers() if it's the target.\r\nB. Optimized Path\r\n 1. For each block in relation, LWLockAcquire(), BufTableLookup(),\r\n and LWLockRelease().\r\n 2-3. 
Same as traditional path.\r\n\r\nSo we have to get the difference in #1, where the two paths differ:\r\nthe traditional path checks all NBuffers buffers, while the optimized\r\npath does one lookup per to-be-invalidated buffer.\r\nThe cost of the optimized path will get higher than that of the\r\ntraditional path at some threshold.\r\n\r\nNBuffers * traditional_cost_for_each_buf_check < \r\n InvalidatedBuffers * optimized_cost_for_each_buf_check\r\n\r\nSo what we want to know as the threshold value is the InvalidatedBuffers.\r\nNBuffers * traditional / optimized < InvalidatedBuffers.\r\n\r\nExample for 100GB shared_buffers for rel_size NBuffers/512:\r\n 100000(MB) * 0.208757 (s) / 0.012594 (s) = 1,657,587 MB,\r\n which is still above the value of 100,000 MB.\r\n\r\n| s_b | 100000 | 20000 | 1000 | 128 | \r\n|--------------|-----------|---------|--------|-------| \r\n| NBuffers/512 | 1,657,587 | 464,601 | 24,852 | 3,141 | \r\n| NBuffers/256 | 916,041 | 213,938 | 18,318 | 1,816 | \r\n| NBuffers/128 | 567,514 | 98,025 | 8,470 | 971 | \r\n| NBuffers/64 | 407,727 | 57,741 | 5,240 | 579 | \r\n| NBuffers/32 | 234,891 | 36,784 | 2,211 | 417 | \r\n| NBuffers/16 | 147,557 | 24,923 | 2,243 | 329 | \r\n| NBuffers/8 | 123,542 | 20,487 | 1,120 | 152 | \r\n| NBuffers/4 | 104,749 | 21,944 | 1,036 | 139 | \r\n| NBuffers/2 | 71,925 | 19,272 | 756 | 98 | \r\n| NBuffers/1 | 87,840 | 17,194 | 953 | 93 | \r\n\r\nAlthough the above table shows that NBuffers/2 would be the\r\nthreshold, I know that the cost would vary depending on the machine\r\nspecs. 
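As a quick sanity check, the arithmetic above can be reproduced with a few lines of Python (a sketch only; the function name is made up here, and the timings are the measured averages from the tables above):

```python
# Break-even point implied by the inequality above, rearranged as
#   InvalidatedBuffers > NBuffers * t_traditional / t_optimized.
# The shared_buffers size in MB stands in for NBuffers, so the
# result is a break-even relation size in MB.
def breakeven_mb(shared_buffers_mb, t_master_s, t_patched_s):
    return shared_buffers_mb * t_master_s / t_patched_s

# 100GB shared_buffers, relation size NBuffers/512 (values from the table)
print(f"{breakeven_mb(100000, 0.208757, 0.012594):,.0f} MB")
```

Running the same calculation down each column reproduces the threshold table above.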
I think I can suggest the threshold and pick one from among\r\nNBuffers/2, NBuffers/4 or NBuffers/8, because their values are closer\r\nto the InvalidatedBuffers.\r\n\r\n\r\n[postgresql.conf]\r\nshared_buffers = 100GB #20GB,1GB,128MB\r\nautovacuum = off\r\nfull_page_writes = off\r\ncheckpoint_timeout = 30min\r\nmax_locks_per_transaction = 10000\r\n\r\n[Machine Specs Used]\r\nIntel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz\r\n8 CPUs, 256GB Memory\r\nXFS, RHEL7.2\r\n\r\nKindly let me know if you have comments regarding the results.\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n[1] https://www.postgresql.org/message-id/TYAPR01MB2990C4EFE63F066F83D2A603FEE70%40TYAPR01MB2990.jpnprd01.prod.outlook.com\r\n", "msg_date": "Sat, 2 Jan 2021 14:17:49 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Sat, Jan 2, 2021 at 7:47 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Happy new year. The V38 LGTM.\n> Apologies for a bit of delay on posting the test results, but since it's the\n> start of commitfest, here it goes and the results were interesting.\n>\n> I executed a VACUUM test using the same approach that Tsunakawa-san did in [1],\n> but only this time, the total time that DropRelFileNodeBuffers() took.\n>\n\nPlease specify the exact steps, like did you delete all the rows from\na table, some of them, or none before performing Vacuum? How did you\nmeasure this time, and did you remove the cached check? 
It would be\nbetter if you share the scripts and/or the exact steps so that the\nsame can be used by others to reproduce.\n\n> I used only a single relation, tried with various sizes using the values of threshold:\n> NBuffers/512..NBuffers/1, as advised by Amit.\n>\n> Example of relation sizes for NBuffers/512.\n> 100GB shared_buffers: 200 MB\n> 20GB shared_buffers: 40 MB\n> 1GB shared_buffers: 2 MB\n> 128MB shared_buffers: 0.25 MB\n>\n..\n>\n> Although the above table shows that NBuffers/2 would be the\n> threshold, I know that the cost would vary depending on the machine\n> specs. I think I can suggest the threshold and pick one from among\n> NBuffers/2, NBuffers/4 or NBuffers/8, because their values are closer\n> to the InvalidatedBuffers.\n>\n\nHmm, in the tests done by Tang, the results indicate that in some\ncases the patched version is slower at even NBuffers/32, so not sure\nif we can go to values shown by you unless she is doing something\nwrong. I think the difference in results could be because both of you\nare using different techniques to measure the timings, so it might be\nbetter if both of you can share scripts or exact steps used to measure\nthe time and the other can use the same technique and see if we are\ngetting consistent results.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 3 Jan 2021 19:04:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Amit,\r\n\r\nSorry for my late reply. Here are my answers to your earlier questions.\r\n\r\n>BTW, it is not clear why the advantage for single table is not as big as multiple tables with the Truncate command\r\nI guess it's the number of table blocks that caused this difference. 
For the single table, the number of blocks I tested was at the threshold.\r\nFor multiple tables, the number of blocks per table was a value (one, dozens, or hundreds) far below the threshold.\r\nThe closer the number of table blocks is to the threshold, the smaller the advantage.\r\n\r\nI tested the 3 situations below with 50 tables when shared_buffers=20G / 100G.\r\n1. For multiple tables which had one or dozens or hundreds of blocks (far below the threshold) per table, we got significant improvement, like [1]. \r\n2. For multiple tables which have half the threshold blocks per table, the advantage became smaller, like [2].\r\n3. For multiple tables which have the threshold blocks per table, the advantage became even smaller, like [3].\r\n\r\n[1]. 247 blocks per table\r\ns_b\tmaster\t\tpatched\t\t%reg((patched-master)/patched)\r\n----------------------------------------------------\r\n20GB \t1.109\t\t0.108\t\t-927%\r\n100GB\t3.113\t\t0.108\t\t-2782%\r\n\r\n[2]. NBuffers/256/2 blocks per table\r\ns_b\tmaster\t\tpatched\t\t%reg\r\n----------------------------------------------------\r\n20GB \t2.012\t\t1.210\t\t-66%\r\n100GB\t10.226\t\t6.4\t\t-60%\r\n\r\n[3]. NBuffers/256 blocks per table\r\ns_b\tmaster\t\tpatched\t\t%reg\r\n----------------------------------------------------\r\n20GB \t3.868\t\t2.412\t\t-60%\r\n100GB\t14.977\t\t10.591\t\t-41%\r\n\r\n>Can you share your exact test steps for any one of the tests? Also, did you change autovacuum = off for these tests?\r\nYes, I configured a streaming replication environment as Kirk did before.\r\nautovacuum = off. \r\nfull_page_writes = off. \r\ncheckpoint_timeout = 30min\r\n\r\nTest steps: \r\ne.g. shared_buffers=20G, NBuffers/512, table blocks = 20*1024*1024/8/512 = 5120, table size (kB) = 20*1024*1024/512 = 40960 kB \r\n1. (Master) create table test(id int, v_ch varchar, v_ch1 varchar); \r\n2. (Master) insert about 40MB of data into the table.\r\n3. (Master) delete from table (all rows of table) \r\n4. 
(Standby) To test with failover, pause WAL replay on the standby server.\r\n SELECT pg_wal_replay_pause();\r\n5. (Master) VACUUM;\r\n6. (Master) Stop the primary server: pg_ctl stop -D $PGDATA -w\r\n7. (Standby) Resume WAL replay and promote the standby. (get the recovery time from this step)\r\n\r\nRegards\r\nTang\r\n\n\n", "msg_date": "Mon, 4 Jan 2021 03:31:01 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Sunday, January 3, 2021 10:35 PM (JST), Amit Kapila wrote:\r\n> On Sat, Jan 2, 2021 at 7:47 PM k.jamison@fujitsu.com\r\n> <k.jamison@fujitsu.com> wrote:\r\n> >\r\n> > Happy new year. The V38 LGTM.\r\n> > Apologies for a bit of delay on posting the test results, but since\r\n> > it's the start of commitfest, here it goes and the results were interesting.\r\n> >\r\n> > I executed a VACUUM test using the same approach that Tsunakawa-san\r\n> > did in [1], but only this time, the total time that DropRelFileNodeBuffers()\r\n> took.\r\n> >\r\n> \r\n> Please specify the exact steps like did you deleted all the rows from a table or\r\n> some of it or none before performing Vacuum? How did you measure this\r\n> time, did you removed the cached check? It would be better if you share the\r\n> scripts and or the exact steps so that the same can be used by others to\r\n> reproduce.\r\n\r\nBasically, I used the TimestampDifference function in DropRelFileNodeBuffers().\r\nI also executed DELETE before VACUUM.\r\nI also removed the \"nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD\" check\r\nand used the threshold as the relation size.\r\n\r\n> Hmm, in the tests done by Tang, the results indicate that in some cases the\r\n> patched version is slower at even NBuffers/32, so not sure if we can go to\r\n> values shown by you unless she is doing something wrong. 
I think the\r\n> difference in results could be because both of you are using different\r\n> techniques to measure the timings, so it might be better if both of you can\r\n> share scripts or exact steps used to measure the time and other can use the\r\n> same technique and see if we are getting consistent results.\r\n\r\nRight, since we want consistent results, please disregard the approach that I did.\r\nI will resume the test similar to Tang, because she also executed the original failover\r\ntest which I have been doing before.\r\nTo avoid confusion and to check if the results from mine and Tang are consistent,\r\nI also did the recovery/failover test for VACUUM on single relation, which I will send\r\nin a separate email after this.\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Wed, 6 Jan 2021 10:03:42 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, January 6, 2021 7:04 PM (JST), I wrote:\r\n> I will resume the test similar to Tang, because she also executed the original\r\n> failover test which I have been doing before.\r\n> To avoid confusion and to check if the results from mine and Tang are\r\n> consistent, I also did the recovery/failover test for VACUUM on single relation,\r\n> which I will send in a separate email after this.\r\n\r\nA. 
Test to find the right THRESHOLD\r\n\r\nSo below are the procedures and results of the VACUUM recovery performance\r\ntest on a single relation.\r\nI followed the advice below and applied the supplementary patch on top of V39:\r\n Test-for-threshold.patch\r\nThis will ensure that we'll always enter the optimized path.\r\nWe then use the threshold as the relation size.\r\n\r\n> >One idea could be to remove \"nBlocksToInvalidate < \r\n> >BUF_DROP_FULL_SCAN_THRESHOLD\" part of check \"if (cached && \r\n> >nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)\" so that it \r\n> >always use optimized path for the tests. Then use the relation size \r\n> >as NBuffers/128, NBuffers/256, NBuffers/512 for different values of \r\n> >shared buffers as 128MB, 1GB, 20GB, 100GB.\r\n\r\nEach relation size is NBuffers/XXX, so I used the attached \"rel.sh\" script\r\nto test from NBuffers/512 until NBuffers/8 relation size per shared_buffers.\r\nI did not go further beyond 8 because it took too much time, and I could\r\nalready observe significant results by that point.\r\n\r\n[Vacuum Recovery Performance on Single Relation]\r\n1. Setup synchronous streaming replication. I used the configuration\r\n written at the bottom of this email.\r\n2. [Primary] Create 1 table. (rel.sh create)\r\n3. [Primary] Insert data of NBuffers/XXX size. Make sure to use the correct\r\n size for the set shared_buffers by commenting out the right size in \"insert\"\r\n of rel.sh script. (rel.sh insert)\r\n4. [Primary] Delete table. (rel.sh delete)\r\n5. [Standby] Optional: To double-check that DELETE is reflected on standby.\r\n SELECT count(*) FROM tableXXX;\r\n Make sure it returns 0.\r\n6. [Standby] Pause WAL replay. (rel.sh pause)\r\n (This script will execute SELECT pg_wal_replay_pause(); .)\r\n7. [Primary] VACUUM the single relation. (rel.sh vacuum)\r\n8. [Primary] After the vacuum finishes, stop the server. (rel.sh stop)\r\n (The script will execute pg_ctl stop -D $PGDATA -w -mi)\r\n9. 
[Standby] Resume WAL replay and promote the standby.\r\n (rel.sh resume)\r\n It basically prints a timestamp when resuming WAL replay,\r\n and prints another timestamp when the promotion is done.\r\n Compute the time difference.\r\n\r\n[Results for VACUUM on single relation]\r\nAverage of 5 runs.\r\n\r\n1. % REGRESSION\r\n% Regression: (patched - master)/master\r\n\r\n| rel_size | 128MB | 1GB | 20GB | 100GB | \r\n|----------|--------|--------|--------|----------| \r\n| NB/512 | 0.000% | 0.000% | 0.000% | -32.680% | \r\n| NB/256 | 0.000% | 0.000% | 0.000% | 0.000% | \r\n| NB/128 | 0.000% | 0.000% | 0.000% | -16.502% | \r\n| NB/64 | 0.000% | 0.000% | 0.000% | -9.841% | \r\n| NB/32 | 0.000% | 0.000% | 0.000% | -6.219% | \r\n| NB/16 | 0.000% | 0.000% | 0.000% | 3.323% | \r\n| NB/8 | 0.000% | 0.000% | 0.000% | 8.178% |\r\n\r\nFor 100GB shared_buffers, we can observe regression\r\nbeyond NBuffers/32. So with this, we can conclude\r\nthat NBuffers/32 is the right threshold.\r\nFor NBuffers/16 and beyond, the patched version performs\r\nworse than master. In other words, the cost of finding\r\nthe buffers to be invalidated gets higher in the optimized path\r\nthan in the traditional path.\r\n\r\nSo in the attached V39 patches, I have updated the threshold\r\nBUF_DROP_FULL_SCAN_THRESHOLD to NBuffers/32.\r\n\r\n2. [PATCHED]\r\nUnits: Seconds\r\n\r\n| rel_size | 128MB | 1GB | 20GB | 100GB | \r\n|----------|-------|-------|-------|-------| \r\n| NB/512 | 0.106 | 0.106 | 0.106 | 0.206 | \r\n| NB/256 | 0.106 | 0.106 | 0.106 | 0.306 | \r\n| NB/128 | 0.106 | 0.106 | 0.206 | 0.506 | \r\n| NB/64 | 0.106 | 0.106 | 0.306 | 0.907 | \r\n| NB/32 | 0.106 | 0.106 | 0.406 | 1.508 | \r\n| NB/16 | 0.106 | 0.106 | 0.706 | 3.109 | \r\n| NB/8 | 0.106 | 0.106 | 1.307 | 6.614 |\r\n\r\n3. 
MASTER\r\nUnits: Seconds\r\n\r\n| rel_size | 128MB | 1GB | 20GB | 100GB | \r\n|----------|-------|-------|-------|-------| \r\n| NB/512 | 0.106 | 0.106 | 0.106 | 0.306 | \r\n| NB/256 | 0.106 | 0.106 | 0.106 | 0.306 | \r\n| NB/128 | 0.106 | 0.106 | 0.206 | 0.606 | \r\n| NB/64 | 0.106 | 0.106 | 0.306 | 1.006 | \r\n| NB/32 | 0.106 | 0.106 | 0.406 | 1.608 | \r\n| NB/16 | 0.106 | 0.106 | 0.706 | 3.009 | \r\n| NB/8 | 0.106 | 0.106 | 1.307 | 6.114 |\r\n\r\nI used the following configurations:\r\n[postgresql.conf]\r\nshared_buffers = 100GB #20GB,1GB,128MB\r\nautovacuum = off\r\nfull_page_writes = off\r\ncheckpoint_timeout = 30min\r\nmax_locks_per_transaction = 10000\r\nmax_wal_size = 20GB\r\n\r\n# For streaming replication from primary. Don't uncomment on Standby.\r\nsynchronous_commit = remote_write\r\nsynchronous_standby_names = 'walreceiver'\r\n\r\n# For Standby. Don't uncomment on Primary.\r\n# hot_standby = on\r\n#primary_conninfo = 'host=... user=... port=... application_name=walreceiver'\r\n\r\n----------\r\nB. Regression Test using the NBuffers/32 Threshold (V39 Patches)\r\n\r\nFor this one, we do NOT need the supplementary Test-for-threshold.patch.\r\nApply only the V39 patches.\r\nBut instead of using the \"rel.sh\" test script, please use the attached \"test.sh\".\r\nSimilar to the tests I did before for 1000 relations, I executed the recovery\r\nperformance test, now with the threshold NBuffers/32.\r\nThe configuration setting in postgresql.conf is similar to the test above.\r\n\r\nEach relation has 1 block, 8kB size. Total of 1000 relations.\r\n\r\nThe test procedure is almost the same as in A, so I'll just summarize it:\r\n1. Setup synchronous streaming replication and config settings.\r\n2. [Primary] test.sh create\r\n (The test.sh script will create 1000 tables)\r\n3. [Primary] test.sh insert\r\n4. [Primary] test.sh delete (Skip step 4-5 for TRUNCATE test)\r\n5. [Standby] Optional for VACUUM test: To double-check that DELETE\r\n is reflected on standby. 
SELECT count(*) FROM tableXXX;\r\n Make sure it returns 0.\r\n6. [Standby] test.sh pause\r\n7. [Primary] \"test.sh vacuum\" for VACUUM test\r\n \"test.sh truncate\" for TRUNCATE test\r\n8. [Primary] If #7 is done, test.sh stop\r\n9. [Standby] If primary is fully stopped, run \"test.sh resume\".\r\n Compute the time difference.\r\n\r\n[Results for VACUUM Recovery Performance for 1000 relations]\r\nUnit is in seconds. Average of 5 executions.\r\n% regression = (patched-master)/master\r\n\r\n| s_b | Master | Patched | %reg | \r\n|--------|--------|---------|---------| \r\n| 128 MB | 0.306 | 0.306 | 0.00% | \r\n| 1 GB | 0.506 | 0.306 | -39.53% | \r\n| 20 GB | 14.522 | 0.306 | -97.89% | \r\n| 100 GB | 66.564 | 0.306 | -99.54% |\r\n\r\n[Results for TRUNCATE Recovery Performance for 1000 relations]\r\nUnit is in seconds. Average of 5 executions.\r\n% regression = (patched-master)/master\r\n\r\n| s_b | Master | Patched | %reg | \r\n|--------|--------|---------|---------| \r\n| 128 MB | 0.206 | 0.206 | 0.00% | \r\n| 1 GB | 0.506 | 0.206 | -59.29% | \r\n| 20 GB | 16.476 | 0.206 | -98.75% | \r\n| 100 GB | 88.261 | 0.206 | -99.77% |\r\n\r\nThe results for the patched version were constant for all shared_buffers\r\nsettings for both TRUNCATE and VACUUM.\r\nThat means we can gain huge performance benefits with the patch.\r\n\r\nThe performance benefits have been tested a lot, so there's no question\r\nabout that. So I think the final decision on the threshold value will come\r\nonce the results are consistent with others'. For now, in my test results,\r\nthe threshold NBuffers/32 is what I concluded. It's already indicated in\r\nthe attached V39 patch set.\r\n\r\n[Specs Used]\r\nIntel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz\r\n8 CPUs, 256GB Memory\r\nXFS, RHEL7.2, latest Postgres(Head version)\r\n\r\nFeedback is definitely welcome. \r\nAnd if you want to test, I have already indicated the detailed steps\r\nincluding the scripts I used. 
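As a sanity check, the %reg column in the tables above can be recomputed with a short Python snippet (a sketch only; the function name is made up, and the timings are the measured averages):

```python
# % regression as defined above: (patched - master) / master.
# Negative values mean the patched version is faster than master.
def pct_reg(master_s, patched_s):
    return (patched_s - master_s) / master_s * 100.0

# 20 GB shared_buffers rows of the VACUUM and TRUNCATE results
print(f"{pct_reg(14.522, 0.306):.2f}%")  # VACUUM:   -97.89%
print(f"{pct_reg(16.476, 0.206):.2f}%")  # TRUNCATE: -98.75%
```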
Have fun testing!\r\n\r\nRegards,\r\nKirk Jamison", "msg_date": "Wed, 6 Jan 2021 13:13:03 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "Hi Kirk,\r\n\r\n>And if you want to test, I have already indicated the detailed steps including the scripts I used. Have fun testing!\r\n\r\nThank you for sharing the test steps and scripts. I'd like to take a look at them and redo some of the tests using my machine. I'll send my test results in a separate email after this.\r\n\r\nRegards,\r\nTang\r\n\n\n", "msg_date": "Wed, 6 Jan 2021 15:03:35 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": ">I'd like to take a look at them and redo some of the tests using my machine. I'll send my test results in a separate email after this.\r\n\r\nI did the same tests with Kirk's scripts using the latest patch on my own machine. The results look pretty good and similar to Kirk's. 
\r\n\r\naverage of 5 runs.\r\n\r\n[VACUUM failover test for 1000 relations] Unit is second, %reg=(patched-master)/ master\r\n\r\n| s_b\t\t| Master\t| Patched\t| %reg\t\t| \r\n|--------------|---------------|--------------|--------------| \r\n| 128 MB\t| 0.408\t\t| 0.308 \t| -24.44% \t| \r\n| 1 GB\t\t| 0.809\t\t| 0.308 \t| -61.94%\t| \r\n| 20 GB\t\t| 12.529 \t| 0.308 \t| -97.54%\t| \r\n| 100 GB \t| 59.310 \t| 0.369 \t| -99.38%\t|\r\n\r\n[TRUNCATE failover test for 1000 relations] Unit is second, %reg=(patched-master)/ master\r\n\r\n| s_b\t\t| Master\t| Patched\t| %reg\t\t| \r\n|--------------|---------------|--------------|--------------| \r\n| 128 MB\t| 0.287\t\t| 0.207 \t| -27.91% \t| \r\n| 1 GB\t\t| 0.688\t\t| 0.208 \t| -69.84%\t| \r\n| 20 GB\t\t| 12.449 \t| 0.208 \t| -98.33%\t| \r\n| 100 GB \t| 61.800 \t| 0.207 \t| -99.66%\t|\r\n\r\nBesides, I did the test for threshold value again. (I rechecked my test process and found out that I forgot to check the data synchronization state on standby which may introduce some NOISE to my results.)\r\nThe following results show we can't get optimize over NBuffers/32 just like Kirk's test results, so I do approve with Kirk on the threshold value.\r\n\r\n%regression:\r\n| rel_size |128MB|1GB|20GB| 100GB |\r\n|----------|----|----|----|-------| \r\n| NB/512 | 0% | 0% | 0% | -48% | \r\n| NB/256 | 0% | 0% | 0% | -33% | \r\n| NB/128 | 0% | 0% | 0% | -9% | \r\n| NB/64 | 0% | 0% | 0% | -5% | \r\n| NB/32 | 0% | 0% |-4% | -3% | \r\n| NB/16 | 0% | 0% |-4% | 1% | \r\n| NB/8 | 1% | 0% | 1% | 3% |\r\n\r\nOptimization details(unit: second):\r\npatched:\r\nshared_buffers\tNBuffers/512\tNBuffers/256\tNBuffers/128\tNBuffers/64\tNBuffers/32\tNBuffers/16\tNBuffers/8\r\n-------------------------------------------------------------------------------------------------------------------------------------\r\n128M\t\t0.107 \t\t0.107 \t\t0.107 \t\t0.106 \t\t0.107 \t\t0.107 \t\t0.107 \r\n1G\t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 
\t\t0.107 \r\n20G\t\t0.107 \t\t0.108 \t\t0.207 \t\t0.307 \t\t0.442 \t\t0.876 \t\t1.577 \r\n100G\t\t0.208 \t\t0.308 \t\t0.559 \t\t1.060 \t\t1.961 \t\t4.567 \t\t7.922 \r\n\r\nmaster:\r\nshared_buffers\tNBuffers/512\tNBuffers/256\tNBuffers/128\tNBuffers/64\tNBuffers/32\tNBuffers/16\tNBuffers/8\r\n-------------------------------------------------------------------------------------------------------------------------------------\r\n128M\t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.106 \r\n1G\t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \t\t0.107 \r\n20G\t\t0.107 \t\t0.107 \t\t0.208 \t\t0.308 \t\t0.457 \t\t0.910 \t\t1.560 \r\n100G\t\t0.308 \t\t0.409 \t\t0.608 \t\t1.110 \t\t2.011 \t\t4.516 \t\t7.721 \r\n\r\n[Specs]\r\nCPU : 40 processors (Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz)\r\nMemory: 128G\r\nOS: CentOS 8\r\n\r\nAny question to my test is welcome.\r\n\r\nRegards,\r\nTang\r\n\r\n\n\n", "msg_date": "Thu, 7 Jan 2021 03:57:44 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Jan 6, 2021 at 6:43 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> [Results for VACUUM on single relation]\n> Average of 5 runs.\n>\n> 1. % REGRESSION\n> % Regression: (patched - master)/master\n>\n> | rel_size | 128MB | 1GB | 20GB | 100GB |\n> |----------|--------|--------|--------|----------|\n> | NB/512 | 0.000% | 0.000% | 0.000% | -32.680% |\n> | NB/256 | 0.000% | 0.000% | 0.000% | 0.000% |\n> | NB/128 | 0.000% | 0.000% | 0.000% | -16.502% |\n> | NB/64 | 0.000% | 0.000% | 0.000% | -9.841% |\n> | NB/32 | 0.000% | 0.000% | 0.000% | -6.219% |\n> | NB/16 | 0.000% | 0.000% | 0.000% | 3.323% |\n> | NB/8 | 0.000% | 0.000% | 0.000% | 8.178% |\n>\n> For 100GB shared_buffers, we can observe regression\n> beyond NBuffers/32. 
So with this, we can conclude\n> that NBuffers/32 is the right threshold.\n> For NBuffers/16 and beyond, the patched performs\n> worse than master. In other words, the cost of for finding\n> to be invalidated buffers gets higher in the optimized path\n> than the traditional path.\n>\n> So in attached V39 patches, I have updated the threshold\n> BUF_DROP_FULL_SCAN_THRESHOLD to NBuffers/32.\n>\n\nThanks for the detailed tests. NBuffers/32 seems like an appropriate\nvalue for the threshold based on these results. I would like to\nslightly modify part of the commit message in the first patch as below\n[1], otherwise, I am fine with the changes. Unless you or anyone else\nhas any more comments, I am planning to push the 0001 and 0002\nsometime next week.\n\n[1]\n\"The recovery path of DropRelFileNodeBuffers() is optimized so that\nscanning of the whole buffer pool can be avoided when the number of\nblocks to be truncated in a relation is below a certain threshold. For\nsuch cases, we find the buffers by doing lookups in BufMapping table.\nThis improves the performance by more than 100 times in many cases\nwhen several small tables (tested with 1000 relations) are truncated\nand where the server is configured with a large value of shared\nbuffers (greater than 100GB).\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 7 Jan 2021 14:06:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Thu, January 7, 2021 5:36 PM (JST), Amit Kapila wrote:\r\n> \r\n> On Wed, Jan 6, 2021 at 6:43 PM k.jamison@fujitsu.com\r\n> <k.jamison@fujitsu.com> wrote:\r\n> >\r\n> > [Results for VACUUM on single relation]\r\n> > Average of 5 runs.\r\n> >\r\n> > 1. 
% REGRESSION\r\n> > % Regression: (patched - master)/master\r\n> >\r\n> > | rel_size | 128MB | 1GB | 20GB | 100GB |\r\n> > |----------|--------|--------|--------|----------|\r\n> > | NB/512 | 0.000% | 0.000% | 0.000% | -32.680% |\r\n> > | NB/256 | 0.000% | 0.000% | 0.000% | 0.000% |\r\n> > | NB/128 | 0.000% | 0.000% | 0.000% | -16.502% |\r\n> > | NB/64 | 0.000% | 0.000% | 0.000% | -9.841% |\r\n> > | NB/32 | 0.000% | 0.000% | 0.000% | -6.219% |\r\n> > | NB/16 | 0.000% | 0.000% | 0.000% | 3.323% |\r\n> > | NB/8 | 0.000% | 0.000% | 0.000% | 8.178% |\r\n> >\r\n> > For 100GB shared_buffers, we can observe regression\r\n> > beyond NBuffers/32. So with this, we can conclude\r\n> > that NBuffers/32 is the right threshold.\r\n> > For NBuffers/16 and beyond, the patched performs\r\n> > worse than master. In other words, the cost of for finding\r\n> > to be invalidated buffers gets higher in the optimized path\r\n> > than the traditional path.\r\n> >\r\n> > So in attached V39 patches, I have updated the threshold\r\n> > BUF_DROP_FULL_SCAN_THRESHOLD to NBuffers/32.\r\n> >\r\n> \r\n> Thanks for the detailed tests. NBuffers/32 seems like an appropriate\r\n> value for the threshold based on these results. I would like to\r\n> slightly modify part of the commit message in the first patch as below\r\n> [1], otherwise, I am fine with the changes. Unless you or anyone else\r\n> has any more comments, I am planning to push the 0001 and 0002\r\n> sometime next week.\r\n> \r\n> [1]\r\n> \"The recovery path of DropRelFileNodeBuffers() is optimized so that\r\n> scanning of the whole buffer pool can be avoided when the number of\r\n> blocks to be truncated in a relation is below a certain threshold. 
For\r\n> such cases, we find the buffers by doing lookups in BufMapping table.\r\n> This improves the performance by more than 100 times in many cases\r\n> when several small tables (tested with 1000 relations) are truncated\r\n> and where the server is configured with a large value of shared\r\n> buffers (greater than 100GB).\"\r\n\r\nThank you for taking a look at the results of the tests. And it's also \r\nconsistent with the results from Tang too.\r\nThe commit message LGTM.\r\n\r\nRegards,\r\nKirk Jamison\r\n", "msg_date": "Thu, 7 Jan 2021 09:25:22 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Thu, 7 Jan 2021 09:25:22 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in \n> On Thu, January 7, 2021 5:36 PM (JST), Amit Kapila wrote:\n> > \n> > On Wed, Jan 6, 2021 at 6:43 PM k.jamison@fujitsu.com\n> > <k.jamison@fujitsu.com> wrote:\n> > >\n> > > [Results for VACUUM on single relation]\n> > > Average of 5 runs.\n> > >\n> > > 1. % REGRESSION\n> > > % Regression: (patched - master)/master\n> > >\n> > > | rel_size | 128MB | 1GB | 20GB | 100GB |\n> > > |----------|--------|--------|--------|----------|\n> > > | NB/512 | 0.000% | 0.000% | 0.000% | -32.680% |\n> > > | NB/256 | 0.000% | 0.000% | 0.000% | 0.000% |\n> > > | NB/128 | 0.000% | 0.000% | 0.000% | -16.502% |\n> > > | NB/64 | 0.000% | 0.000% | 0.000% | -9.841% |\n> > > | NB/32 | 0.000% | 0.000% | 0.000% | -6.219% |\n> > > | NB/16 | 0.000% | 0.000% | 0.000% | 3.323% |\n> > > | NB/8 | 0.000% | 0.000% | 0.000% | 8.178% |\n> > >\n> > > For 100GB shared_buffers, we can observe regression\n> > > beyond NBuffers/32. So with this, we can conclude\n> > > that NBuffers/32 is the right threshold.\n> > > For NBuffers/16 and beyond, the patched performs\n> > > worse than master. 
In other words, the cost of for finding\n> > > to be invalidated buffers gets higher in the optimized path\n> > > than the traditional path.\n> > >\n> > > So in attached V39 patches, I have updated the threshold\n> > > BUF_DROP_FULL_SCAN_THRESHOLD to NBuffers/32.\n> > >\n> > \n> > Thanks for the detailed tests. NBuffers/32 seems like an appropriate\n> > value for the threshold based on these results. I would like to\n> > slightly modify part of the commit message in the first patch as below\n> > [1], otherwise, I am fine with the changes. Unless you or anyone else\n> > has any more comments, I am planning to push the 0001 and 0002\n> > sometime next week.\n> > \n> > [1]\n> > \"The recovery path of DropRelFileNodeBuffers() is optimized so that\n> > scanning of the whole buffer pool can be avoided when the number of\n> > blocks to be truncated in a relation is below a certain threshold. For\n> > such cases, we find the buffers by doing lookups in BufMapping table.\n> > This improves the performance by more than 100 times in many cases\n> > when several small tables (tested with 1000 relations) are truncated\n> > and where the server is configured with a large value of shared\n> > buffers (greater than 100GB).\"\n> \n> Thank you for taking a look at the results of the tests. And it's also \n> consistent with the results from Tang too.\n> The commit message LGTM.\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 08 Jan 2021 10:33:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 7 Jan 2021 09:25:22 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in:\n> > > Thanks for the detailed tests. 
NBuffers/32 seems like an appropriate\n> > > value for the threshold based on these results. I would like to\n> > > slightly modify part of the commit message in the first patch as below\n> > > [1], otherwise, I am fine with the changes. Unless you or anyone else\n> > > has any more comments, I am planning to push the 0001 and 0002\n> > > sometime next week.\n> > >\n> > > [1]\n> > > \"The recovery path of DropRelFileNodeBuffers() is optimized so that\n> > > scanning of the whole buffer pool can be avoided when the number of\n> > > blocks to be truncated in a relation is below a certain threshold. For\n> > > such cases, we find the buffers by doing lookups in BufMapping table.\n> > > This improves the performance by more than 100 times in many cases\n> > > when several small tables (tested with 1000 relations) are truncated\n> > > and where the server is configured with a large value of shared\n> > > buffers (greater than 100GB).\"\n> >\n> > Thank you for taking a look at the results of the tests. And it's also\n> > consistent with the results from Tang too.\n> > The commit message LGTM.\n>\n> +1.\n>\n\nI have pushed the 0001.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Jan 2021 08:49:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Tue, 12 Jan 2021 08:49:53 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 7 Jan 2021 09:25:22 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in:\n> > > > Thanks for the detailed tests. NBuffers/32 seems like an appropriate\n> > > > value for the threshold based on these results. I would like to\n> > > > slightly modify part of the commit message in the first patch as below\n> > > > [1], otherwise, I am fine with the changes. 
Unless you or anyone else\n> > > > has any more comments, I am planning to push the 0001 and 0002\n> > > > sometime next week.\n> > > >\n> > > > [1]\n> > > > \"The recovery path of DropRelFileNodeBuffers() is optimized so that\n> > > > scanning of the whole buffer pool can be avoided when the number of\n> > > > blocks to be truncated in a relation is below a certain threshold. For\n> > > > such cases, we find the buffers by doing lookups in BufMapping table.\n> > > > This improves the performance by more than 100 times in many cases\n> > > > when several small tables (tested with 1000 relations) are truncated\n> > > > and where the server is configured with a large value of shared\n> > > > buffers (greater than 100GB).\"\n> > >\n> > > Thank you for taking a look at the results of the tests. And it's also\n> > > consistent with the results from Tang too.\n> > > The commit message LGTM.\n> >\n> > +1.\n> >\n> \n> I have pushed the 0001.\n\nThank you for commiting this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 Jan 2021 11:09:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, Jan 13, 2021 at 7:39 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 12 Jan 2021 08:49:53 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Thu, 7 Jan 2021 09:25:22 +0000, \"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com> wrote in:\n> > > > > Thanks for the detailed tests. NBuffers/32 seems like an appropriate\n> > > > > value for the threshold based on these results. I would like to\n> > > > > slightly modify part of the commit message in the first patch as below\n> > > > > [1], otherwise, I am fine with the changes. 
Unless you or anyone else\n> > > > > has any more comments, I am planning to push the 0001 and 0002\n> > > > > sometime next week.\n> > > > >\n> > > > > [1]\n> > > > > \"The recovery path of DropRelFileNodeBuffers() is optimized so that\n> > > > > scanning of the whole buffer pool can be avoided when the number of\n> > > > > blocks to be truncated in a relation is below a certain threshold. For\n> > > > > such cases, we find the buffers by doing lookups in BufMapping table.\n> > > > > This improves the performance by more than 100 times in many cases\n> > > > > when several small tables (tested with 1000 relations) are truncated\n> > > > > and where the server is configured with a large value of shared\n> > > > > buffers (greater than 100GB).\"\n> > > >\n> > > > Thank you for taking a look at the results of the tests. And it's also\n> > > > consistent with the results from Tang too.\n> > > > The commit message LGTM.\n> > >\n> > > +1.\n> > >\n> >\n> > I have pushed the 0001.\n>\n> Thank you for commiting this.\n>\n\nPushed 0002 as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 13 Jan 2021 10:45:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Wed, January 13, 2021 2:15 PM (JST), Amit Kapila wrote:\r\n> On Wed, Jan 13, 2021 at 7:39 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Tue, 12 Jan 2021 08:49:53 +0530, Amit Kapila\r\n> > <amit.kapila16@gmail.com> wrote in\r\n> > > On Fri, Jan 8, 2021 at 7:03 AM Kyotaro Horiguchi\r\n> > > <horikyota.ntt@gmail.com> wrote:\r\n> > > >\r\n> > > > At Thu, 7 Jan 2021 09:25:22 +0000, \"k.jamison@fujitsu.com\"\r\n> <k.jamison@fujitsu.com> wrote in:\r\n> > > > > > Thanks for the detailed tests. NBuffers/32 seems like an\r\n> > > > > > appropriate value for the threshold based on these results. 
I\r\n> > > > > > would like to slightly modify part of the commit message in\r\n> > > > > > the first patch as below [1], otherwise, I am fine with the\r\n> > > > > > changes. Unless you or anyone else has any more comments, I am\r\n> > > > > > planning to push the 0001 and 0002 sometime next week.\r\n> > > > > >\r\n> > > > > > [1]\r\n> > > > > > \"The recovery path of DropRelFileNodeBuffers() is optimized so\r\n> > > > > > that scanning of the whole buffer pool can be avoided when the\r\n> > > > > > number of blocks to be truncated in a relation is below a\r\n> > > > > > certain threshold. For such cases, we find the buffers by doing\r\n> lookups in BufMapping table.\r\n> > > > > > This improves the performance by more than 100 times in many\r\n> > > > > > cases when several small tables (tested with 1000 relations)\r\n> > > > > > are truncated and where the server is configured with a large\r\n> > > > > > value of shared buffers (greater than 100GB).\"\r\n> > > > >\r\n> > > > > Thank you for taking a look at the results of the tests. And\r\n> > > > > it's also consistent with the results from Tang too.\r\n> > > > > The commit message LGTM.\r\n> > > >\r\n> > > > +1.\r\n> > > >\r\n> > >\r\n> > > I have pushed the 0001.\r\n> >\r\n> > Thank you for commiting this.\r\n> >\r\n> \r\n> Pushed 0002 as well.\r\n> \r\n\r\nThank you very much for committing those two patches, and for everyone here\r\nwho contributed in the simplifying the approaches, code reviews, testing, etc.\r\n\r\nI compile with the --enable-coverage and check if the newly-added code and updated\r\nparts were covered by tests.\r\nYes, the lines were hit including the updated lines of DropRelFileNodeBuffers(),\r\nDropRelFileNodesAllBuffers(), smgrdounlinkall(), smgrnblocks().\r\nNewly added APIs were covered too: FindAndDropRelFileNodeBuffers() and\r\nsmgrnblocks_cached(). 
\r\nHowever, the parts where UnlockBufHdr(bufHdr, buf_state) is called are not hit.\r\nBut I noticed the same holds for previously existing functions in bufmgr.c.\r\n\r\nThank you very much again.\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n", "msg_date": "Wed, 13 Jan 2021 05:25:28 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "While rebasing CF #2933 (which drops the _cached stuff and makes this\noptimisation always available, woo), I happened to notice that we're\nsumming the size of many relations and forks into a variable\nnBlocksToInvalidate of type BlockNumber. That could overflow.\n\n\n", "msg_date": "Fri, 12 Mar 2021 12:27:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Mar 12, 2021 at 4:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> While rebasing CF #2933 (which drops the _cached stuff and makes this\n> optimisation always available, woo), I happened to notice that we're\n> summing the size of many relations and forks into a variable\n> nBlocksToInvalidate of type BlockNumber. That could overflow.\n>\n\nI also think so. 
I think we have two ways to address that: (a) check\nimmediately after each time we add blocks to nBlocksToInvalidate to\nsee if it crosses the threshold value BUF_DROP_FULL_SCAN_THRESHOLD and\nif so, then just break the loop; (b) change the variable type to\nuint64.\n\nAny better ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Mar 2021 09:50:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Mar 12, 2021 at 5:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> uint64\n\n+1\n\n\n", "msg_date": "Fri, 12 Mar 2021 17:27:47 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Thomas Munro <thomas.munro@gmail.com>\r\n> On Fri, Mar 12, 2021 at 5:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > uint64\r\n> \r\n> +1\r\n\r\n+1\r\nI'll send a patch later.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 12 Mar 2021 04:30:41 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Thomas Munro <thomas.munro@gmail.com>\r\n> > uint64\r\n> \r\n> +1\r\n\r\nThank you, the patch is attached (we tend to forget how large our world is... 
64-bit) We're sorry to cause you trouble.\r\n\r\n\t\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Fri, 12 Mar 2021 05:26:02 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "At Fri, 12 Mar 2021 05:26:02 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Thomas Munro <thomas.munro@gmail.com>\n> > > uint64\n> > \n> > +1\n> \n> Thank you, the patch is attached (we tend to forget how large our world is... 64-bit) We're sorry to cause you trouble.\n\nBUF_DROP_FULL_SCAN_THRESHOLD cannot be larger than the size of an int\nsince NBuffers is an int, but nBlocksToInvalidate being uint32 looks\nsomewhat too tight. So +1 for changing it to uint64.\n\nWe need to fill the whole block[file][fork] array in DropRelFileNodesAllBuffers,\nso we cannot bail out of the counting loop. We could rework\nDropRelFileNodesAllBuffers to allow that, but it doesn't seem effective enough.\n\nSo I vote for uint64 and not bailing out.\n\nAbout the patch, it would be better to change the type of\nBUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current value\ndoesn't harm.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 Mar 2021 15:10:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> About the patch, it would be better to change the type of\n> BUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current\n> value\n> doesn't harm.\n\nOK, attached, to be prepared for the distant future when NBuffers becomes 64-bit.\n\n\t\nRegards\nTakayuki Tsunakawa", "msg_date": "Fri, 12 Mar 2021 06:36:52 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" 
<tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "> From: Tsunakawa, Takayuki/綱川 貴之 <tsunakawa.takay@fujitsu.com>\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > About the patch, it would be better to change the type of\n> > BUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current\n> value\n> > doesn't harm.\n> \n> OK, attached, to be prepared for the distant future when NBuffers becomes\n> 64-bit.\n\nThank you, Tsunakawa-san, for sending the quick fix. (I failed to notice it in my thread.)\n\nRegards,\nKirk Jamison\n\n\n", "msg_date": "Fri, 12 Mar 2021 06:55:49 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Patch] Optimize dropping of relation buffers using dlist" }, { "msg_contents": "On Fri, Mar 12, 2021 at 12:07 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > About the patch, it would be better to change the type of\n> > BUF_DROP_FULL_SCAN_THRESHOLD to uint64, even though the current\n> > value\n> > doesn't harm.\n>\n> OK, attached, to be prepared for the distant future when NBuffers becomes 64-bit.\n>\n\nThanks for the patch. Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Mar 2021 17:04:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Optimize dropping of relation buffers using dlist" } ]
[ { "msg_contents": "This patch adds const qualifiers to internal range type APIs. It \ndoesn't require any new casts or remove any old ones.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 28 Oct 2019 10:01:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Add const qualifiers to internal range type APIs" }, { "msg_contents": "On Mon, Oct 28, 2019 at 5:01 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> This patch adds const qualifiers to internal range type APIs. It\n> doesn't require any new casts or remove any old ones.\n\nJust out of curiosity, what is the motivation for this?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 09:05:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add const qualifiers to internal range type APIs" }, { "msg_contents": "On 2019-10-28 14:05, Robert Haas wrote:\n> On Mon, Oct 28, 2019 at 5:01 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> This patch adds const qualifiers to internal range type APIs. It\n>> doesn't require any new casts or remove any old ones.\n> \n> Just out of curiosity, what is the motivation for this?\n\nI don't remember. 
:-)\n\nI had this code lying around from earlier \"adventures in const\", \nprobably related to unconstify() and that work, and it seemed sensible \nand self-contained enough to finish up and submit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:48:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add const qualifiers to internal range type APIs" }, { "msg_contents": "On 2019-10-29 16:48:24 +0100, Peter Eisentraut wrote:\n> On 2019-10-28 14:05, Robert Haas wrote:\n> > Just out of curiosity, what is the motivation for this?\n> \n> I don't remember. :-)\n> \n> I had this code lying around from earlier \"adventures in const\", probably\n> related to unconstify() and that work, and it seemed sensible and\n> self-contained enough to finish up and submit.\n\n+1\n\n\n", "msg_date": "Tue, 29 Oct 2019 13:11:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add const qualifiers to internal range type APIs" }, { "msg_contents": "On 2019-10-29 21:11, Andres Freund wrote:\n> On 2019-10-29 16:48:24 +0100, Peter Eisentraut wrote:\n>> On 2019-10-28 14:05, Robert Haas wrote:\n>>> Just out of curiosity, what is the motivation for this?\n>>\n>> I don't remember. :-)\n>>\n>> I had this code lying around from earlier \"adventures in const\", probably\n>> related to unconstify() and that work, and it seemed sensible and\n>> self-contained enough to finish up and submit.\n> \n> +1\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 31 Oct 2019 07:49:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add const qualifiers to internal range type APIs" } ]
[ { "msg_contents": "As mentioned in [0], pg_upgrade currently does not preserve the version \nof collation objects created by initdb. Here is an attempt to fix that.\n\nThe way I deal with this here is by having the binary-upgrade mode in \npg_dump delete all the collations created by initdb and then dump out \nCREATE COLLATION commands with version information normally.\n\nI had originally imagined doing some kind of ALTER COLLATION (or perhaps \na direct UPDATE pg_collation) to update the version information, but \nthat doesn't really work because we don't know whether the collation \nobject with a given name in the new cluster is the same as the one in \nthe old cluster. So it seems more robust to just delete all existing \ncollations and create them from scratch.\n\nThoughts?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/CA+hUKGKDe98DFWKJoS7e4Z+Oamzc-1sZfpL3V3PPgi1uNvQ1tw@mail.gmail.com\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 28 Oct 2019 13:52:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Preserve versions of initdb-created collations in pg_upgrade" }, { "msg_contents": "On Tue, Oct 29, 2019 at 1:52 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> As mentioned in [0], pg_upgrade currently does not preserve the version\n> of collation objects created by initdb. Here is an attempt to fix that.\n>\n> The way I deal with this here is by having the binary-upgrade mode in\n> pg_dump delete all the collations created by initdb and then dump out\n> CREATE COLLATION commands with version information normally.\n\nThis seems to be basically OK.\n\nIt does mean that the target database has collation OIDs >=\nFirstNormalObjectId. 
That is, they don't look like initdb-created\nobjects, which is OK because they aren't, I'm just highlighting this\nto see if anyone else sees a problem with it. Suppose you pg_upgrade\nagain: now you'll dump these collations just as you did the first time\naround, because they look exactly like user-defined collations. It\nalso means that if you pg_upgrade to a target cluster created by a\nbuild without ICU we'll try to create ICU collations and that'll fail\n(\"ICU is not supported in this build\"), whereas before if had ICU\ncollations and didn't ever make use of them, you'd be able to do such\nan upgrade; again this doesn't seem like a major problem, it's just an\nobservation about an edge case. One more thing to note is if you\nupgrade from 12 to 13 on a glibc system, I think we'll automatically\npick up the *current* version when creating the collations in the target\nDB, which seems to be OK but it is a choice to default to assuming\nthat the database's indexes are not corrupted. Another observation is\nthat you finish up with different OIDs in each database you upgrade,\nwhich again doesn't seem like a problem in itself. It is slightly odd that\ntemplate1 finishes up with the old initdb's template1 collatoins, rather\nthan the new initdb's opinion of the current set of collations, but I am\nnot sure if it's a problem. I think it has to be like that, because you\nmight have created other stuff that depends on those collations in your\nsource template1 database, and so you have to preserve the versions.\n\n> I had originally imagined doing some kind of ALTER COLLATION (or perhaps\n> a direct UPDATE pg_collation) to update the version information, but\n> that doesn't really work because we don't know whether the collation\n> object with a given name in the new cluster is the same as the one in\n> the old cluster. 
So it seems more robust to just delete all existing\n> collations and create them from scratch.\n>\n> Thoughts?\n\nSeems to work as described with -E UTF-8, but it fails with clusters\nusing -E SQL_ASCII. That causes the pg_upgrade check to fail on\nmachines where that is the default encoding chosen by initdb (where\nunpatched master succeeds):\n\npg_restore: creating COLLATION \"pg_catalog.af-NA-x-icu\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 1700; 3456 12683 COLLATION af-NA-x-icu tmunro\npg_restore: error: could not execute query: ERROR: collation\n\"pg_catalog.af-NA-x-icu\" for encoding \"SQL_ASCII\" does not exist\nCommand was: ALTER COLLATION pg_catalog.\"af-NA-x-icu\" OWNER TO tmunro;\n\n\n", "msg_date": "Tue, 29 Oct 2019 15:33:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Preserve versions of initdb-created collations in pg_upgrade" }, { "msg_contents": "On 2019-10-29 03:33, Thomas Munro wrote:\n> Seems to work as described with -E UTF-8, but it fails with clusters\n> using -E SQL_ASCII. That causes the pg_upgrade check to fail on\n> machines where that is the default encoding chosen by initdb (where\n> unpatched master succeeds):\n> \n> pg_restore: creating COLLATION \"pg_catalog.af-NA-x-icu\"\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 1700; 3456 12683 COLLATION af-NA-x-icu tmunro\n> pg_restore: error: could not execute query: ERROR: collation\n> \"pg_catalog.af-NA-x-icu\" for encoding \"SQL_ASCII\" does not exist\n> Command was: ALTER COLLATION pg_catalog.\"af-NA-x-icu\" OWNER TO tmunro;\n\nThis could be addressed by using is_encoding_supported_by_icu() in \npg_dump to filter out collations with unsupported encodings.\n\nHowever, the more I look at this whole problem, I'm wondering whether it \nwouldn't be preferable to avoid this whole mess by just not creating any \ncollations in initdb. 
What do you think?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 21 Dec 2019 07:38:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Preserve versions of initdb-created collations in pg_upgrade" }, { "msg_contents": "On Sat, Dec 21, 2019 at 7:38 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-10-29 03:33, Thomas Munro wrote:\n> > Seems to work as described with -E UTF-8, but it fails with clusters\n> > using -E SQL_ASCII. That causes the pg_upgrade check to fail on\n> > machines where that is the default encoding chosen by initdb (where\n> > unpatched master succeeds):\n> >\n> > pg_restore: creating COLLATION \"pg_catalog.af-NA-x-icu\"\n> > pg_restore: while PROCESSING TOC:\n> > pg_restore: from TOC entry 1700; 3456 12683 COLLATION af-NA-x-icu tmunro\n> > pg_restore: error: could not execute query: ERROR: collation\n> > \"pg_catalog.af-NA-x-icu\" for encoding \"SQL_ASCII\" does not exist\n> > Command was: ALTER COLLATION pg_catalog.\"af-NA-x-icu\" OWNER TO tmunro;\n>\n> This could be addressed by using is_encoding_supported_by_icu() in\n> pg_dump to filter out collations with unsupported encodings.\n>\n> However, the more I look at this whole problem, I'm wondering whether it\n> wouldn't be preferable to avoid this whole mess by just not creating any\n> collations in initdb. What do you think?\n\nI think this problem goes away if we commit the per-object collation\nversion patch set[1]. It drops the collversion column, and Julien's\nrecent versions handle pg_upgrade quite well, as long as a collation\nby the same name exists in the target cluster. In that universe, if\ninitdb didn't create them, we'd have to tell people to create all\nnecessary collations manually before doing a pg_upgrade into it, and\nthat doesn't seem great. 
Admittedly there might be some weird cases\nwhere a collation is somehow completely different but has the same\nname.\n\n[1] https://www.postgresql.org/message-id/flat/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com\n\n\n", "msg_date": "Sat, 21 Dec 2019 21:01:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Preserve versions of initdb-created collations in pg_upgrade" }, { "msg_contents": "On 2019-12-21 09:01, Thomas Munro wrote:\n> I think this problem goes away if we commit the per-object collation\n> version patch set[1]. It drops the collversion column, and Julien's\n> recent versions handle pg_upgrade quite well, as long as a collation\n> by the same name exists in the target cluster. In that universe, if\n> initdb didn't create them, we'd have to tell people to create all\n> necessary collations manually before doing a pg_upgrade into it, and\n> that doesn't seem great. Admittedly there might be some weird cases\n> where a collation is somehow completely different but has the same\n> name.\n\nSetting this patch to Returned with Feedback.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Jan 2020 11:04:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Preserve versions of initdb-created collations in pg_upgrade" } ]
[ { "msg_contents": "Greetings hackers,\n\nBefore PG12, select strpos('test', '') returns 1 (empty substring found at\nfirst position of the string), whereas starting with PG12 it returns 0\n(empty substring not found).\n\nIs this behavior change intentional? If so, it doesn't seem to be\ndocumented in the release notes...\n\nFirst raised by Austin Drenski in\nhttps://github.com/npgsql/efcore.pg/pull/1068#issuecomment-546795826\n\nThanks,\n\nShay\n\nGreetings hackers,Before PG12, select \nstrpos('test', '') returns 1 (empty substring found at first position of\n the string), whereas starting with PG12 it returns 0 (empty substring \nnot found).Is this behavior change intentional? If so, it doesn't seem to be documented in the release notes...First raised by Austin Drenski in https://github.com/npgsql/efcore.pg/pull/1068#issuecomment-546795826Thanks,Shay", "msg_date": "Mon, 28 Oct 2019 16:02:30 +0100", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "strpos behavior change around empty substring in PG12" }, { "msg_contents": "On Mon, Oct 28, 2019 at 11:02 AM Shay Rojansky <roji@roji.org> wrote:\n> Before PG12, select strpos('test', '') returns 1 (empty substring found at first position of the string), whereas starting with PG12 it returns 0 (empty substring not found).\n>\n> Is this behavior change intentional? 
If so, it doesn't seem to be documented in the release notes...\n>\n> First raised by Austin Drenski in https://github.com/npgsql/efcore.pg/pull/1068#issuecomment-546795826\n\nIt looks to me like this got broken here:\n\ncommit 9556aa01c69a26ca726d8dda8e395acc7c1e30fc\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: Fri Jan 25 16:25:05 2019 +0200\n\n Use single-byte Boyer-Moore-Horspool search even with multibyte encodings.\n\nNot sure what happened exactly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:48:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: strpos behavior change around empty substring in PG12" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Oct 28, 2019 at 11:02 AM Shay Rojansky <roji@roji.org> wrote:\n>> Before PG12, select strpos('test', '') returns 1 (empty substring found at first position of the string), whereas starting with PG12 it returns 0 (empty substring not found).\n\n> It looks to me like this got broken here:\n\n> commit 9556aa01c69a26ca726d8dda8e395acc7c1e30fc\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Fri Jan 25 16:25:05 2019 +0200\n> Use single-byte Boyer-Moore-Horspool search even with multibyte encodings.\n\n> Not sure what happened exactly.\n\nI think the problem is lack of clarity about the edge cases.\nThe patch added this short-circuit right at the top of text_position():\n\n+ if (VARSIZE_ANY_EXHDR(t1) < 1 || VARSIZE_ANY_EXHDR(t2) < 1)\n+ return 0;\n\nand as this example shows, that's the Wrong Thing. 
Fortunately,\nit also seems easily fixed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Oct 2019 11:57:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: strpos behavior change around empty substring in PG12" }, { "msg_contents": "On 28/10/2019 17:57, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Mon, Oct 28, 2019 at 11:02 AM Shay Rojansky <roji@roji.org> wrote:\n>>> Before PG12, select strpos('test', '') returns 1 (empty substring found at first position of the string), whereas starting with PG12 it returns 0 (empty substring not found).\n> \n>> It looks to me like this got broken here:\n> \n>> commit 9556aa01c69a26ca726d8dda8e395acc7c1e30fc\n>> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n>> Date: Fri Jan 25 16:25:05 2019 +0200\n>> Use single-byte Boyer-Moore-Horspool search even with multibyte encodings.\n> \n>> Not sure what happened exactly.\n> \n> I think the problem is lack of clarity about the edge cases.\n> The patch added this short-circuit right at the top of text_position():\n> \n> + if (VARSIZE_ANY_EXHDR(t1) < 1 || VARSIZE_ANY_EXHDR(t2) < 1)\n> + return 0;\n> \n> and as this example shows, that's the Wrong Thing. Fortunately,\n> it also seems easily fixed.\n\nTom fixed this in commit bd1ef5799b; thanks!\n\nTo be sure, I also checked the SQL standard for what POSITION('' IN \n'test') is supposed to return. It agrees that 1 is correct:\n\n > If CHAR_LENGTH(CVE1) is 0 (zero), then the result is 1 (one).\n\n- Heikki\n\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:11:13 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: strpos behavior change around empty substring in PG12" }, { "msg_contents": "Thanks for the quick turnaround!\n\nTom Lane <tgl@sss.pgh.pa.us> schrieb am Mo., 28. Okt. 
2019, 16:57:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Oct 28, 2019 at 11:02 AM Shay Rojansky <roji@roji.org> wrote:\n> >> Before PG12, select strpos('test', '') returns 1 (empty substring found\n> at first position of the string), whereas starting with PG12 it returns 0\n> (empty substring not found).\n>\n> > It looks to me like this got broken here:\n>\n> > commit 9556aa01c69a26ca726d8dda8e395acc7c1e30fc\n> > Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> > Date: Fri Jan 25 16:25:05 2019 +0200\n> > Use single-byte Boyer-Moore-Horspool search even with multibyte\n> encodings.\n>\n> > Not sure what happened exactly.\n>\n> I think the problem is lack of clarity about the edge cases.\n> The patch added this short-circuit right at the top of text_position():\n>\n> + if (VARSIZE_ANY_EXHDR(t1) < 1 || VARSIZE_ANY_EXHDR(t2) < 1)\n> + return 0;\n>\n> and as this example shows, that's the Wrong Thing. Fortunately,\n> it also seems easily fixed.\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 29 Oct 2019 15:27:11 +0100", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "Re: strpos behavior change around empty substring in PG12" } ]
[ { "msg_contents": "Hi,\n\nPostgreSQL 10 introduced extended statistics, allowing us to consider\ncorrelation between columns to improve estimates, and PostgreSQL 12\nadded support for MCV statistics. But we still had the limitation that\nwe only allowed using a single extended statistic per relation, i.e.\ngiven a table with two extended stats\n\n CREATE TABLE t (a int, b int, c int, d int);\n CREATE STATISTICS s1 (mcv) ON a, b FROM t;\n CREATE STATISTICS s2 (mcv) ON c, d FROM t;\n\nand a query\n\n SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;\n\nwe only ever used one of the statistics (and we considered them in a not\nparticularly well determined order).\n\nThis patch addresses this by using as many extended stats as possible,\nby adding a loop to statext_mcv_clauselist_selectivity(). In each step\nwe pick the \"best\" applicable statistics (in the sense of covering the\nmost attributes) and factor it into the overall estimate.\n\nAll this happens where we'd originally consider applying a single MCV\nlist, i.e. before even considering the functional dependencies, so\nroughly like this:\n\n while ()\n {\n ... apply another MCV list ...\n }\n\n ... 
apply functional dependencies ...\n\n\nI've considered doing both in the loop, but I think that'd be wrong - the MCV list is\nexpected to contain more information about individual values (compared\nto functional deps, which are column-level).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 28 Oct 2019 16:20:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Using multiple extended statistics for estimates" }, { "msg_contents": "On Mon, Oct 28, 2019 at 04:20:48PM +0100, Tomas Vondra wrote:\n>Hi,\n>\n>PostgreSQL 10 introduced extended statistics, allowing us to consider\n>correlation between columns to improve estimates, and PostgreSQL 12\n>added support for MCV statistics. But we still had the limitation that\n>we only allowed using a single extended statistics per relation, i.e.\n>given a table with two extended stats\n>\n> CREATE TABLE t (a int, b int, c int, d int);\n> CREATE STATISTICS s1 (mcv) ON a, b FROM t;\n> CREATE STATISTICS s2 (mcv) ON c, d FROM t;\n>\n>and a query\n>\n> SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;\n>\n>we only ever used one of the statistics (and we considered them in a not\n>particularly well determined order).\n>\n>This patch addresses this by using as many extended stats as possible,\n>by adding a loop to statext_mcv_clauselist_selectivity(). In each step\n>we pick the \"best\" applicable statistics (in the sense of covering the\n>most attributes) and factor it into the overall estimate.\n>\n>All this happens where we'd originally consider applying a single MCV\n>list, i.e. before even considering the functional dependencies, so\n>roughly like this:\n>\n> while ()\n> {\n> ... apply another MCV list ...\n> }\n>\n> ... 
apply functional dependencies ...\n>\n>\n>I've considered doing both in the loop, but I think that'd be wrong - the MCV list is\n>expected to contain more information about individual values (compared\n>to functional deps, which are column-level).\n>\n\nHere is a slightly polished v2 of the patch, the main difference being\nthat computing clause_attnums was moved to a separate function.\n\nThis is a fairly simple patch, and it's not entirely new functionality\n(applying multiple statistics was part of the very first patch series,\nalthough of course in a very different form). So unless there are\nobjections, I'd like to get this committed sometime next week.\n\nThere's room for improvement, of course, for example when handling\noverlapping statistics. Consider a table with columns (a,b,c) and two\nextended statistics on (a,b) and (b,c), and a query with one clause per\ncolumn\n\n SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1\n\nIn this case the patch does not help, because we apply (a,b) and then we\nhave just a single clause remaining. What we could do is still apply the\n(b,c) statistic, using the already-estimated clause on b as a condition.\nSo essentially we'd compute\n\n P(a=1 && b=1) * P(c=1 | b=1)\n\nBut that'll require larger changes, and I see it as an evolution of the\ncurrent patch.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 6 Nov 2019 20:54:40 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Wed, Nov 06, 2019 at 08:54:40PM +0100, Tomas Vondra wrote:\n>On Mon, Oct 28, 2019 at 04:20:48PM +0100, Tomas Vondra wrote:\n>>Hi,\n>>\n>>PostgreSQL 10 introduced extended statistics, allowing us to consider\n>>correlation between columns to improve estimates, and PostgreSQL 12\n>>added support for MCV statistics. 
But we still had the limitation that\n>>we only allowed using a single extended statistics per relation, i.e.\n>>given a table with two extended stats\n>>\n>> CREATE TABLE t (a int, b int, c int, d int);\n>> CREATE STATISTICS s1 (mcv) ON a, b FROM t;\n>> CREATE STATISTICS s2 (mcv) ON c, d FROM t;\n>>\n>>and a query\n>>\n>> SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;\n>>\n>>we only ever used one of the statistics (and we considered them in a not\n>>particularly well determined order).\n>>\n>>This patch addresses this by using as many extended stats as possible,\n>>by adding a loop to statext_mcv_clauselist_selectivity(). In each step\n>>we pick the \"best\" applicable statistics (in the sense of covering the\n>>most attributes) and factor it into the overall estimate.\n>>\n>>All this happens where we'd originally consider applying a single MCV\n>>list, i.e. before even considering the functional dependencies, so\n>>roughly like this:\n>>\n>> while ()\n>> {\n>> ... apply another MCV list ...\n>> }\n>>\n>> ... 
apply functional dependencies ...\n>>\n>>\n>>I've considered doing both in the loop, but I think that'd be wrong - the MCV list is\n>>expected to contain more information about individual values (compared\n>>to functional deps, which are column-level).\n>>\n>\n>Here is a slightly polished v2 of the patch, the main difference being\n>that computing clause_attnums was moved to a separate function.\n>\n\nThis time with the attachment ;-)\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 6 Nov 2019 20:58:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "Hello.\n\nAt Wed, 6 Nov 2019 20:58:49 +0100, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in \n> >Here is a slightly polished v2 of the patch, the main difference being\n> >that computing clause_attnums was moved to a separate function.\n> >\n> \n> This time with the attachment ;-)\n\nThis patch is kind of straightforward, repeating what the\nprevious statext_mcv_clauselist_selectivity did as long as remaining\nclauses match any of MV-MCVs. Almost no regression in the cases\nwhere zero or just one MV-MCV applies to the given clause list.\n\nIt applies cleanly on the current master and seems to work as\nexpected.\n\n\nI have some comments.\n\nCould we have a description in the documentation of how multiple\nMV-MCVs are used in a query? And don't we need some regression tests?\n\n\n+/*\n+ * statext_mcv_clause_attnums\n+ *\t\tRecalculate attnums from compatible but not-yet-estimated clauses.\n\nIt returns attnums collected from multiple clause*s*. Is the name OK\nwith \"clause_attnums\"?\n\nThe comment says as if it checks the compatibility of each clause but\nthe work is done on the caller side. 
I'm not sure such strictness is\nrequired, but it might be better that the comment represents what\nexactly the function does.\n\n\n+ */\n+static Bitmapset *\n+statext_mcv_clause_attnums(int nclauses, Bitmapset **estimatedclauses,\n+\t\t\t\t\t\t Bitmapset **list_attnums)\n\nThe last two parameters are the same type in notation but\ndifferent actual types; that is, one is a pointer to Bitmapset*, and\nanother is an array of Bitmapset*. The code in the function itself\nsuggests that, but it would be helpful if a brief explanation of the\nparameters is seen in the function comment.\n\n+\t\t/*\n+\t\t * Recompute attnums in the remaining clauses (we simply use the bitmaps\n+\t\t * computed earlier, so that we don't have to inspect the clauses again).\n+\t\t */\n+\t\tclauses_attnums = statext_mcv_clause_attnums(list_length(clauses),\n\nCouldn't we avoid calling this function twice with the same parameters\nin the first round of the loop?\n\n+\t\tforeach(l, clauses)\n \t\t{\n-\t\t\tstat_clauses = lappend(stat_clauses, (Node *) lfirst(l));\n-\t\t\t*estimatedclauses = bms_add_member(*estimatedclauses, listidx);\n+\t\t\t/*\n+\t\t\t * If the clause is compatible with the selected statistics, mark it\n+\t\t\t * as estimated and add it to the list to estimate.\n+\t\t\t */\n+\t\t\tif (list_attnums[listidx] != NULL &&\n+\t\t\t\tbms_is_subset(list_attnums[listidx], stat->keys))\n+\t\t\t{\n+\t\t\t\tstat_clauses = lappend(stat_clauses, (Node *) lfirst(l));\n+\t\t\t\t*estimatedclauses = bms_add_member(*estimatedclauses, listidx);\n+\t\t\t}\n\nThe loop runs through all clauses every time. 
I agree that that is\nbetter than using a copy of the clauses to avoid stepping on already\nestimated clauses, but maybe we need an Assertion that the listidx is\nnot a part of estimatedclauses to make sure no clauses are\nestimated twice.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 Nov 2019 13:38:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Thu, Nov 07, 2019 at 01:38:20PM +0900, Kyotaro Horiguchi wrote:\n>Hello.\n>\n>At Wed, 6 Nov 2019 20:58:49 +0100, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in\n>> >Here is a slightly polished v2 of the patch, the main difference being\n>> >that computing clause_attnums was moved to a separate function.\n>> >\n>>\n>> This time with the attachment ;-)\n>\n>This patch is kind of straightforward, repeating what the\n>previous statext_mcv_clauselist_selectivity did as long as remaining\n>clauses match any of MV-MCVs. Almost no regression in the cases\n>where zero or just one MV-MCV applies to the given clause list.\n>\n>It applies cleanly on the current master and seems to work as\n>expected.\n>\n>\n>I have some comments.\n>\n>Could we have a description in the documentation of how multiple\n>MV-MCVs are used in a query? And don't we need some regression tests?\n>\n\nYes, regression tests are certainly needed - I thought I'd added them,\nbut it seems I failed to include them in the patch. Will fix.\n\nI agree it's probably worth mentioning we can consider multiple stats,\nbut I'm a bit hesitant to put the exact rules for how we pick the \"best\"\nstatistic into the docs. 
It's not 100% deterministic and it's likely\nwe'll need to tweak it a bit in the future.\n\nI'd prefer showing the stats in EXPLAIN, but that's a separate patch.\n\n>\n>+/*\n>+ * statext_mcv_clause_attnums\n>+ *\t\tRecalculate attnums from compatible but not-yet-estimated clauses.\n>\n>It returns attnums collected from multiple clause*s*. Is the name OK\n>with \"clause_attnums\"?\n>\n>The comment says as if it checks the compatibility of each clause but\n>the work is done on the caller side. I'm not sure such strictness is\n>required, but it might be better that the comment represents what\n>exactly the function does.\n>\n\nBut the incompatible clauses have the pre-computed attnums set to NULL,\nso technically the comment is correct. But I'll clarify.\n\n>\n>+ */\n>+static Bitmapset *\n>+statext_mcv_clause_attnums(int nclauses, Bitmapset **estimatedclauses,\n>+\t\t\t\t\t\t Bitmapset **list_attnums)\n>\n>The last two parameters are the same type in notation but\n>different actual types; that is, one is a pointer to Bitmapset*, and\n>another is an array of Bitmapset*. The code in the function itself\n>suggests that, but it would be helpful if a brief explanation of the\n>parameters is seen in the function comment.\n>\n\nOK, will explain in a comment.\n\n>+\t\t/*\n>+\t\t * Recompute attnums in the remaining clauses (we simply use the bitmaps\n>+\t\t * computed earlier, so that we don't have to inspect the clauses again).\n>+\t\t */\n>+\t\tclauses_attnums = statext_mcv_clause_attnums(list_length(clauses),\n>\n>Couldn't we avoid calling this function twice with the same parameters\n>in the first round of the loop?\n>\n\nHmmm, yeah. 
That's a good point.\n\n>+\t\tforeach(l, clauses)\n> \t\t{\n>-\t\t\tstat_clauses = lappend(stat_clauses, (Node *) lfirst(l));\n>-\t\t\t*estimatedclauses = bms_add_member(*estimatedclauses, listidx);\n>+\t\t\t/*\n>+\t\t\t * If the clause is compatible with the selected statistics, mark it\n>+\t\t\t * as estimated and add it to the list to estimate.\n>+\t\t\t */\n>+\t\t\tif (list_attnums[listidx] != NULL &&\n>+\t\t\t\tbms_is_subset(list_attnums[listidx], stat->keys))\n>+\t\t\t{\n>+\t\t\t\tstat_clauses = lappend(stat_clauses, (Node *) lfirst(l));\n>+\t\t\t\t*estimatedclauses = bms_add_member(*estimatedclauses, listidx);\n>+\t\t\t}\n>\n>The loop runs through all clauses every time. I agree that that is\n>better than using a copy of the clauses to avoid stepping on already\n>estimated clauses, but maybe we need an Assertion that the listidx is\n>not a part of estimatedclauses to make sure no clauses are\n>estimated twice.\n>\n\nWell, we can't really operate on a smaller \"copy\" of the list anyway,\nbecause that would break the precalculation logic (the listidx value\nwould be incorrect for the new list), and tweaking it would be more\nexpensive than just iterating over all clauses. The assumption is that\nwe won't see an extremely large number of clauses here.\n\nAdding an assert seems reasonable. 
And maybe a comment on why we should not\nsee any already-estimated clauses here.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 7 Nov 2019 12:05:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On 11/6/19 11:58 AM, Tomas Vondra wrote:\n> On Wed, Nov 06, 2019 at 08:54:40PM +0100, Tomas Vondra wrote:\n>> On Mon, Oct 28, 2019 at 04:20:48PM +0100, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> PostgreSQL 10 introduced extended statistics, allowing us to consider\n>>> correlation between columns to improve estimates, and PostgreSQL 12\n>>> added support for MCV statistics. But we still had the limitation that\n>>> we only allowed using a single extended statistics per relation, i.e.\n>>> given a table with two extended stats\n>>>\n>>> CREATE TABLE t (a int, b int, c int, d int);\n>>> CREATE STATISTICS s1 (mcv) ON a, b FROM t;\n>>> CREATE STATISTICS s2 (mcv) ON c, d FROM t;\n>>>\n>>> and a query\n>>>\n>>> SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;\n>>>\n>>> we only ever used one of the statistics (and we considered them in a not\n>>> particularly well determined order).\n>>>\n>>> This patch addresses this by using as many extended stats as possible,\n>>> by adding a loop to statext_mcv_clauselist_selectivity(). In each step\n>>> we pick the \"best\" applicable statistics (in the sense of covering the\n>>> most attributes) and factor it into the overall estimate.\n\nTomas,\n\nYour patch compiles and passes the regression tests for me on debian \nlinux under master.\n\nSince your patch does not include modified regression tests, I wrote a \ntest that I expected to improve under this new code, but running it both \nbefore and after applying your patch, there is no change. Please find \nthe modified test attached. 
Am I wrong to expect some change in this \ntest's output? If so, can you provide a test example that works \ndifferently under your patch?\n\nThanks!\n\n\n-- \nMark Dilger", "msg_date": "Sat, 9 Nov 2019 12:33:05 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On 11/9/19 12:33 PM, Mark Dilger wrote:\n> \n> \n> On 11/6/19 11:58 AM, Tomas Vondra wrote:\n>> On Wed, Nov 06, 2019 at 08:54:40PM +0100, Tomas Vondra wrote:\n>>> On Mon, Oct 28, 2019 at 04:20:48PM +0100, Tomas Vondra wrote:\n>>>> Hi,\n>>>>\n>>>> PostgreSQL 10 introduced extended statistics, allowing us to consider\n>>>> correlation between columns to improve estimates, and PostgreSQL 12\n>>>> added support for MCV statistics. But we still had the limitation that\n>>>> we only allowed using a single extended statistics per relation, i.e.\n>>>> given a table with two extended stats\n>>>>\n>>>> CREATE TABLE t (a int, b int, c int, d int);\n>>>> CREATE STATISTICS s1 (mcv) ON a, b FROM t;\n>>>> CREATE STATISTICS s2 (mcv) ON c, d FROM t;\n>>>>\n>>>> and a query\n>>>>\n>>>> SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;\n>>>>\n>>>> we only ever used one of the statistics (and we considered them in a \n>>>> not\n>>>> particularly well determined order).\n>>>>\n>>>> This patch addresses this by using as many extended stats as possible,\n>>>> by adding a loop to statext_mcv_clauselist_selectivity(). 
In each step\n>>>> we pick the \"best\" applicable statistics (in the sense of covering the\n>>>> most attributes) and factor it into the overall estimate.\n> \n> Tomas,\n> \n> Your patch compiles and passes the regression tests for me on debian \n> linux under master.\n> \n> Since your patch does not include modified regression tests, I wrote a \n> test that I expected to improve under this new code, but running it both \n> before and after applying your patch, there is no change.\n\nOk, the attached test passes before applying your patch and fails \nafterward owing to the estimates improving and no longer matching the \nexpected output. To be clear, this confirms your patch working as expected.\n\nI haven't seen any crashes in several hours of running different tests, \nso I think it looks good.\n\n\n-- \nMark Dilger", "msg_date": "Sat, 9 Nov 2019 14:32:27 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Sat, Nov 09, 2019 at 12:33:05PM -0800, Mark Dilger wrote:\n>\n>\n>On 11/6/19 11:58 AM, Tomas Vondra wrote:\n>>On Wed, Nov 06, 2019 at 08:54:40PM +0100, Tomas Vondra wrote:\n>>>On Mon, Oct 28, 2019 at 04:20:48PM +0100, Tomas Vondra wrote:\n>>>>Hi,\n>>>>\n>>>>PostgreSQL 10 introduced extended statistics, allowing us to consider\n>>>>correlation between columns to improve estimates, and PostgreSQL 12\n>>>>added support for MCV statistics. 
But we still had the limitation that\n>>>>we only allowed using a single extended statistics per relation, i.e.\n>>>>given a table with two extended stats\n>>>>\n>>>> CREATE TABLE t (a int, b int, c int, d int);\n>>>> CREATE STATISTICS s1 (mcv) ON a, b FROM t;\n>>>> CREATE STATISTICS s2 (mcv) ON c, d FROM t;\n>>>>\n>>>>and a query\n>>>>\n>>>> SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;\n>>>>\n>>>>we only ever used one of the statistics (and we considered them in a not\n>>>>particularly well determined order).\n>>>>\n>>>>This patch addresses this by using as many extended stats as possible,\n>>>>by adding a loop to statext_mcv_clauselist_selectivity(). In each step\n>>>>we pick the \"best\" applicable statistics (in the sense of covering the\n>>>>most attributes) and factor it into the overall estimate.\n>\n>Tomas,\n>\n>Your patch compiles and passes the regression tests for me on debian \n>linux under master.\n>\n\nThanks.\n\n>Since your patch does not include modified regression tests, I wrote a \n>test that I expected to improve under this new code, but running it \n>both before and after applying your patch, there is no change. Please \n>find the modified test attached. Am I wrong to expect some change in \n>this test's output? 
If so, can you provide a test example that works \n>differently under your patch?\n>\n\nThose queries are not improved by the patch, because we only support\nclauses \"Var op Const\" for now - your tests are using \"Var op Var\" so\nthat doesn't work.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 10 Nov 2019 18:33:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Sat, Nov 09, 2019 at 02:32:27PM -0800, Mark Dilger wrote:\n>\n>\n>On 11/9/19 12:33 PM, Mark Dilger wrote:\n>>\n>>\n>>On 11/6/19 11:58 AM, Tomas Vondra wrote:\n>>>On Wed, Nov 06, 2019 at 08:54:40PM +0100, Tomas Vondra wrote:\n>>>>On Mon, Oct 28, 2019 at 04:20:48PM +0100, Tomas Vondra wrote:\n>>>>>Hi,\n>>>>>\n>>>>>PostgreSQL 10 introduced extended statistics, allowing us to consider\n>>>>>correlation between columns to improve estimates, and PostgreSQL 12\n>>>>>added support for MCV statistics. But we still had the limitation that\n>>>>>we only allowed using a single extended statistics per relation, i.e.\n>>>>>given a table with two extended stats\n>>>>>\n>>>>> CREATE TABLE t (a int, b int, c int, d int);\n>>>>> CREATE STATISTICS s1 (mcv) ON a, b FROM t;\n>>>>> CREATE STATISTICS s2 (mcv) ON c, d FROM t;\n>>>>>\n>>>>>and a query\n>>>>>\n>>>>> SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1;\n>>>>>\n>>>>>we only ever used one of the statistics (and we considered \n>>>>>them in a not\n>>>>>particularly well determined order).\n>>>>>\n>>>>>This patch addresses this by using as many extended stats as possible,\n>>>>>by adding a loop to statext_mcv_clauselist_selectivity(). 
In each step\n>>>>>we pick the \"best\" applicable statistics (in the sense of covering the\n>>>>>most attributes) and factor it into the overall estimate.\n>>\n>>Tomas,\n>>\n>>Your patch compiles and passes the regression tests for me on debian \n>>linux under master.\n>>\n>>Since your patch does not include modified regression tests, I wrote \n>>a test that I expected to improve under this new code, but running \n>>it both before and after applying your patch, there is no change.\n>\n>Ok, the attached test passes before applying your patch and fails \n>afterward owing to the estimates improving and no longer matching the \n>expected output. To be clear, this confirms your patch working as \n>expected.\n>\n>I haven't seen any crashes in several hours of running different \n>tests, so I think it looks good.\n>\n\nYep, thanks for adding the tests. I'll include them in the patch.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 10 Nov 2019 18:34:28 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "Hi,\n\nhere's an updated patch, with some minor tweaks based on the review and\nadded tests (I ended up reworking those a bit, to make them more like\nthe existing ones).\n\nThere's also a new piece, dealing with functional dependencies. Until\nnow we did the same thing as for MCV lists - we picked the \"best\"\nextended statistics (with functional dependencies built) and just used\nthat. At first I thought we might simply do the same loop as for MCV\nlists, but that does not really make sense because we might end up\napplying a \"weaker\" dependency first.\n\nSay for example we have a table with columns (a,b,c,d,e) and functional\ndependencies on (a,b,c,d) and (c,d,e) where all the dependencies on\n(a,b,c,d) are weaker than (c,d => e). 
In a query with clauses on all\nattributes this is guaranteed to apply all dependencies from the first\nstatistic first, which is clearly wrong.\n\nSo what this does instead is simply merge all the dependencies from\nall the relevant stats, and treat them as a single collection.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 13 Nov 2019 16:28:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On 11/13/19 7:28 AM, Tomas Vondra wrote:\n> Hi,\n> \n> here's an updated patch, with some minor tweaks based on the review and\n> added tests (I ended up reworking those a bit, to make them more like\n> the existing ones).\n\nThanks, Tomas, for the new patch set!\n\nAttached are my review comments so far, in the form of a patch applied \non top of yours.\n\n-- \nMark Dilger", "msg_date": "Wed, 13 Nov 2019 10:04:36 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Wed, Nov 13, 2019 at 10:04:36AM -0800, Mark Dilger wrote:\n>\n>\n>On 11/13/19 7:28 AM, Tomas Vondra wrote:\n>>Hi,\n>>\n>>here's an updated patch, with some minor tweaks based on the review and\n>>added tests (I ended up reworking those a bit, to make them more like\n>>the existing ones).\n>\n>Thanks, Tomas, for the new patch set!\n>\n>Attached are my review comments so far, in the form of a patch applied \n>on top of yours.\n>\n\nThanks.\n\n1) It's not clear to me why adding 'const' to the List parameters would\n be useful? 
Can you explain?\n\n2) I think you're right we can change find_strongest_dependency to do\n\n /* also skip weaker dependencies when attribute count matches */\n if (strongest->nattributes == dependency->nattributes &&\n strongest->degree >= dependency->degree)\n continue;\n\n That'll skip some additional dependencies, which seems OK.\n\n3) It's not clear to me what you mean by\n\n * TODO: Improve this code comment. Specifically, why would we\n * ignore that no rows will match? It seems that such a discovery\n * would allow us to return an estimate of 0 rows, and that would\n * be useful.\n\n added to dependencies_clauselist_selectivity. Are you saying we\n should also compute selectivity estimates for individual clauses and\n use Min() as a limit? Maybe, but that seems unrelated to the patch.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 Nov 2019 16:55:41 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "\n\nOn 11/14/19 7:55 AM, Tomas Vondra wrote:\n> On Wed, Nov 13, 2019 at 10:04:36AM -0800, Mark Dilger wrote:\n>>\n>>\n>> On 11/13/19 7:28 AM, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> here's an updated patch, with some minor tweaks based on the review and\n>>> added tests (I ended up reworking those a bit, to make them more like\n>>> the existing ones).\n>>\n>> Thanks, Tomas, for the new patch set!\n>>\n>> Attached are my review comments so far, in the form of a patch applied \n>> on top of yours.\n>>\n> \n> Thanks.\n> \n> 1) It's not clear to me why adding 'const' to the List parameters would\n>   be useful? Can you explain?\n\nWhen I first started reviewing the functions, I didn't know if those \nlists were intended to be modified by the function. 
Adding 'const' \nhelps document that the function does not intend to change them.\n\n> 2) I think you're right we can change find_strongest_dependency to do\n> \n>    /* also skip weaker dependencies when attribute count matches */\n>    if (strongest->nattributes == dependency->nattributes &&\n>        strongest->degree >= dependency->degree)\n>        continue;\n> \n>   That'll skip some additional dependencies, which seems OK.\n> \n> 3) It's not clear to me what you mean by\n> \n>     * TODO: Improve this code comment.  Specifically, why would we\n>     * ignore that no rows will match?  It seems that such a discovery\n>     * would allow us to return an estimate of 0 rows, and that would\n>     * be useful.\n> \n>   added to dependencies_clauselist_selectivity. Are you saying we\n>   should also compute selectivity estimates for individual clauses and\n>   use Min() as a limit? Maybe, but that seems unrelated to the patch.\n\nI mean that the comment right above that TODO is hard to understand. You \nseem to be saying that it is good and proper to only take the \nselectivity estimate from the final clause in the list, but then go on \nto say that other clauses might prove that no rows will match. So that \nimplies that by ignoring all but the last clause, we're ignoring such \nother clauses that prove no rows can match. But why would we be \nignoring those?\n\nI am not arguing that your code is wrong. 
I'm just critiquing the \nhard-to-understand phrasing of that code comment.\n\n-- \nMark Dilger\n\n\n", "msg_date": "Thu, 14 Nov 2019 10:23:44 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Thu, Nov 14, 2019 at 10:23:44AM -0800, Mark Dilger wrote:\n>\n>\n>On 11/14/19 7:55 AM, Tomas Vondra wrote:\n>>On Wed, Nov 13, 2019 at 10:04:36AM -0800, Mark Dilger wrote:\n>>>\n>>>\n>>>On 11/13/19 7:28 AM, Tomas Vondra wrote:\n>>>>Hi,\n>>>>\n>>>>here's an updated patch, with some minor tweaks based on the review and\n>>>>added tests (I ended up reworking those a bit, to make them more like\n>>>>the existing ones).\n>>>\n>>>Thanks, Tomas, for the new patch set!\n>>>\n>>>Attached are my review comments so far, in the form of a patch \n>>>applied on top of yours.\n>>>\n>>\n>>Thanks.\n>>\n>>1) It's not clear to me why adding 'const' to the List parameters would\n>> be useful? Can you explain?\n>\n>When I first started reviewing the functions, I didn't know if those \n>lists were intended to be modified by the function. Adding 'const' \n>helps document that the function does not intend to change them.\n>\n\nHmmm, ok. 
I'll think about it, but we're not really using const* in this\nway very much I think - at least not in the surrounding code.\n\n>>2) I think you're right we can change find_strongest_dependency to do\n>>\n>>   /* also skip weaker dependencies when attribute count matches */\n>>   if (strongest->nattributes == dependency->nattributes &&\n>>       strongest->degree >= dependency->degree)\n>>       continue;\n>>\n>>  That'll skip some additional dependencies, which seems OK.\n>>\n>>3) It's not clear to me what you mean by\n>>\n>>    * TODO: Improve this code comment. Specifically, why would we\n>>    * ignore that no rows will match? It seems that such a discovery\n>>    * would allow us to return an estimate of 0 rows, and that would\n>>    * be useful.\n>>\n>>  added to dependencies_clauselist_selectivity. Are you saying we\n>>  should also compute selectivity estimates for individual clauses and\n>>  use Min() as a limit? Maybe, but that seems unrelated to the patch.\n>\n>I mean that the comment right above that TODO is hard to understand. \n>You seem to be saying that it is good and proper to only take the \n>selectivity estimate from the final clause in the list, but then go on \n>to say that other clauses might prove that no rows will match. So \n>that implies that by ignoring all but the last clause, we're ignoring \n>such other clauses that prove no rows can match. But why would we be \n>ignoring those?\n>\n>I am not arguing that your code is wrong. I'm just critiquing the \n>hard-to-understand phrasing of that code comment.\n>\n\nAha, I think I understand now - thanks for the explanation. You're right\nthe comment is trying to explain why just taking the last clause for a\ngiven attnum is fine. I'll try to make the comment clearer.\n\nFor the case with equal Const values that should be mostly obvious, i.e.\n\"a=1 AND a=1 AND a=1\" has the same selectivity as \"a=1\".\n\nThe case with different Const values is harder, unfortunately. 
It might\nseem obvious that \"a=1 AND a=2\" means there are no matching rows, but\nthat heavily relies on the semantics of the equality operator. And we\ncan't simply compare the Const values either, I'm afraid, because there\nare cases with cross-type operators like\n\n a = 1::int AND a = 1.0::numeric\n\nwhere the Consts are of different type, yet both conditions can be true.\n\nSo it would be pretty tricky to do this, and the current code does not\neven try to do that.\n\nInstead, it just assumes that it's mostly fine to overestimate, because\nthen at runtime we'll simply end up with 0 rows here.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 14 Nov 2019 21:04:20 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> For the case with equal Const values that should be mostly obvious, i.e.\n> \"a=1 AND a=1 AND a=1\" has the same selectivity as \"a=1\".\n\n> The case with different Const values is harder, unfortunately. It might\n> seem obvious that \"a=1 AND a=2\" means there are no matching rows, but\n> that heavily relies on the semantics of the equality operator. And we\n> can't simply compare the Const values either, I'm afraid, because there\n> are cases with cross-type operators like\n> a = 1::int AND a = 1.0::numeric\n> where the Consts are of different type, yet both conditions can be true.\n\nFWIW, there's code in predtest.c to handle exactly that, at least for\ntypes sharing a btree opfamily. 
Whether it's worth applying that logic\nhere is unclear, but note that we've had the ability to recognize\nredundant and contradictory clauses for a long time:\n\nregression=# explain select * from tenk1 where two = 1; \n QUERY PLAN \n------------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..470.00 rows=5000 width=244)\n Filter: (two = 1)\n(2 rows)\n\nregression=# explain select * from tenk1 where two = 1 and two = 1::bigint; \n QUERY PLAN \n------------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..470.00 rows=5000 width=244)\n Filter: (two = 1)\n(2 rows)\n\nregression=# explain select * from tenk1 where two = 1 and two = 2::bigint;\n QUERY PLAN \n---------------------------------------------------------------\n Result (cost=0.00..470.00 rows=1 width=244)\n One-Time Filter: false\n -> Seq Scan on tenk1 (cost=0.00..470.00 rows=1 width=244)\n Filter: (two = 1)\n(4 rows)\n\nIt falls down on\n\nregression=# explain select * from tenk1 where two = 1 and two = 2::numeric;\n QUERY PLAN \n-----------------------------------------------------------\n Seq Scan on tenk1 (cost=0.00..520.00 rows=25 width=244)\n Filter: ((two = 1) AND ((two)::numeric = '2'::numeric))\n(2 rows)\n\nbecause numeric isn't in the same opfamily, so these clauses can't be\ncompared easily.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 Nov 2019 15:16:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "\n\nOn 11/14/19 12:04 PM, Tomas Vondra wrote:\n> Aha, I think I understand now - thanks for the explanation. You're right\n> the comment is trying to explain why just taking the last clause for a\n> given attnum is fine. 
I'll try to make the comment clearer.\n> \n> For the case with equal Const values that should be mostly obvious, i.e.\n> \"a=1 AND a=1 AND a=1\" has the same selectivity as \"a=1\".\n> \n> The case with different Const values is harder, unfortunately. It might\n> seem obvious that \"a=1 AND a=2\" means there are no matching rows, but\n> that heavily relies on the semantics of the equality operator. And we\n> can't simply compare the Const values either, I'm afraid, because there\n> are cases with cross-type operators like\n> \n>  a = 1::int AND a = 1.0::numeric\n> \n> where the Consts are of different type, yet both conditions can be true.\n> \n> So it would be pretty tricky to do this, and the current code does not\n> even try to do that.\n> \n> Instead, it just assumes that it's mostly fine to overestimate, because\n> then at runtime we'll simply end up with 0 rows here.\n\nI'm unsure whether that could be a performance problem at runtime.\n\nI could imagine the planner short-circuiting additional planning when\nit finds a plan with zero rows, and so we'd save planner time if we\navoid overestimating. I don't recall if the planner does anything like\nthat, or if there are plans to implement such logic, but it might be\ngood not to rule it out. 
Tom's suggestion elsewhere in this thread to\nuse code in predtest.c sounds good to me.\n\nI don't know if you want to expand the scope of this particular patch to\ninclude that, though.\n\n-- \nMark Dilger\n\n\n", "msg_date": "Thu, 14 Nov 2019 13:17:02 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Thu, Nov 14, 2019 at 03:16:04PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> For the case with equal Const values that should be mostly obvious, i.e.\n>> \"a=1 AND a=1 AND a=1\" has the same selectivity as \"a=1\".\n>\n>> The case with different Const values is harder, unfortunately. It might\n>> seem obvious that \"a=1 AND a=2\" means there are no matching rows, but\n>> that heavily relies on the semantics of the equality operator. And we\n>> can't simply compare the Const values either, I'm afraid, because there\n>> are cases with cross-type operators like\n>> a = 1::int AND a = 1.0::numeric\n>> where the Consts are of different type, yet both conditions can be true.\n>\n>FWIW, there's code in predtest.c to handle exactly that, at least for\n>types sharing a btree opfamily. 
Whether it's worth applying that logic\n>here is unclear, but note that we've had the ability to recognize\n>redundant and contradictory clauses for a long time:\n>\n>regression=# explain select * from tenk1 where two = 1;\n> QUERY PLAN\n>------------------------------------------------------------\n> Seq Scan on tenk1 (cost=0.00..470.00 rows=5000 width=244)\n> Filter: (two = 1)\n>(2 rows)\n>\n>regression=# explain select * from tenk1 where two = 1 and two = 1::bigint;\n> QUERY PLAN\n>------------------------------------------------------------\n> Seq Scan on tenk1 (cost=0.00..470.00 rows=5000 width=244)\n> Filter: (two = 1)\n>(2 rows)\n>\n>regression=# explain select * from tenk1 where two = 1 and two = 2::bigint;\n> QUERY PLAN\n>---------------------------------------------------------------\n> Result (cost=0.00..470.00 rows=1 width=244)\n> One-Time Filter: false\n> -> Seq Scan on tenk1 (cost=0.00..470.00 rows=1 width=244)\n> Filter: (two = 1)\n>(4 rows)\n>\n>It falls down on\n>\n>regression=# explain select * from tenk1 where two = 1 and two = 2::numeric;\n> QUERY PLAN\n>-----------------------------------------------------------\n> Seq Scan on tenk1 (cost=0.00..520.00 rows=25 width=244)\n> Filter: ((two = 1) AND ((two)::numeric = '2'::numeric))\n>(2 rows)\n>\n>because numeric isn't in the same opfamily, so these clauses can't be\n>compared easily.\n>\n>\t\t\tregards, tom lane\n\nYeah, and this logic still works - the redundant clauses won't even get\nto the selectivity estimation, I think. 
So maybe the comment is not\nquite necessary, because the problem does not even exist ...\n\nMaybe we could do something about the cases that predtest.c can't solve,\nbut it's not clear if we can be much smarter for types with different\nopfamilies.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 14 Nov 2019 22:45:41 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Thu, Nov 14, 2019 at 01:17:02PM -0800, Mark Dilger wrote:\n>\n>\n>On 11/14/19 12:04 PM, Tomas Vondra wrote:\n>>Aha, I think I understand now - thanks for the explanation. You're right\n>>the comment is trying to explain why just taking the last clause for a\n>>given attnum is fine. I'll try to make the comment clearer.\n>>\n>>For the case with equal Const values that should be mostly obvious, i.e.\n>>\"a=1 AND a=1 AND a=1\" has the same selectivity as \"a=1\".\n>>\n>>The case with different Const values is harder, unfortunately. It might\n>>seem obvious that \"a=1 AND a=2\" means there are no matching rows, but\n>>that heavily relies on the semantics of the equality operator. 
And we\n>>can't simply compare the Const values either, I'm afraid, because there\n>>are cases with cross-type operators like\n>>\n>>  a = 1::int AND a = 1.0::numeric\n>>\n>>where the Consts are of different type, yet both conditions can be true.\n>>\n>>So it would be pretty tricky to do this, and the current code does not\n>>even try to do that.\n>>\n>>Instead, it just assumes that it's mostly fine to overestimate, because\n>>then at runtime we'll simply end up with 0 rows here.\n>\n>I'm unsure whether that could be a performance problem at runtime.\n>\n>I could imagine the planner short-circuiting additional planning when\n>it finds a plan with zero rows, and so we'd save planner time if we\n>avoid overestimating. I don't recall if the planner does anything like\n>that, or if there are plans to implement such logic, but it might be\n>good not to rule it out. Tom's suggestion elsewhere in this thread to\n>use code in predtest.c sounds good to me.\n>\n\nNo, AFAIK the planner does not do anything like that - it might probably\ndo that if it could prove there are no such rows, but that's hardly the\ncase for estimates based on approximate information (i.e. statistics).\n\nIt could do that based on the predicate analysis in predtest.c mentioned\nby Tom, although I don't think it does anything beyond tweaking the row\nestimate to ~1 row.\n\n>I don't know if you want to expand the scope of this particular patch to\n>include that, though.\n>\n\nCertainly not. 
It's an interesting but surprisingly complicated problem,\nand this patch simply aims to add different improvement.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 14 Nov 2019 22:51:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "\n\nOn 11/14/19 12:04 PM, Tomas Vondra wrote:\n> On Thu, Nov 14, 2019 at 10:23:44AM -0800, Mark Dilger wrote:\n>>\n>>\n>> On 11/14/19 7:55 AM, Tomas Vondra wrote:\n>>> On Wed, Nov 13, 2019 at 10:04:36AM -0800, Mark Dilger wrote:\n>>>>\n>>>>\n>>>> On 11/13/19 7:28 AM, Tomas Vondra wrote:\n>>>>> Hi,\n>>>>>\n>>>>> here's an updated patch, with some minor tweaks based on the review \n>>>>> and\n>>>>> added tests (I ended up reworking those a bit, to make them more like\n>>>>> the existing ones).\n>>>>\n>>>> Thanks, Tomas, for the new patch set!\n>>>>\n>>>> Attached are my review comments so far, in the form of a patch \n>>>> applied on top of yours.\n>>>>\n>>>\n>>> Thanks.\n>>>\n>>> 1) It's not clear to me why adding 'const' to the List parameters would\n>>>   be useful? Can you explain?\n>>\n>> When I first started reviewing the functions, I didn't know if those \n>> lists were intended to be modified by the function.  Adding 'const' \n>> helps document that the function does not intend to change them.\n>>\n> \n> Hmmm, ok. 
I'll think about it, but we're not really using const* in this\n> way very much I think - at least not in the surrounding code.\n> \n>>> 2) I think you're right we can change find_strongest_dependency to do\n>>>\n>>>    /* also skip weaker dependencies when attribute count matches */\n>>>    if (strongest->nattributes == dependency->nattributes &&\n>>>        strongest->degree >= dependency->degree)\n>>>        continue;\n>>>\n>>>   That'll skip some additional dependencies, which seems OK.\n>>>\n>>> 3) It's not clear to me what you mean by\n>>>\n>>>     * TODO: Improve this code comment.  Specifically, why would we\n>>>     * ignore that no rows will match?  It seems that such a discovery\n>>>     * would allow us to return an estimate of 0 rows, and that would\n>>>     * be useful.\n>>>\n>>>   added to dependencies_clauselist_selectivity. Are you saying we\n>>>   should also compute selectivity estimates for individual clauses and\n>>>   use Min() as a limit? Maybe, but that seems unrelated to the patch.\n>>\n>> I mean that the comment right above that TODO is hard to understand. \n>> You seem to be saying that it is good and proper to only take the \n>> selectivity estimate from the final clause in the list, but then go on \n>> to say that other clauses might prove that no rows will match.  So \n>> that implies that by ignoring all but the last clause, we're ignoring \n>> such other clauses that prove no rows can match.  But why would we be \n>> ignoring those?\n>>\n>> I am not arguing that your code is wrong.  I'm just critiquing the \n>> hard-to-understand phrasing of that code comment.\n>>\n> \n> Aha, I think I understand now - thanks for the explanation. You're right\n> the comment is trying to explain why just taking the last clause for a\n> given attnum is fine. 
I'll try to make the comment clearer.\n\nAre you planning to submit a revised patch for this?\n\n\n\n-- \nMark Dilger\n\n\n", "msg_date": "Sat, 30 Nov 2019 15:01:31 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Sat, Nov 30, 2019 at 03:01:31PM -0800, Mark Dilger wrote:\n>\n>Are you planning to submit a revised patch for this?\n>\n\nYes, I'll submit a rebased version of this patch shortly. It got broken\nbecause of the recent fix in choose_best_statistics, shouldn't take long\nto update the patch. I do have a couple more related patches in the\nqueue, so I want to submit them all at once.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 1 Dec 2019 20:08:58 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Sun, Dec 01, 2019 at 08:08:58PM +0100, Tomas Vondra wrote:\n>On Sat, Nov 30, 2019 at 03:01:31PM -0800, Mark Dilger wrote:\n>>\n>>Are you planning to submit a revised patch for this?\n>>\n>\n>Yes, I'll submit a rebased version of this patch shortly. It got broken\n>because of the recent fix in choose_best_statistics, shouldn't take long\n>to update the patch. I do have a couple more related patches in the\n>queue, so I want to submit them all at once.\n>\n\nOK, here we go - these two patches allow applying multiple extended\nstatistics, both for MCV and functional dependencies. 
Functional\ndependencies are simply merged and then applied at once (so without\nchoose_best_statistics), statistics are considered in a greedy manner by\ncalling choose_best_statistics in a loop.\n\nI do have some additional enhancements in the queue, but those are not\nfully baked yet, so I'll post them later in separate patches.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Dec 2019 18:15:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Thu, Dec 05, 2019 at 06:15:54PM +0100, Tomas Vondra wrote:\n>On Sun, Dec 01, 2019 at 08:08:58PM +0100, Tomas Vondra wrote:\n>>On Sat, Nov 30, 2019 at 03:01:31PM -0800, Mark Dilger wrote:\n>>>\n>>>Are you planning to submit a revised patch for this?\n>>>\n>>\n>>Yes, I'll submit a rebased version of this patch shortly. It got broken\n>>because of the recent fix in choose_best_statistics, shouldn't take long\n>>to update the patch. I do have a couple more related patches in the\n>>queue, so I want to submit them all at once.\n>>\n>\n>OK, here we go - these two patches allow applying multiple extended\n>statistics, both for MCV and functional dependencies. 
Functional\n>dependencies are simply merged and then applied at once (so without\n>choose_best_statistics), statistics are considered in a greedy manner by\n>calling choose_best_statistics in a loop.\n>\n\nOK, this time with the patches actually attached ;-)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 5 Dec 2019 18:51:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "\n\nOn 12/5/19 9:51 AM, Tomas Vondra wrote:\n> On Thu, Dec 05, 2019 at 06:15:54PM +0100, Tomas Vondra wrote:\n>> On Sun, Dec 01, 2019 at 08:08:58PM +0100, Tomas Vondra wrote:\n>>> On Sat, Nov 30, 2019 at 03:01:31PM -0800, Mark Dilger wrote:\n>>>>\n>>>> Are you planning to submit a revised patch for this?\n>>>>\n>>>\n>>> Yes, I'll submit a rebased version of this patch shortly. It got broken\n>>> because of the recent fix in choose_best_statistics, shouldn't take long\n>>> to update the patch. I do have a couple more related patches in the\n>>> queue, so I want to submit them all at once.\n>>>\n>>\n>> OK, here we go - these two patches allow applying multiple extended\n>> statistics, both for MCV and functional dependencies. Functional\n>> dependencies are simply merged and then applied at once (so without\n>> choose_best_statistics), statistics are considered in a greedy manner by\n>> calling choose_best_statistics in a loop.\n>>\n> \n> OK, this time with the patches actually attached ;-)\n\nThese look good to me. I added extra tests (not included in this email)\nto verify the code on more interesting test cases, such as partitioned\ntables and within joins. 
Your test cases are pretty trivial, just being\nselects from a single table.\n\nI'll go mark this \"ready for committer\".\n\n-- \nMark Dilger\n\n\n", "msg_date": "Mon, 9 Dec 2019 11:56:39 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On Mon, Dec 09, 2019 at 11:56:39AM -0800, Mark Dilger wrote:\n>\n>\n>On 12/5/19 9:51 AM, Tomas Vondra wrote:\n>>On Thu, Dec 05, 2019 at 06:15:54PM +0100, Tomas Vondra wrote:\n>>>On Sun, Dec 01, 2019 at 08:08:58PM +0100, Tomas Vondra wrote:\n>>>>On Sat, Nov 30, 2019 at 03:01:31PM -0800, Mark Dilger wrote:\n>>>>>\n>>>>>Are you planning to submit a revised patch for this?\n>>>>>\n>>>>\n>>>>Yes, I'll submit a rebased version of this patch shortly. It got broken\n>>>>because of the recent fix in choose_best_statistics, shouldn't take long\n>>>>to update the patch. I do have a couple more related patches in the\n>>>>queue, so I want to submit them all at once.\n>>>>\n>>>\n>>>OK, here we go - these two patches allow applying multiple extended\n>>>statistics, both for MCV and functional dependencies. Functional\n>>>dependencies are simply merged and then applied at once (so without\n>>>choose_best_statistics), statistics are considered in a greedy manner by\n>>>calling choose_best_statistics in a loop.\n>>>\n>>\n>>OK, this time with the patches actually attached ;-)\n>\n>These look good to me. I added extra tests (not included in this email)\n>to verify the code on more interesting test cases, such as partitioned\n>tables and within joins. Your test cases are pretty trivial, just being\n>selects from a single table.\n>\n\nAdding such more complex tests seems like a good idea, maybe you'd like\nto share them?\n\n>I'll go mark this \"ready for committer\".\n>\n\nThanks for the review. 
I'll hold-off with the commit until the next CF,\nthough, just to give others a proper opportunity to look at it.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 9 Dec 2019 23:00:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "On 12/9/19 2:00 PM, Tomas Vondra wrote:\n>>\n>> These look good to me.  I added extra tests (not included in this email)\n>> to verify the code on more interesting test cases, such as partitioned\n>> tables and within joins.  Your test cases are pretty trivial, just being\n>> selects from a single table.\n>>\n> \n> Adding such more complex tests seem like a good idea, maybe you'd like\n> to share them?\n\nYou can find them attached. I did not include them in my earlier email\nbecause they seem a bit unrefined, taking too many lines of code for the\namount of coverage they provide. But you can prune them down and add\nthem to the patch if you like.\n\nThese only test the functional dependencies. If you want to include\nsomething like them in your commit, you might create similar tests for\nthe mcv statistics, too.\n\n-- \nMark Dilger", "msg_date": "Mon, 9 Dec 2019 17:18:28 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using multiple extended statistics for estimates" }, { "msg_contents": "Hi,\n\nI've pushed these two improvements after some minor improvements, mostly\nto comments. 
I ended up not using the extra tests, as it wasn't clear to\nme it's worth the extra duration.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 13 Jan 2020 01:24:15 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Using multiple extended statistics for estimates" } ]
[ { "msg_contents": "Hi,\n\nPlease find the attached patch having the fix for the typos and\ninconsistencies present in code.\nThe patch contains the following changes:\n1) attibute -> attribute\n2) efficent -> efficient\n3) becuase -> because\n4) fallthru -> fall through\n5) uncoming -> upcoming\n6) ans -> and\n7) requrested -> requested\n8) peforming -> performing\n9) heartbearts -> heartbeats\n10) parametrizing -> parameterizing\n11) uninit -> uninitialized\n12) bufgr -> bufmgr\n13) directi -> direct\n14) thead -> thread\n15) somthing -> something\n16) freek -> freak\n17) changesd -> changes\n\nLet me know your thoughts on the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Oct 2019 23:21:54 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Typos and inconsistencies in code" }, { "msg_contents": "On Mon, Oct 28, 2019 at 11:22 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> Please find the attached patch having the fix for the typos and\n> inconsistencies present in code.\n> The patch contains the following changes:\n> 1) attibute -> attribute\n> 2) efficent -> efficient\n> 3) becuase -> because\n> 4) fallthru -> fall through\n> 5) uncoming -> upcoming\n> 6) ans -> and\n> 7) requrested -> requested\n> 8) peforming -> performing\n> 9) heartbearts -> heartbeats\n> 10) parametrizing -> parameterizing\n> 11) uninit -> uninitialized\n> 12) bufgr -> bufmgr\n> 13) directi -> direct\n> 14) thead -> thread\n> 15) somthing -> something\n> 16) freek -> freak\n> 17) changesd -> changes\n>\n> Let me know your thoughts on the same.\n>\n\nFew comments:\n1.\n * The act of allocating pages to recycle may have invalidated the\n- * results of our previous btree reserch, so repeat it. (We could\n+ * results of our previous btree search, so repeat it. 
(We could\n * recheck whether any of our split-avoidance strategies that were\n\nI think the old comment meant \"btree research\" but you changed to \"btree search\"\n\n2.\n /* copy&pasted from .../src/backend/utils/adt/datetime.c\n- * and changesd struct pg_tm to struct tm\n+ * and changes struct pg_tm to struct tm\n */\nSeems like this comment meant \"Changed struct pg_tm to struct tm\"\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Oct 2019 09:19:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typos and inconsistencies in code" }, { "msg_contents": "On Tue, Oct 29, 2019 at 9:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Few comments:\n> 1.\n> * The act of allocating pages to recycle may have invalidated the\n> - * results of our previous btree reserch, so repeat it. (We could\n> + * results of our previous btree search, so repeat it. (We could\n> * recheck whether any of our split-avoidance strategies that were\n>\nFixed\n> I think the old comment meant \"btree research\" but you changed to \"btree search\"\n>\n> 2.\n> /* copy&pasted from .../src/backend/utils/adt/datetime.c\n> - * and changesd struct pg_tm to struct tm\n> + * and changes struct pg_tm to struct tm\n> */\n> Seems like this comment meant \"Changed struct pg_tm to struct tm\"\nFixed\nThanks for the review.\nI have attached the updated patch with the fixes.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 29 Oct 2019 17:27:20 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Typos and inconsistencies in code" }, { "msg_contents": "On Tue, Oct 29, 2019 at 05:27:20PM +0530, vignesh C wrote:\n> I have attached the updated patch with the fixes.\n\nThe changes in rangetypes_gist.c are not correct, the usual pattern to\nadd an \"s\" after the structure name is quite common when referring to\nmultiple 
elements. We could perhaps use \"put-your-struct entries\"\ninstead, but I have seen the pattern of HEAD quite a lot as well (see\nalso for example mcv.c with SortItem that is a file your patch\ntouches).\n\nA comment indentation was wrong in detoast.c, not the fault of this\npatch but I have fixed it at the same time.\n\nNote: there is room for refactoring in pgtypeslib with the pg_tm/tm\nbusiness..\n\nThe fixes in imath.c had better be submitted in upstream:\nhttps://github.com/creachadair/imath/blob/v1.29/imath.c\nSo I have raised an issue here:\nhttps://github.com/creachadair/imath/issues/43\n--\nMichael", "msg_date": "Wed, 30 Oct 2019 10:05:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Typos and inconsistencies in code" }, { "msg_contents": "On Wed, Oct 30, 2019 at 6:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 29, 2019 at 05:27:20PM +0530, vignesh C wrote:\n> > I have attached the updated patch with the fixes.\n>\n> The changes in rangetypes_gist.c are not correct, the usual pattern to\n> add an \"s\" after the structure name is quite common when referring to\n> multiple elements. We could perhaps use \"put-your-struct entries\"\n> instead, but I have seen the pattern of HEAD quite a lot as well (see\n> also for example mcv.c with SortItem that is a file your patch\n> touches).\n>\n> A comment indentation was wrong in detoast.c, not the fault of this\n> patch but I have fixed it at the same time.\n>\nThanks for pushing the changes Michael.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Oct 2019 09:20:16 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Typos and inconsistencies in code" } ]
[ { "msg_contents": "Hi,\n\nI've groused about this a few times, but to me it seems wrong that\nHashJoin and Hash are separate nodes. They're so tightly bound together\nthat keeping them separate just doesn't architecturally make sense,\nimo. So I wrote a prototype.\n\nEvidence of being tightly bound together:\n- functions in nodeHash.h that take a HashJoinState etc\n- how many functions in nodeHash.h and nodeHashjoin.h are purely exposed\n so the other side can call them\n- there's basically no meaningful separation of concerns between code in\n nodeHash.c and nodeHashjoin.c\n- the Hash node doesn't really exist during most of the planning, it's\n kind of faked up in create_hashjoin_plan().\n- HashJoin knows that the inner node is always going to be a Hash node.\n- HashJoinState and HashState both have pointers to HashJoinTable, etc\n\nBesides violating some aesthetical concerns, I think it also causes\npractical issues:\n\n- code being in different translation units prevents the compiler from\n inlining etc. There's a lot of HOT calls going between both. For each\n new outer tuple we e.g. call, from nodeHashjoin.c separately into\n nodeHash.c for ExecHashGetHashValue(), ExecHashGetBucketAndBatch(),\n ExecHashGetSkewBucket(), ExecScanHashBucket(). They each have to\n do memory loads from HashJoinState/HashJoinTable, even though previous\n code *just* has done so.\n- a separate executor node, and all the ancillary data (slots,\n expression context, target lists etc) is far from free\n- instead of just applying an \"empty outer\" style optimization to both\n sides of the HashJoin, we have to choose. Once unified it's fairly\n easy to just use it on both.\n- generally, a lot of improvements are harder to develop because of the\n odd separation.\n\n\nDoes anybody have good arguments for keeping them separate? The only\nreal one I can see is that it's not a small change, and will make\nbugfixes etc a bit harder. 
Personally I think that's outweighed by the\ndisadvantages.\n\nAttached is a quick prototype that unifies them. It's not actually that hard,\nI think? Obviously this is far from ready, but I thought it'd be a good\nbasis to get a discussion started?\n\nComments on the prototype:\n\n- I've hacked EXPLAIN to still show the Hash node, to reduce the size of\n the diffs. I'm doubtful that that's the right approach (and I'm sure\n it's not the right approach to do so with the code I injected) - I\n think the Hash node in the explain doesn't really help users, and just\n makes the explain bigger (except for making it clearer which side is\n hashed)\n- currently I applied a very ugly hack to distinguish the parallel\n shm_toc key for the data previously in hash from the data previously\n in HashJoin. Clearly that'd need to be done properly.\n- obviously we'd have to work a lot more on comments, function ordering,\n docs etc. if we wanted to actually apply this.\n\nFWIW, it's much easier to look at the patch if you use\n--color-moved --color-moved-ws=allow-indentation-change\n\nas parameters, as that will color code that's moved without any changes\n(except for indentation), differently from modified code.\n\n\nOne thing I noticed is that create_hashjoin_plan() currently says:\n\n\t/*\n\t * Set Hash node's startup & total costs equal to total cost of input\n\t * plan; this only affects EXPLAIN display not decisions.\n\t */\n\tcopy_plan_costsize(&hash_plan->plan, inner_plan);\n\thash_plan->plan.startup_cost = hash_plan->plan.total_cost;\n\nwhich I don't think is actually true? We use that for:\n else if (HJ_FILL_OUTER(node) ||\n (outerNode->plan->startup_cost < hashNode->ps.plan->total_cost &&\n !node->hj_OuterNotEmpty))\n\nLeaving the inaccurate (outdated?) comment aside, it's not clear to me\nwhy we should ignore the cost of hashing?\n\nIt also seems like we ought actually charge the cost of hashing to the\nhash node, given that we actually apply some hashing cost\n(c.f. 
initial_cost_hashjoin).\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 28 Oct 2019 16:15:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "merging HashJoin and Hash nodes" }, { "msg_contents": "On Tue, Oct 29, 2019 at 12:15 PM Andres Freund <andres@anarazel.de> wrote:\n> I've groused about this a few times, but to me it seems wrong that\n> HashJoin and Hash are separate nodes. They're so tightly bound together\n> that keeping them separate just doesn't architecturally make sense,\n> imo. So I wrote a prototype.\n>\n> Evidence of being tightly bound together:\n> - functions in nodeHash.h that take a HashJoinState etc\n> - how many functions in nodeHash.h and nodeHashjoin.h are purely exposed\n> so the other side can call them\n> - there's basically no meaningful separation of concerns between code in\n> nodeHash.c and nodeHashjoin.c\n> - the Hash node doesn't really exist during most of the planning, it's\n> kind of faked up in create_hashjoin_plan().\n> - HashJoin knows that the inner node is always going to be a Hash node.\n> - HashJoinState and HashState both have pointers to HashJoinTable, etc\n>\n> Besides violating some aesthetical concerns, I think it also causes\n> practical issues:\n>\n> - code being in different translation units prevents the compiler from\n> inlining etc. There's a lot of HOT calls going between both. For each\n> new outer tuple we e.g. call, from nodeHashjoin.c separately into\n> nodeHash.c for ExecHashGetHashValue(), ExecHashGetBucketAndBatch(),\n> ExecHashGetSkewBucket(), ExecScanHashBucket(). They each have to\n> do memory loads from HashJoinState/HashJoinTable, even though previous\n> code *just* has done so.\n> - a separate executor node, and all the ancillary data (slots,\n> expression context, target lists etc) is far from free\n> - instead of just applying an \"empty outer\" style optimization to both\n> sides of the HashJoin, we have to choose. 
Once unified it's fairly\n> easy to just use it on both.\n> - generally, a lot of improvements are harder to develop because of the\n> odd separation.\n\nI agree with all of that.\n\n> Does anybody have good arguments for keeping them separate? The only\n> real one I can see is that it's not a small change, and will make\n> bugfixes etc a bit harder. Personally I think that's outweighed by the\n> disadvantages.\n\nYeah, the ~260KB of churn you came up with is probably the reason I\ndidn't even think of suggesting something along these lines while\nworking on PHJ, though it did occur to me that the division was\nentirely artificial as I carefully smashed more holes in both\ndirections through that wall.\n\nTrying to think of a reason to keep Hash, I remembered Kohei KaiGai's\nspeculation about Hash nodes that are shared by different Hash Join\nnodes (in the context of a partition-wise join where each partition is\njoined against one table). But even if we were to try to do that, a\nHash node isn't necessary to share the hash table, so that's not an\nargument.\n\n> Attached is a quick prototype that unifies them. It's not actually that hard,\n> I think? Obviously this is far from ready, but I thought it'd be a good\n> basis to get a discussion started?\n\nI haven't looked at the patch yet, but it sounds like a\ngood start from the description.\n\n> Comments on the prototype:\n>\n> - I've hacked EXPLAIN to still show the Hash node, to reduce the size of\n> the diffs. 
I'm doubtful that that's the right approach (and I'm sure\n> it's not the right approach to do so with the code I injected) - I\n> think the Hash node in the explain doesn't really help users, and just\n> makes the explain bigger (except for making it clearer which side is\n> hashed)\n\nYeah, I'm not sure why you'd want to show a Hash node to users if\nthere is no way to use it in any other context than a Hash Join.\n\nFWIW, Oracle, DB2 and SQL Server don't show an intermediate Hash node\nin their plans, and generally you just have to know which way around\nthe input relations are shown (from a quick a glance at some examples\nfound on the web, Oracle and SQL Server show hash relation above probe\nrelation, while DB2, PostgreSQL and MySQL show probe relation above\nhash relation). Curiously, MySQL 8 just added hash joins, and they do\nshow a Hash node (at least in FORMAT=TREE mode, which looks a bit like\nour EXPLAIN).\n\nThe fact that EXPLAIN doesn't label relations seems to be a separate\nconcern that applies equally to nestloop joins, and could perhaps be\naddressed with some more optional verbosity, not a fake node?\n\n\n", "msg_date": "Tue, 29 Oct 2019 14:00:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: merging HashJoin and Hash nodes" }, { "msg_contents": "On Tue, Oct 29, 2019 at 02:00:00PM +1300, Thomas Munro wrote:\n>On Tue, Oct 29, 2019 at 12:15 PM Andres Freund <andres@anarazel.de> wrote:\n>> I've groused about this a few times, but to me it seems wrong that\n>> HashJoin and Hash are separate nodes. They're so tightly bound together\n>> that keeping them separate just doesn't architecturally makes sense,\n>> imo. 
So I wrote a prototype.\n>>\n>> Evidence of being tightly bound together:\n>> - functions in nodeHash.h that take a HashJoinState etc\n>> - how many functions in nodeHash.h and nodeHashjoin.h are purely exposed\n>> so the other side can call them\n>> - there's basically no meaningful separation of concerns between code in\n>> nodeHash.c and nodeHashjoin.c\n>> - the Hash node doesn't really exist during most of the planning, it's\n>> kind of faked up in create_hashjoin_plan().\n>> - HashJoin knows that the inner node is always going to be a Hash node.\n>> - HashJoinState and HashState both have pointers to HashJoinTable, etc\n>>\n>> Besides violating some aesthetical concerns, I think it also causes\n>> practical issues:\n>>\n>> - code being in different translation code prevents the compiler from\n>> inlining etc. There's a lot of HOT calls going between both. For each\n>> new outer tuple we e.g. call, from nodeHashjoin.c separately into\n>> nodeHash.c for ExecHashGetHashValue(), ExecHashGetBucketAndBatch(),\n>> ExecHashGetSkewBucket(), ExecScanHashBucket(). They each have to\n>> do memory loads from HashJoinState/HashJoinTable, even though previous\n>> code *just* has done so.\n\nI wonder how much we can gain by this. I don't expect any definitive\nfigures from a patch at this stage, but maybe you have some guesses? \n\n>> - a separate executor node, and all the ancillary data (slots,\n>> expression context, target lists etc) is far from free\n>> - instead of just applying an \"empty outer\" style optimization to both\n>> sides of the HashJoin, we have to choose. Once unified it's fairly\n>> easy to just use it on both.\n>> - generally, a lot of improvements are harder to develop because of the\n>> odd separation.\n>\n>I agree with all of that.\n>\n>> Does anybody have good arguments for keeping them separate? The only\n>> real one I can see is that it's not a small change, and will make\n>> bugfixes etc a bit harder. 
Personally I think that's outweighed by the\n>> disadvantages.\n>\n>Yeah, the ~260KB of churn you came up with is probably the reason I\n>didn't even think of suggesting something along these lines while\n>working on PHJ, though it did occur to me that the division was\n>entirely artificial as I carefully smashed more holes in both\n>directions through that wall.\n>\n>Trying to think of a reason to keep Hash, I remembered Kohei KaiGai's\n>speculation about Hash nodes that are shared by different Hash Join\n>nodes (in the context of a partition-wise join where each partition is\n>joined against one table). But even if we were to try to do that, a\n>Hash node isn't necessary to share the hash table, so that's not an\n>argument.\n>\n>> Attached is a quick prototype that unifies them. It's not actually that hard,\n>> I think? Obviously this is far from ready, but I thought it'd be a good\n>> basis to get a discussion started?\n>\n>I haven't looked at the patch yet but this yet but it sounds like a\n>good start from the description.\n>\n>> Comments on the prototype:\n>>\n>> - I've hacked EXPLAIN to still show the Hash node, to reduce the size of\n>> the diffs. I'm doubtful that that's the right approach (and I'm sure\n>> it's not the right approach to do so with the code I injected) - I\n>> think the Hash node in the explain doesn't really help users, and just\n>> makes the explain bigger (except for making it clearer which side is\n>> hashed)\n>\n>Yeah, I'm not sure why you'd want to show a Hash node to users if\n>there is no way to use it in any other context than a Hash Join.\n>\n>FWIW, Oracle, DB2 and SQL Server don't show an intermediate Hash node\n>in their plans, and generally you just have to know which way around\n>the input relations are shown (from a quick a glance at some examples\n>found on the web, Oracle and SQL Server show hash relation above probe\n>relation, while DB2, PostgreSQL and MySQL show probe relation above\n>hash relation). 
Curiously, MySQL 8 just added hash joins, and they do\n>show a Hash node (at least in FORMAT=TREE mode, which looks a bit like\n>our EXPLAIN).\n>\n\nNot sure. Maybe we don't need to show an explicit Hash node, because\nthat might seem to imply there's a separate executor step. And that\nwould be misleading.\n\nBut IMO we should make it obvious which side of the join is hashed,\ninstead of relying on users to \"know which way around the relations are\nshown\". The explain is often used by users who're learning stuff, or\nmaybe investigating it for the first time, and we should not make it\nunnecessarily hard to understand.\n\nI don't think we have any other \"explain node\" that would not represent\nan actual executor node, so not sure what's the right approach. So maybe\nthat's not the right way to do that ...\n\nOTOH we have tools visualizing execution plans, so maybe backwards\ncompatibility of the output is a concern too (I know we don't promise\nanything, though).\n\n>The fact that EXPLAIN doesn't label relations seems to be a separate\n>concern that applies equally to nestloop joins, and could perhaps be\n>addressed with some more optional verbosity, not a fake node?\n>\n\nYeah, that seems separate.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 31 Oct 2019 23:59:19 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: merging HashJoin and Hash nodes" }, { "msg_contents": "Hi,\n\nOn 2019-10-31 23:59:19 +0100, Tomas Vondra wrote:\n> On Tue, Oct 29, 2019 at 02:00:00PM +1300, Thomas Munro wrote:\n> > On Tue, Oct 29, 2019 at 12:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I've groused about this a few times, but to me it seems wrong that\n> > > HashJoin and Hash are separate nodes. They're so tightly bound together\n> > > that keeping them separate just doesn't architecturally makes sense,\n> > > imo. 
So I wrote a prototype.\n> > > \n> > > Evidence of being tightly bound together:\n> > > - functions in nodeHash.h that take a HashJoinState etc\n> > > - how many functions in nodeHash.h and nodeHashjoin.h are purely exposed\n> > > so the other side can call them\n> > > - there's basically no meaningful separation of concerns between code in\n> > > nodeHash.c and nodeHashjoin.c\n> > > - the Hash node doesn't really exist during most of the planning, it's\n> > > kind of faked up in create_hashjoin_plan().\n> > > - HashJoin knows that the inner node is always going to be a Hash node.\n> > > - HashJoinState and HashState both have pointers to HashJoinTable, etc\n> > > \n> > > Besides violating some aesthetical concerns, I think it also causes\n> > > practical issues:\n> > > \n> > > - code being in different translation code prevents the compiler from\n> > > inlining etc. There's a lot of HOT calls going between both. For each\n> > > new outer tuple we e.g. call, from nodeHashjoin.c separately into\n> > > nodeHash.c for ExecHashGetHashValue(), ExecHashGetBucketAndBatch(),\n> > > ExecHashGetSkewBucket(), ExecScanHashBucket(). They each have to\n> > > do memory loads from HashJoinState/HashJoinTable, even though previous\n> > > code *just* has done so.\n> \n> I wonder how much we can gain by this. I don't expect any definitive\n> figures from a patch at this stage, but maybe you have some guesses?\n\nIt's measureable, but not a world-changing difference. Some of the gains\nare limited by the compiler not realizing that it does not have to\nreload values across some external function calls. I saw somewhere\naround ~3% for a case that was bottlenecked by HJ lookups (not\nbuild).\n\nI think part of the effect size is also limited by other unnecessary\ninefficiencies being a larger bottleneck. E.g.\n\n1) the fact that ExecScanHashBucket() contains branches that have\n roughly 50% likelihoods, making them unpredictable ( 1. 
on a\n successful lookup we oscillate between the first hashTuple != NULL\n test succeeding and failing except in case of bucket conflict; 2. the\n while (hashTuple != NULL) oscillates similarly, because it tests for\n I. initial lookup succeeding, II. further tuple in previous bucket\n lookup III. further tuples in case of hashvalue mismatch. Quite\n observable by profiling for br_misp_retired.conditional.\n2) The fact that there's *two* indirections for a successful lookup\n that are very likely to be cache misses. First we need to look up the\n relevant bucket, second we need to actually fetch hashvalue from the\n pointer stored in the bucket.\n\n\nBut even independent of these larger inefficiencies, I suspect we could\nalso gain more from inlining by changing nodeHashjoin a bit. E.g. moving\nthe HJ_SCAN_BUCKET code into an always_inline function, and also\nreferencing it from the tail end of the HJ_NEED_NEW_OUTER code, instead\nof falling through, would allow to optimize away a number of loads (and\nI think also stores), and improve branch predictor\nefficiency. E.g. optimizing away store/load combinations for\nnode->hj_CurHashValue, node->hj_CurBucketNo, node->hj_CurSkewBucketNo;\nloads of hj_HashTable, ...; stores of node->hj_JoinState,\nnode->hj_MatchedOuter. And probably make the code easier to read, to\nboot.\n\n\n> But IMO we should make it obvious which side of the join is hashed,\n> instead of relying on users to \"know which way around the relations are\n> shown\". The explain is often used by users who're learning stuff, or\n> maybe investigating it for the first time, and we should not make it\n> unnecessarily hard to understand.\n\nI agree. I wonder if just outputting something like 'Hashed Side:\nsecond' (or \"right\", or ...) could work. Not perfect, but I don't really\nhave a better idea.\n\nWe somewhat rely on users understanding inner/outer for explain output\n(I doubt this is good, to be clear), e.g. \"Inner Unique: true \". 
Made\nworse by the fact that \"inner\"/\"outer\" is also used to describe\ndifferent kinds of joins, with a mostly independent meaning.\n\n\n\n> OTOH we have tools visualizing execution plans, so maybe backwards\n> compatibility of the output is a concern too (I know we don't promise\n> anything, though).\n\nWell, execution *would* work a bit differently, so I don't feel too bad\nabout tools having to adapt to that. E.g. a graphical explain tool really\nshouldn't display a separate Hash node anymore.\n\n\n> > The fact that EXPLAIN doesn't label relations seems to be a separate\n> > concern that applies equally to nestloop joins, and could perhaps be\n> > addressed with some more optional verbosity, not a fake node?\n\n> Yeah, that seems separate.\n\nI'm not sure that's true. If we were labelling sub-nodes with 'inner'\nand 'outer' or such, we could just make that 'hashed inner' or such. But\nchanging this seems to be a large compat break...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Oct 2019 16:43:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: merging HashJoin and Hash nodes" } ]
[ { "msg_contents": "The attached patch teaches psql to redisplay any not-yet-executed\nquery text after editing with \\e. The fact that you don't get to\nsee what you're about to execute has been complained of before,\nmost recently at bug #16034 [1]. In that thread I complained that\nwe needed some probably-not-very-portable readline functionality\nto make this work. However, after experimenting with trying to\nshove text back into readline's buffer, I realized that there's\nnot really any need to do that: we just need to print the waiting\ntext and then collect another line. (As a bonus, it works the\nsame even if you turned off readline with -n.)\n\nIt also seems like to make this not confusing, we need to regurgitate\na prompt before the query text. As an example, if I do\n\nregression=# \\e\n\nand then put this into the edited file:\n\nselect 1,\n2,\n3\n\nwhat I see after exiting the editor is now\n\nregression=# select 1,\n2,\n3\nregression-# \n\nWithout the initial prompt it looks (to me anyway) like output\nfrom the command, rather than something I've sort of automagically\ntyped.\n\nIn the cited bug, Pavlo argued that we should also print any\ncompleted commands that get sent to the backend immediately\nafter \\e. It'd be possible to do that by extending this patch\n(basically, dump about-to-be-executed commands if need_redisplay\nis still true), but on the whole I think that that would be overly\nchatty, so I didn't do it.\n\nThis could stand some review and testing (e.g. 
does it interact\nbadly with any other psql features), so I'll add it to the\nupcoming CF.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16034-a7ebf0622970a1dd%40postgresql.org", "msg_date": "Mon, 28 Oct 2019 23:00:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Getting psql to redisplay command after \\e" }, { "msg_contents": "\nHello Tom,\n\n> The attached patch teaches psql to redisplay any not-yet-executed\n> query text after editing with \\e. \n>\n> [...]\n\nI've tested this patch. Although I agree that it is an improvement, I'm a \nlittle at odds with the feature as is:\n\n psql=> \\e\n # select 1...\n\nthen:\n\n psql=> select 1...\n psql-> <prompt>\n\nI cannot move back with readline to edit further, I'm stuck there, which \nis strange. I would prefer a simpler:\n\n psql=> select 1...<prompt>\n\nthat would also be readline-aware, so that I know I'm there and ready to \nnl but also to edit directly if I want that.\n\nThat would suggest to remove the ending newline rather than appending it, \nand possibly to discuss a little bit with readline as well so that the \ndisplay line is also the current line for its point of view, so that it \ncan be edited further?\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 29 Oct 2019 09:23:43 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> The attached patch teaches psql to redisplay any not-yet-executed\n>> query text after editing with \\e. \n\n> I've tested this patch. Although I agree that it is an improvement, I'm a \n> little at odds with the feature as is:\n\n> psql=> \\e\n> # select 1...\n\n> then:\n\n> psql=> select 1...\n> psql-> <prompt>\n\n> I cannot move back with readline to edit further, I'm stuck there, which \n> is strange.\n\nI don't follow. 
readline doesn't allow you to edit already-entered lines\ntoday, that is, after typing \"select 1<return>\" you see\n\nregression=# select 1\nregression-# \n\nand there isn't any way to move back and edit the already-entered line\nwithin readline. I agree it might be nicer if you could do that, but\nthat's *far* beyond the scope of this patch. It would take entirely\nfundamental rethinking of our use of libreadline, if indeed it's possible\nat all. I also don't see how we could have syntax-aware per-line prompts\nif we were allowing readline to treat the whole query as one line.\n\nIn the larger picture, tinkering with how that works would affect\nevery psql user at the level of \"muscle memory\" editing habits,\nand I suspect that their reactions would not be uniformly positive.\nWhat I propose here doesn't affect anyone who doesn't use \\e at all.\nEven for \\e users it doesn't have any effect on what you need to type.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Oct 2019 12:06:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "On Mon, 2019-10-28 at 23:00 -0400, Tom Lane wrote:\n> The attached patch teaches psql to redisplay any not-yet-executed\n> query text after editing with \\e. The fact that you don't get to\n> see what you're about to execute has been complained of before,\n> most recently at bug #16034 [1]. In that thread I complained that\n> we needed some probably-not-very-portable readline functionality\n> to make this work. However, after experimenting with trying to\n> shove text back into readline's buffer, I realized that there's\n> not really any need to do that: we just need to print the waiting\n> text and then collect another line. 
(As a bonus, it works the\n> same even if you turned off readline with -n.)\n\nThis is a nice improvement.\n\nI tried to torture it with a hex editor, but couldn't get it to break.\n\nThere were some weird carriage returns in the patch, but after I\nremoved them, it applied fine.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 29 Oct 2019 22:05:34 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "\nHello Tom,\n\n>> psql=> select 1...\n>> psql-> <prompt>\n>\n>> I cannot move back with readline to edit further, I'm stuck there, which\n>> is strange.\n>\n> I don't follow. readline doesn't allow you to edit already-entered lines\n> today, that is, after typing \"select 1<return>\" you see\n>\n> regression=# select 1\n> regression-#\n>\n> and there isn't any way to move back and edit the already-entered line\n> within readline.\n\nYep.\n\nMy point is to possibly not implicitly <return> at the end of \\e, but to \nbehave as if we were moving in history, which allows editing the lines, so \nthat you would get\n\n psql=> select 1<cursor>\n\nInstead of the above.\n\n> I agree it might be nicer if you could do that, but that's *far* beyond \n> the scope of this patch. It would take entirely fundamental rethinking \n> of our use of libreadline, if indeed it's possible at all. I also don't \n> see how we could have syntax-aware per-line prompts if we were allowing \n> readline to treat the whole query as one line.\n\nI was suggesting something much simpler than rethinking readline handling. \nDoes not mean that it is a good idea, but while testing the patch I would \nhave liked the unfinished line to be in the current editing buffer, \nbasically as if I had not typed <nl>.\n\nISTM more natural that \\e behaves like history when coming back from \nediting, i.e. 
the \\e-edited line is set as the current buffer for \nreadline.\n\n> In the larger picture, tinkering with how that works would affect\n> every psql user at the level of \"muscle memory\" editing habits,\n> and I suspect that their reactions would not be uniformly positive.\n> What I propose here doesn't affect anyone who doesn't use \\e at all.\n> Even for \\e users it doesn't have any effect on what you need to type.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 31 Oct 2019 10:09:14 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> My point is to possibly not implicitely <return> at the end of \\e, but to \n> behave as if we were moving in history, which allows editing the lines, so \n> that you would get\n> psql=> select 1<cursor>\n> Instead of the above.\n\n>> I agree it might be nicer if you could do that, but that's *far* beyond \n>> the scope of this patch. It would take entirely fundamental rethinking \n>> of our use of libreadline, if indeed it's possible at all. I also don't \n>> see how we could have syntax-aware per-line prompts if we were allowing \n>> readline to treat the whole query as one line.\n\n> I was suggesting something much simpler than rethinking readline handling. \n> Does not mean that it is a good idea, but while testing the patch I would \n> have liked the unfinished line to be in the current editing buffer, \n> basically as if I had not typed <nl>.\n\nI did experiment with trying to do that, but I couldn't get it to work,\neven with the single version of libreadline I had at hand. It appears\nto me that readline() starts by clearing the internal buffer. Even if\nwe could persuade it to work in a particular readline version, I think\nthe odds of making it portable across all the libreadline and libedit\nversions that are out there aren't very good. 
And there's definitely\nno chance of being remotely compatible with that behavior when using the\nbare tty drivers (psql -n).\n\nIn practice, if you decide that you don't like what you're looking at,\nyou're probably going to go back into the editor to fix it, ie issue\nanother \\e. So I'm not sure that it's worth such pushups to get the\ndata into readline's buffer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Nov 2019 12:21:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "Hello Tom,\n\n>> I was suggesting something much simpler than rethinking readline handling.\n>> Does not mean that it is a good idea, but while testing the patch I would\n>> have liked the unfinished line to be in the current editing buffer,\n>> basically as if I had not typed <nl>.\n>\n> I did experiment with trying to do that, but I couldn't get it to work, \n> even with the single version of libreadline I had at hand. It appears \n> to me that readline() starts by clearing the internal buffer. Even if \n> we could persuade it to work in a particular readline version, I think \n> the odds of making it portable across all the libreadline and libedit \n> versions that are out there aren't very good. And there's definitely no \n> chance of being remotely compatible with that behavior when using the \n> bare tty drivers (psql -n).\n\nArgh, too bad.\n\nThis suggests that readline cannot be used to edit simply a known string? \n:-( \"rl_insert_text\" looked promising, although probably not portable, and \nI tried to make it work without much success anyway. Maybe I'll try to \ninvestigate more deeply later.\n\nNote that replacing the current buffer is exactly what history does. So \nmaybe that could be exploited by appending the edited line into history \n(add_history) and tell readline to move there (could not find how to do \nthat automatically, though)? 
Or some other history handling…\n\n> In practice, if you decide that you don't like what you're looking at,\n> you're probably going to go back into the editor to fix it, ie issue\n> another \\e. So I'm not sure that it's worth such pushups to get the\n> data into readline's buffer.\n\nFor me \\e should mean edit, not edit-and-execute, so I should be back to \nprompt, which is the crux of my unease with how the feature behaves, \nbecause it combines two functions that IMO shouldn't.\n\nAnyway the submitted patch is an improvement to the current status.\n\n-- \nFabien.", "msg_date": "Sun, 3 Nov 2019 20:58:39 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "On Sun, Nov 3, 2019 at 20:58, Fabien COELHO <coelho@cri.ensmp.fr>\nwrote:\n\n>\n> Hello Tom,\n>\n> >> I was suggesting something much simpler than rethinking readline\n> handling.\n> >> Does not mean that it is a good idea, but while testing the patch I\n> would\n> >> have liked the unfinished line to be in the current editing buffer,\n> >> basically as if I had not typed <nl>.\n> >\n> > I did experiment with trying to do that, but I couldn't get it to work,\n> > even with the single version of libreadline I had at hand. It appears\n> > to me that readline() starts by clearing the internal buffer. Even if\n> > we could persuade it to work in a particular readline version, I think\n> > the odds of making it portable across all the libreadline and libedit\n> > versions that are out there aren't very good. And there's definitely no\n> > chance of being remotely compatible with that behavior when using the\n> > bare tty drivers (psql -n).\n>\n> Argh, too bad.\n>\n> This suggests that readline cannot be used to edit simply a known string?\n> :-( \"rl_insert_text\" looked promising, although probably not portable, and\n> I tried to make it work without much success anyway. 
Maybe I'll try to\n> investigate more deeply later.\n>\n\npspg uses rl_insert_text\n\nhttps://github.com/okbob/pspg/blob/59d115cd55926ab1886fc0dedbbc6ce0577b0cb3/src/pspg.c#L2522\n\nPavel\n\n\n> Note that replacing the current buffer is exactly what history does. So\n> maybe that could be exploited by appending the edited line into history\n> (add_history) and tell readline to move there (could not find how to do\n> that automatically, though)? Or some other history handling…\n>\n> > In practice, if you decide that you don't like what you're looking at,\n> > you're probably going to go back into the editor to fix it, ie issue\n> > another \\e. So I'm not sure that it's worth such pushups to get the\n> > data into readline's buffer.\n>\n> For me \\e should mean edit, not edit-and-execute, so I should be back to\n> prompt, which is the crux of my unease with how the feature behaves,\n> because it combines two functions that IMO shouldn't.\n>\n> Anyway the submitted patch is an improvement to the current status.\n>\n> --\n> Fabien.\n\nne 3. 11. 2019 v 20:58 odesílatel Fabien COELHO <coelho@cri.ensmp.fr> napsal:\nHello Tom,\n\n>> I was suggesting something much simpler than rethinking readline handling.\n>> Does not mean that it is a good idea, but while testing the patch I would\n>> have liked the unfinished line to be in the current editing buffer,\n>> basically as if I had not typed <nl>.\n>\n> I did experiment with trying to do that, but I couldn't get it to work, \n> even with the single version of libreadline I had at hand.  It appears \n> to me that readline() starts by clearing the internal buffer.  Even if \n> we could persuade it to work in a particular readline version, I think \n> the odds of making it portable across all the libreadline and libedit \n> versions that are out there aren't very good.  
And there's definitely no \n> chance of being remotely compatible with that behavior when using the \n> bare tty drivers (psql -n).\n\nArgh, too bad.\n\nThis suggests that readline cannot be used to edit simply a known string? \n:-( \"rl_insert_text\" looked promising, although probably not portable, and \nI tried to make it work without much success anyway. Maybe I'll try to \ninvestigate more deeply later.pspg uses rl_insert_text https://github.com/okbob/pspg/blob/59d115cd55926ab1886fc0dedbbc6ce0577b0cb3/src/pspg.c#L2522Pavel \n\nNote that replacing the current buffer is exactly what history does. So \nmaybe that could be exploited by appending the edited line into history \n(add_history) and tell readline to move there (could not find how to do \nthat automatically, though)? Or some other history handling…\n\n> In practice, if you decide that you don't like what you're looking at,\n> you're probably going to go back into the editor to fix it, ie issue\n> another \\e.  So I'm not sure that it's worth such pushups to get the\n> data into readline's buffer.\n\nFor me \\e should mean edit, not edit-and-execute, so I should be back to \nprompt, which is the crux of my unease with how the feature behaves, \nbecause it combines two functions that IMO shouldn't.\n\nAnyway the submitted patch is an improvement to the current status.\n\n-- \nFabien.", "msg_date": "Sun, 3 Nov 2019 21:08:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I did experiment with trying to do that, but I couldn't get it to work, \n>> even with the single version of libreadline I had at hand. It appears \n>> to me that readline() starts by clearing the internal buffer. 
Even if \n>> we could persuade it to work in a particular readline version, I think \n>> the odds of making it portable across all the libreadline and libedit \n>> versions that are out there aren't very good. And there's definitely no \n>> chance of being remotely compatible with that behavior when using the \n>> bare tty drivers (psql -n).\n\n> This suggests that readline cannot be used to edit simply a known string? \n> :-( \"rl_insert_text\" looked promising, although probably not portable, and \n> I tried to make it work without much success anyway. Maybe I'll try to \n> investigate more deeply later.\n\nI think that rl_insert_text and friends can probably only be used from\nreadline callback functions. So in principle maybe you could make it\nwork by having an rl_startup_hook that injects text if there is any\nto inject. There would remain the issues of (a) is it portable across\na wide range of readline and libedit versions, (b) will the prompting\nbehavior be nice, and (c) do we really want this to work fundamentally\ndifferently when readline is turned off?\n\n(Pavel's code cited nearby seems to me to be a fine example of what\nwe do *not* want to do. Getting in bed with libreadline to that\nextent is inevitably going to misbehave in some places.)\n\n>> In practice, if you decide that you don't like what you're looking at,\n>> you're probably going to go back into the editor to fix it, ie issue\n>> another \\e. So I'm not sure that it's worth such pushups to get the\n>> data into readline's buffer.\n\n> For me \\e should mean edit, not edit-and-execute, so I should be back to \n> prompt, which is the crux of my unease with how the feature behaves, \n> because it combines two functions that IMO shouldn't.\n\nI don't understand that complaint at all. 
My proposed patch does not\nchange the behavior to force execution, and it does display a prompt ---\none that reflects whether you've given a complete command.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Nov 2019 19:35:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "I wrote:\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> This suggests that readline cannot be used to edit simply a known string? \n>> :-( \"rl_insert_text\" looked promising, although probably not portable, and \n>> I tried to make it work without much success anyway. Maybe I'll try to \n>> investigate more deeply later.\n\n> I think that rl_insert_text and friends can probably only be used from\n> readline callback functions. So in principle maybe you could make it\n> work by having an rl_startup_hook that injects text if there is any\n> to inject. There would remain the issues of (a) is it portable across\n> a wide range of readline and libedit versions, (b) will the prompting\n> behavior be nice, and (c) do we really want this to work fundamentally\n> differently when readline is turned off?\n\nI thought maybe you were going to work on this right away, but since\nyou haven't, I went ahead and pushed what I had. There's certainly\nplenty of time to reconsider if you find a better answer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Nov 2019 17:10:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting psql to redisplay command after \\e" }, { "msg_contents": "\n>> I think that rl_insert_text and friends can probably only be used from\n>> readline callback functions. So in principle maybe you could make it\n>> work by having an rl_startup_hook that injects text if there is any\n>> to inject. 
There would remain the issues of (a) is it portable across\n>> a wide range of readline and libedit versions, (b) will the prompting\n>> behavior be nice, and (c) do we really want this to work fundamentally\n>> differently when readline is turned off?\n>\n> I thought maybe you were going to work on this right away, but since\n> you haven't, I went ahead and pushed what I had. There's certainly\n> plenty of time to reconsider if you find a better answer.\n\nIndeed.\n\nI started to play with the startup hook, it kind of worked but not exactly \nas I wished, and I do not have much time available this round.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 23 Nov 2019 19:08:11 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Getting psql to redisplay command after \\e" } ]
[ { "msg_contents": "Hello,\nI think this demonstrates a bug, tested in 11.5:\nhttps://gist.github.com/wkalt/a298fe82c564668c803b3537561e67a0\n\nThe same script succeeds if the index on line 11 is either dropped, made to\nbe non-partial on b, or shifted to a different column (the others are used\nin the partitioning; maybe significant).\n\nThis seems somewhat related to\nhttps://www.postgresql.org/message-id/flat/CAFjFpRc0hqO5hc-%3DFNePygo9j8WTtOvvmysesnN8bfkp3vxHPQ%40mail.gmail.com#00ca695e6c71834622a6e42323f5558a\n.\n\nRegards,\nWyatt", "msg_date": "Mon, 28 Oct 2019 21:00:24 -0700", "msg_from": "Wyatt Alt <wyatt.alt@gmail.com>", "msg_from_op": true, "msg_subject": "[BUG] Partition creation fails after dropping a column and adding a\n partial index" }, { "msg_contents": "Here's a slightly smaller repro:\nhttps://gist.github.com/wkalt/36720f39c97567fa6cb18cf5c05ac60f", "msg_date": "Mon, 28 Oct 2019 21:16:10 -0700", "msg_from": "Wyatt Alt <wyatt.alt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] Partition creation fails after dropping a column and adding\n a partial index" }, { "msg_contents": "On Mon, Oct 28, 2019 at 09:00:24PM -0700, Wyatt Alt wrote:\n> I think this demonstrates a bug, tested in 11.5:\n> https://gist.github.com/wkalt/a298fe82c564668c803b3537561e67a0\n\nIf this source goes away, then we would lose it. 
It is always better\nto copy directly the example in the message sent to the mailing lists,\nand here it is:\ncreate table demo (\n id int,\n useless int,\n d timestamp,\n b boolean\n) partition by range (id, d);\nalter table demo drop column useless;\n-- only seems to cause failure when it's a partial index on b.\ncreate index on demo(b) where b = 't';\ncreate table demo_1_20191031 partition of demo for values from (1,\n'2019-10-31') to (1, '2019-11-01');\n\n> The same script succeeds if the index on line 11 is either dropped, made to\n> be non-partial on b, or shifted to a different column (the others are used\n> in the partitioning; maybe significant).\n> \n> This seems somewhat related to\n> https://www.postgresql.org/message-id/flat/CAFjFpRc0hqO5hc-%3DFNePygo9j8WTtOvvmysesnN8bfkp3vxHPQ%40mail.gmail.com#00ca695e6c71834622a6e42323f5558a\n\nYes, something looks wrong with that. I have not looked at it in\ndetails yet though. I'll see about that tomorrow.\n--\nMichael", "msg_date": "Tue, 29 Oct 2019 13:16:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] Partition creation fails after dropping a column and\n adding a partial index" }, { "msg_contents": "On Tue, Oct 29, 2019 at 01:16:58PM +0900, Michael Paquier wrote:\n> Yes, something looks wrong with that. I have not looked at it in\n> details yet though. I'll see about that tomorrow.\n\nSo.. When building the attribute map for a cloned index (with either\nLIKE during the transformation or for partition indexes), we store\neach attribute number with 0 used for dropped columns. Unfortunately,\nif you look at the way the attribute map is built in this case the\ncode correctly generates the mapping with convert_tuples_by_name_map.\nBut, the length of the mapping used is incorrect as this makes use of \nthe number of attributes for the newly-created child relation, and not\nthe parent which includes the dropped column in its count. 
So the\nanswer is simply to use the parent as reference for the mapping\nlength.\n\nThe patch is rather simple as per the attached, with extended\nregression tests included. I have not checked on back-branches yet,\nbut that's visibly wrong since 8b08f7d down to v11 (will do that when\nback-patching).\n\nThere could be a point in changing convert_tuples_by_name_map & co so\nas they return the length of the map on top of the map to avoid such\nmistakes in the future. That's a more invasive patch not really\nadapted for a back-patch, but we could do that on HEAD once this bug\nis fixed. I have also checked other calls of this API and the\nhandling is done correctly.\n\nWyatt, what do you think?\n--\nMichael", "msg_date": "Thu, 31 Oct 2019 13:45:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] Partition creation fails after dropping a column and\n adding a partial index" }, { "msg_contents": "On Thu, Oct 31, 2019 at 9:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Oct 29, 2019 at 01:16:58PM +0900, Michael Paquier wrote:\n> > Yes, something looks wrong with that. I have not looked at it in\n> > details yet though. I'll see about that tomorrow.\n>\n> So.. When building the attribute map for a cloned index (with either\n> LIKE during the transformation or for partition indexes), we store\n> each attribute number with 0 used for dropped columns. Unfortunately,\n> if you look at the way the attribute map is built in this case the\n> code correctly generates the mapping with convert_tuples_by_name_map.\n> But, the length of the mapping used is incorrect as this makes use of\n> the number of attributes for the newly-created child relation, and not\n> the parent which includes the dropped column in its count. So the\n> answer is simply to use the parent as reference for the mapping\n> length.\n>\n> The patch is rather simple as per the attached, with extended\n> regression tests included. 
I have not checked on back-branches yet,\n> but that's visibly wrong since 8b08f7d down to v11 (will do that when\n> back-patching).\n>\n> There could be a point in changing convert_tuples_by_name_map & co so\n> as they return the length of the map on top of the map to avoid such\n> mistakes in the future. That's a more invasive patch not really\n> adapted for a back-patch, but we could do that on HEAD once this bug\n> is fixed. I have also checked other calls of this API and the\n> handling is done correctly.\n>\n> The patch works for me on master and on 12. I have rebased the patch for\nVersion 11.\n\n\n> Wyatt, what do you think?\n> --\n> Michael\n>\n\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 31 Oct 2019 19:54:25 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Partition creation fails after dropping a column and adding\n a partial index" }, { "msg_contents": "On Thu, Oct 31, 2019 at 1:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Oct 29, 2019 at 01:16:58PM +0900, Michael Paquier wrote:\n> > Yes, something looks wrong with that. I have not looked at it in\n> > details yet though. I'll see about that tomorrow.\n>\n> So.. When building the attribute map for a cloned index (with either\n> LIKE during the transformation or for partition indexes), we store\n> each attribute number with 0 used for dropped columns. Unfortunately,\n> if you look at the way the attribute map is built in this case the\n> code correctly generates the mapping with convert_tuples_by_name_map.\n> But, the length of the mapping used is incorrect as this makes use of\n> the number of attributes for the newly-created child relation, and not\n> the parent which includes the dropped column in its count. So the\n> answer is simply to use the parent as reference for the mapping\n> length.\n>\n> The patch is rather simple as per the attached, with extended\n> regression tests included. 
I have not checked on back-branches yet,\n> but that's visibly wrong since 8b08f7d down to v11 (will do that when\n> back-patching).\n\nThe patch looks correct and applies to both v12 and v11.\n\n> There could be a point in changing convert_tuples_by_name_map & co so\n> as they return the length of the map on top of the map to avoid such\n> mistakes in the future. That's a more invasive patch not really\n> adapted for a back-patch, but we could do that on HEAD once this bug\n> is fixed. I have also checked other calls of this API and the\n> handling is done correctly.\n\nI've been bitten by this logical error when deciding what length to\nuse for the map, so seems like a good idea.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 1 Nov 2019 09:58:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Partition creation fails after dropping a column and adding\n a partial index" }, { "msg_contents": "On Fri, Nov 01, 2019 at 09:58:26AM +0900, Amit Langote wrote:\n> On Thu, Oct 31, 2019 at 1:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> The patch is rather simple as per the attached, with extended\n>> regression tests included. I have not checked on back-branches yet,\n>> but that's visibly wrong since 8b08f7d down to v11 (will do that when\n>> back-patching).\n> \n> The patch looks correct and applies to both v12 and v11.\n\nThanks for the review, committed down to v11. The version for v11 had\na couple of conflicts actually.\n\n>> There could be a point in changing convert_tuples_by_name_map & co so\n>> as they return the length of the map on top of the map to avoid such\n>> mistakes in the future. That's a more invasive patch not really\n>> adapted for a back-patch, but we could do that on HEAD once this bug\n>> is fixed. 
I have also checked other calls of this API and the\n>> handling is done correctly.\n> \n> I've been bitten by this logical error when deciding what length to\n> use for the map, so seems like a good idea.\n\nOkay, let's see about that then.\n--\nMichael", "msg_date": "Sat, 2 Nov 2019 14:20:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] Partition creation fails after dropping a column and\n adding a partial index" } ]
[ { "msg_contents": "The cache_plan argument to ri_PlanCheck has not been used since\ne8c9fd5fdf768323911f7088e8287f63b513c3c6. I propose to remove it.\n\nThat commit said \"I left it alone in case there is any future need for \nit\" but there hasn't been a need in 7 years, and I find it confusing to \nhave an unused function argument without a clear purpose. It would \ntrivial to put it back if needed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 29 Oct 2019 10:21:32 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove unused function argument" }, { "msg_contents": "On Tue, Oct 29, 2019 at 2:51 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> The cache_plan argument to ri_PlanCheck has not been used since\n> e8c9fd5fdf768323911f7088e8287f63b513c3c6. I propose to remove it.\n>\n> That commit said \"I left it alone in case there is any future need for\n> it\" but there hasn't been a need in 7 years, and I find it confusing to\n> have an unused function argument without a clear purpose. It would\n> trivial to put it back if needed.\n>\nCode changes looks fine to me.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Oct 2019 11:21:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused function argument" }, { "msg_contents": "On 2019-10-30 06:51, vignesh C wrote:\n> On Tue, Oct 29, 2019 at 2:51 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> The cache_plan argument to ri_PlanCheck has not been used since\n>> e8c9fd5fdf768323911f7088e8287f63b513c3c6. 
I propose to remove it.\n>>\n>> That commit said \"I left it alone in case there is any future need for\n>> it\" but there hasn't been a need in 7 years, and I find it confusing to\n>> have an unused function argument without a clear purpose. It would\n>> trivial to put it back if needed.\n>>\n> Code changes looks fine to me.\n\npushed, thanks\n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 6 Nov 2019 08:20:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove unused function argument" } ]
[ { "msg_contents": "Continuing the discussion from [0] and [1], here is a patch that \nautomates the process of updating Unicode derived files. Summary:\n\n- Edit UNICODE_VERSION and/or CLDR_VERSION in src/Makefile.global.in\n- Run make update-unicode\n- Commit\n\nI have added that to the release checklist in RELEASE_NOTES.\n\nThis also includes the script used in [0] that was not committed at that \ntime. Other than that, this just refactors existing build code.\n\nOpen questions that are currently not handled consistently:\n\n- Should the downloaded files be listed in .gitignore?\n- Should the downloaded files be cleaned by make clean (or distclean or \nmaintainer-clean or none)?\n- Should the generated files be excluded from pgindent? Currently, the \ngenerated files will not pass pgindent unchanged, so that could cause \nannoying whitespace battles when these files are updated and re-indented \naround release time.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/bbb19114-af1e-513b-08a9-61272794bd5c%402ndquadrant.com\n[1]: \nhttps://www.postgresql.org/message-id/flat/77f69366-ca31-6437-079f-47fce69bae1b%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 29 Oct 2019 11:06:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Add support for automatically updating Unicode derived files" }, { "msg_contents": "On Tue, Oct 29, 2019 at 6:06 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> Continuing the discussion from [0] and [1], here is a patch that\n> automates the process of updating Unicode derived files. Summary:\n>\n> - Edit UNICODE_VERSION and/or CLDR_VERSION in src/Makefile.global.in\n> - Run make update-unicode\n> - Commit\n\nHi Peter,\n\nI gave \"make update-unicode\" a try. 
It's unclear to me what the state\nof the build tree should be when a maintainer runs this, so I'll just\nreport what happens when running naively (on MacOS).\n\nAfter only running configure, \"make update-unicode\" gives this error\nat normalization-check:\n\nld: library not found for -lpgcommon\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\n\nAfter commenting that out, the next command \"$(MAKE) -C\ncontrib/unaccent $@\" failed, seemingly because $(PYTHON) is empty\nunless --with-python was specified at configure time.\n\n> Open questions that are currently not handled consistently:\n>\n> - Should the downloaded files be listed in .gitignore?\n\nThese files are transient byproducts of a build, and we don't want\nthem committed, so they seem like a normal candidate for .gitignore.\n\n> - Should the downloaded files be cleaned by make clean (or distclean or\n> maintainer-clean or none)?\n\nIt seems one would want to make clean without removing these files,\nand maintainer clean is for removing things that are preserved in\ndistribution tarballs. So I would go with distclean.\n\n> - Should the generated files be excluded from pgindent? Currently, the\n> generated files will not pass pgindent unchanged, so that could cause\n> annoying whitespace battles when these files are updated and re-indented\n> around release time.\n\nI see what you mean in the norm table header. 
I think generated files\nshould not be pgindent'd, since creating them is already a consistent,\nmechanical process, and their presentation is not as important as\nother code.\n\nOther comments:\n\n+print \"/* generated by\nsrc/common/unicode/generate-unicode_combining_table.pl, do not edit\n*/\\n\\n\";\n\nI would print out the full boilerplate like for other generated headers.\n\nLastly, src/common/unicode/README is outdated (and possibly no longer\nuseful at all?).\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Dec 2019 17:48:39 -0500", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "On 2019-12-19 23:48, John Naylor wrote:\n> I gave \"make update-unicode\" a try. It's unclear to me what the state\n> of the build tree should be when a maintainer runs this, so I'll just\n> report what happens when running naively (on MacOS).\n\nYeah, that wasn't fully thought through, it appears.\n\n> After only running configure, \"make update-unicode\" gives this error\n> at normalization-check:\n> \n> ld: library not found for -lpgcommon\n> clang: error: linker command failed with exit code 1 (use -v to see invocation)\n\nFixed by adding more make dependencies.\n\n> After commenting that out, the next command \"$(MAKE) -C\n> contrib/unaccent $@\" failed, seemingly because $(PYTHON) is empty\n> unless --with-python was specified at configure time.\n\nI'm not sure whether that's worth addressing.\n\n>> Open questions that are currently not handled consistently:\n>>\n>> - Should the downloaded files be listed in .gitignore?\n> \n> These files are transient byproducts of a build, and we don't want\n> them committed, so they seem like a normal candidate for .gitignore.\n\nOK done\n\n>> - Should the downloaded files be cleaned by make clean (or 
distclean or\n>> maintainer-clean or none)?\n> \n> It seems one would want to make clean without removing these files,\n> and maintainer clean is for removing things that are preserved in\n> distribution tarballs. So I would go with distclean.\n\nalso done\n\n>> - Should the generated files be excluded from pgindent? Currently, the\n>> generated files will not pass pgindent unchanged, so that could cause\n>> annoying whitespace battles when these files are updated and re-indented\n>> around release time.\n> \n> I see what you mean in the norm table header. I think generated files\n> should not be pgindent'd, since creating them is already a consistent,\n> mechanical process, and their presentation is not as important as\n> other code.\n\nI've left it alone for now because the little indentation problem \ncurrently present might actually go away with my Unicode normalization \nsupport patch.\n\n> Other comments:\n> \n> +print \"/* generated by\n> src/common/unicode/generate-unicode_combining_table.pl, do not edit\n> */\\n\\n\";\n> \n> I would print out the full boilerplate like for other generated headers.\n\nHmm, you are probably comparing with \nsrc/common/unicode/generate-unicode_norm_table.pl, but other file \ngenerating scripts around the tree print out a small header in the style \nthat I have. I'd rather adjust the output of \ngenerate-unicode_norm_table.pl to match those. 
(It's also not quite \ncorrect to make copyright claims about automatically generated output.)\n\n> Lastly, src/common/unicode/README is outdated (and possibly no longer\n> useful at all?).\n\nupdated\n\nnew patch attached\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 26 Dec 2019 19:38:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "On Thu, Dec 26, 2019 at 12:39 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-12-19 23:48, John Naylor wrote:\n> > I would print out the full boilerplate like for other generated headers.\n>\n> Hmm, you are probably comparing with\n> src/common/unicode/generate-unicode_norm_table.pl, but other file\n> generating scripts around the tree print out a small header in the style\n> that I have. I'd rather adjust the output of\n> generate-unicode_norm_table.pl to match those. (It's also not quite\n> correct to make copyright claims about automatically generated output.)\n\nHmm, the scripts I'm most familiar with have full headers. 
Your point\nabout copyright makes sense, and using smaller file headers would aid\nreadability of the scripts, but I also see how others may feel\ndifferently.\n\nv2 looks good to me, marked ready for committer.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jan 2020 08:13:53 -0600", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "On 2020-01-03 15:13, John Naylor wrote:\n> On Thu, Dec 26, 2019 at 12:39 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> On 2019-12-19 23:48, John Naylor wrote:\n>>> I would print out the full boilerplate like for other generated headers.\n>>\n>> Hmm, you are probably comparing with\n>> src/common/unicode/generate-unicode_norm_table.pl, but other file\n>> generating scripts around the tree print out a small header in the style\n>> that I have. I'd rather adjust the output of\n>> generate-unicode_norm_table.pl to match those. (It's also not quite\n>> correct to make copyright claims about automatically generated output.)\n> \n> Hmm, the scripts I'm most familiar with have full headers. 
Your point\n> about copyright makes sense, and using smaller file headers would aid\n> readability of the scripts, but I also see how others may feel\n> differently.\n> \n> v2 looks good to me, marked ready for committer.\n\nCommitted, thanks.\n\nI have added a little tweak so that it works also without --with-python, \nto avoid gratuitous annoyances.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jan 2020 10:16:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Committed, thanks.\n\nThis patch is making src/tools/pginclude/headerscheck unhappy:\n\n./src/include/common/unicode_combining_table.h:3: error: array type has incomplete element type\n\nI guess that header needs another #include, or else you need to\nmove some declarations around.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Jan 2020 19:37:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "On 2020-01-15 01:37, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Committed, thanks.\n> \n> This patch is making src/tools/pginclude/headerscheck unhappy:\n> \n> ./src/include/common/unicode_combining_table.h:3: error: array type has incomplete element type\n> \n> I guess that header needs another #include, or else you need to\n> move some declarations around.\n\nHmm, this file is only meant to be included inside one particular \nfunction. Making it standalone includable would seem to be unnecessary. 
\n What should we do?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Jan 2020 09:59:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-01-15 01:37, Tom Lane wrote:\n>> This patch is making src/tools/pginclude/headerscheck unhappy:\n>> ./src/include/common/unicode_combining_table.h:3: error: array type has incomplete element type\n\n> Hmm, this file is only meant to be included inside one particular \n> function. Making it standalone includable would seem to be unnecessary. \n> What should we do?\n\nWell, we could make it a documented exception in headerscheck and\ncpluspluscheck.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 Jan 2020 10:43:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "On 2020-01-20 16:43, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-01-15 01:37, Tom Lane wrote:\n>>> This patch is making src/tools/pginclude/headerscheck unhappy:\n>>> ./src/include/common/unicode_combining_table.h:3: error: array type has incomplete element type\n> \n>> Hmm, this file is only meant to be included inside one particular\n>> function. 
Making it standalone includable would seem to be unnecessary.\n>> What should we do?\n> \n> Well, we could make it a documented exception in headerscheck and\n> cpluspluscheck.\n\nOK, done.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Jan 2020 12:25:06 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add support for automatically updating Unicode derived files" }, { "msg_contents": "I have committed the first Unicode data update using this new \"make \nupdate-unicode\" facility.\n\nCLDR is released regularly every 6 months, so around this time every \nyear would be the appropriate time to pull in the latest updates in \npreparation for our own release.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Apr 2020 10:01:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add support for automatically updating Unicode derived files" } ]
[ { "msg_contents": "When joining tables with USING, the listed columns are merged and no\nlonger belong to either the left or the right side.  That means they can\nno longer be qualified which can often be an inconvenience.\n\n\nSELECT a.x, b.y, z FROM a INNER JOIN b USING (z);\n\n\nThe SQL standard provides a workaround for this by allowing an alias on\nthe join clause. (<join correlation name> in section 7.10)\n\n\nSELECT j.x, j.y, j.z FROM a INNER JOIN b USING (z) AS j;\n\n\nAttached is a patch (based on 517bf2d910) adding this feature.\n\n-- \n\nVik Fearing", "msg_date": "Tue, 29 Oct 2019 11:47:48 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Join Correlation Name" }, { "msg_contents": "On 2019-10-29 11:47, Vik Fearing wrote:\n> When joining tables with USING, the listed columns are merged and no\n> longer belong to either the left or the right side.  That means they can\n> no longer be qualified which can often be an inconvenience.\n> \n> \n> SELECT a.x, b.y, z FROM a INNER JOIN b USING (z);\n> \n> \n> The SQL standard provides a workaround for this by allowing an alias on\n> the join clause. 
(<join correlation name> in section 7.10)\n> \n> \n> SELECT j.x, j.y, j.z FROM a INNER JOIN b USING (z) AS j;\n> \n> \n> Attached is a patch (based on 517bf2d910) adding this feature.\n\nIs this the same as https://commitfest.postgresql.org/25/2158/ ?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 29 Oct 2019 12:05:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Join Correlation Name" }, { "msg_contents": "On Tue, 29 Oct 2019 at 07:05, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-10-29 11:47, Vik Fearing wrote:\n> > When joining tables with USING, the listed columns are merged and no\n> > longer belong to either the left or the right side. That means they can\n> > no longer be qualified which can often be an inconvenience.\n> >\n> >\n> > SELECT a.x, b.y, z FROM a INNER JOIN b USING (z);\n>\n\nI'm confused. As far as I can tell you can qualify the join columns if you\nwant:\n\nodyssey=> select exam_id, sitting_id, room_id, exam_exam_sitting.exam_id\nfrom exam_exam_sitting join exam_exam_sitting_room using (exam_id,\nsitting_id) limit 5;\n exam_id | sitting_id | room_id | exam_id\n---------+------------+---------+---------\n 22235 | 23235 | 22113 | 22235\n 22237 | 23237 | 22113 | 22237\n 23101 | 21101 | 22215 | 23101\n 23101 | 21101 | 22216 | 23101\n 23101 | 21101 | 22224 | 23101\n(5 rows)\n\nodyssey=>\n\nIn the case of a non-inner join it can make a difference whether you use\nthe left side, right side, or non-qualified version. If you need to refer\nspecifically to the non-qualified version in a different part of the query,\nyou can give an alias to the result of the join:\n\n... (a join b using (z)) as t ...\n\n> The SQL standard provides a workaround for this by allowing an alias on\n> > the join clause. 
(<join correlation name> in section 7.10)\n> >\n> >\n> > SELECT j.x, j.y, j.z FROM a INNER JOIN b USING (z) AS j;\n>\n\nWhat I would like is to be able to use both USING and ON in the same join;\nI more often than I would like find myself saying things like ON ((l.a,\nl.b, lc.) = (r.a, r.b, r.c) AND l.ab = r.bb). Also I would like to be able\nto use and rename differently-named fields in a USING clause, something\nlike USING (a, b, c=d as f).\n\nA bit of thought convinces me that these are both essentially syntactic\nchanges; I think it's already possible to represent these in the existing\ninternal representation, they just aren't supported by the parser.", "msg_date": "Tue, 29 Oct 2019 07:24:28 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Join Correlation Name" }, { "msg_contents": "On 29/10/2019 12:05, Peter Eisentraut wrote:\n> On 2019-10-29 11:47, Vik Fearing wrote:\n>> When joining tables with USING, the listed columns are merged and no\n>> longer belong to either the left or the right side.  That means they can\n>> no longer be qualified which can often be an inconvenience.\n>>\n>>\n>> SELECT a.x, b.y, z FROM a INNER JOIN b USING (z);\n>>\n>>\n>> The SQL standard provides a workaround for this by allowing an alias on\n>> the join clause. (<join correlation name> in section 7.10)\n>>\n>>\n>> SELECT j.x, j.y, j.z FROM a INNER JOIN b USING (z) AS j;\n>>\n>>\n>> Attached is a patch (based on 517bf2d910) adding this feature.\n>\n> Is this the same as https://commitfest.postgresql.org/25/2158/ ?\n\n\nCrap.  
Yes, it is.\n\n\n\n", "msg_date": "Tue, 29 Oct 2019 12:51:26 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Join Correlation Name" }, { "msg_contents": "On 29/10/2019 12:24, Isaac Morland wrote:\n> If you need to refer specifically to the non-qualified version in a\n> different part of the query, you can give an alias to the result of\n> the join:\n>\n> ... (a join b using (z)) as t ...\n\n\nYes, this is about having standard SQL syntax for that.\n\n\n\n", "msg_date": "Tue, 29 Oct 2019 12:55:46 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Join Correlation Name" }, { "msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 29/10/2019 12:24, Isaac Morland wrote:\n>> If you need to refer specifically to the non-qualified version in a\n>> different part of the query, you can give an alias to the result of\n>> the join:\n>> ... (a join b using (z)) as t ...\n\n> Yes, this is about having standard SQL syntax for that.\n\nPlease present an argument why this proposal is standard SQL syntax.\nI see no support for it in the spec. AFAICS this proposal is just an\ninconsistent wart; it makes it possible to write\n\n\t(a join b using (z) as q) as t\n\nand then what do you do? Moreover, why should you be able to\nattach an alias to a USING join but not other sorts of joins?\n\nAfter digging around in the spec for awhile, it seems like\nthere actually isn't any way to attach an alias to a join\nper spec.\n\nAccording to SQL:2011 7.6 <table reference>, you can attach an\nAS clause to every variant of <table primary> *except* the\n<parenthesized joined table> variant. 
And there's nothing\nabout AS clauses in 7.7 <joined table>, which is where it would\nhave to be mentioned if this proposal were spec-compliant.\n\nWhat our grammar effectively does is to allow an AS clause to be\nattached to <parenthesized joined table> as well, which seems\nlike the most natural thing to do if the committee ever decide\nto rectify the shortcoming.\n\nAnyway, we already have the functionality covered, and I don't\nthink we need another non-spec, non-orthogonal way to do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Oct 2019 10:20:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Join Correlation Name" }, { "msg_contents": "On 29/10/2019 15:20, Tom Lane wrote:\n> Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n>> On 29/10/2019 12:24, Isaac Morland wrote:\n>>> If you need to refer specifically to the non-qualified version in a\n>>> different part of the query, you can give an alias to the result of\n>>> the join:\n>>> ... (a join b using (z)) as t ...\n>> Yes, this is about having standard SQL syntax for that.\n> Please present an argument why this proposal is standard SQL syntax.\n\n\nIs quoting the spec good enough?\n\nSQL:2016 Part 2 Foundation Section 7.10 <joined table>:\n\n\n<join specification> ::=\n    <join condition>\n    | <named columns join>\n\n<join condition> ::=\n    ON <search condition>\n\n<named columns join> ::=\n    USING <left paren> <join column list> <right paren> [ AS <join\ncorrelation name> ]\n\n<join correlation name> ::=\n    <correlation name>\n\n\n> I see no support for it in the spec. AFAICS this proposal is just an\n> inconsistent wart; it makes it possible to write\n>\n> \t(a join b using (z) as q) as t\n>\n> and then what do you do? 
Moreover, why should you be able to\n> attach an alias to a USING join but not other sorts of joins?\n\n\nI think possibly what the spec says (and that neither my patch nor\nPeter's implements) is assigning the alias just to the <join column\nlist>.  So my original example query should actually be:\n\n\nSELECT a.x, b.y, j.z FROM a INNER JOIN b USING (z) AS j;\n\n\n> After digging around in the spec for awhile, it seems like\n> there actually isn't any way to attach an alias to a join\n> per spec.\n>\n> According to SQL:2011 7.6 <table reference>, you can attach an\n> AS clause to every variant of <table primary> *except* the\n> <parenthesized joined table> variant. And there's nothing\n> about AS clauses in 7.7 <joined table>, which is where it would\n> have to be mentioned if this proposal were spec-compliant.\n>\n> What our grammar effectively does is to allow an AS clause to be\n> attached to <parenthesized joined table> as well, which seems\n> like the most natural thing to do if the committee ever decide\n> to rectify the shortcoming.\n>\n> Anyway, we already have the functionality covered, and I don't\n> think we need another non-spec, non-orthogonal way to do it.\n\n\nI think the issue here is you're looking at SQL:2011 whereas I am\nlooking at SQL:2016.\n\n-- \n\nVik Fearing\n\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:59:40 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Join Correlation Name" }, { "msg_contents": "Bonjour Vik,\n\n> Is quoting the spec good enough?\n> SQL:2016 Part 2 Foundation Section 7.10 <joined table>:\n\nAh, this is the one information I did not have when reviewing Peter's \npatch.\n\n> <named columns join> ::=\n>     USING <left paren> <join column list> <right paren> [ AS <join correlation name> ]\n>\n> <join correlation name> ::=\n>     <correlation name>\n>\n> I think possibly what the spec says (and that neither my patch nor\n> Peter's implements) is assigning the alias just 
to the <join column\n> list>. \n\nI think you are right, the alias is only on the identical columns.\n\nIt solves the issue I raised about inaccessible attributes, and explains \nwhy it is only available with USING and no other join variants.\n\n> So my original example query should actually be:\n>\n> SELECT a.x, b.y, j.z FROM a INNER JOIN b USING (z) AS j;\n\nYep, only z should be in j, it is really just about the USING clause.\n\n-- \nFabien.", "msg_date": "Wed, 30 Oct 2019 09:04:12 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Join Correlation Name" }, { "msg_contents": "On 30/10/2019 09:04, Fabien COELHO wrote:\n>\n>> I think possibly what the spec says (and that neither my patch nor\n>> Peter's implements) is assigning the alias just to the <join column\n>> list>. \n>\n> I think you are right, the alias is only on the identical columns.\n>\n> It solves the issue I raised about inaccessible attributes, and\n> explains why it is only available with USING and no other join variants.\n>\n>> So my original example query should actually be:\n>>\n>> SELECT a.x, b.y, j.z FROM a INNER JOIN b USING (z) AS j;\n>\n> Yep, only z should be in j, it is really just about the USING clause.\n\n\nMy reading of SQL:2016-2 7.10 SR 11.a convinces me that this is the case.\n\n\nMy reading of transformFromClauseItem() convinces me that this is way\nover my head and I have to abandon it. :-(\n\n\n\n", "msg_date": "Fri, 1 Nov 2019 23:08:35 +0100", "msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Join Correlation Name" } ]
[ { "msg_contents": "Hi.\n\nThis is not clear from the docs, so I have asked on IRC too.\n\nFrom the docs: https://www.postgresql.org/docs/current/trigger-definition.html\nIn the case of INSTEAD OF triggers, the possibly-modified row returned by each trigger becomes the input to the next trigger\n\nI modify the OLD row, thus I expect to get the modified version when running the next query: \n\n WITH t1 AS( delete from abc returning *)\n select * from t1;\n\nfiddle: https://dbfiddle.uk/?rdbms=postgres_12&fiddle=637730305f66bf531794edb09a462c95\n\n> https://www.postgresql.org/docs/current/trigger-definition.html\nA row-level INSTEAD OF trigger should either return NULL to indicate that it did not modify any data from the view's underlying base tables,\nor it should return the view row that was passed in (the NEW row for INSERT and UPDATE operations, or the OLD row for DELETE operations).\nA nonnull return value is used to signal that the trigger performed the necessary data modifications in the view.\nThis will cause the count of the number of rows affected by the command to be incremented. For INSERT and UPDATE operations, the trigger may\nmodify the NEW row before returning it. This will change the data returned by INSERT RETURNING or UPDATE RETURNING,\nand is useful when the view will not show exactly the same data that was provided.\n\nBut I still do not understand. The docs do not explicitly prohibit modification of OLD and have no examples for the DELETE RETURNING case.\n\nSo I want to ask to clarify the docs a bit.\nIf this is prohibited, why is it prohibited? Was there any discussion on this?\nIf it is not prohibited, is it simply not implemented for DELETE RETURNING queries? If so, is it left for later?\n\nI have the following use case.\nI am implementing bi-temporal tables. The table has the columns: id, app_period, value\nFor example I have the following data: 7, '[2019-01-01, 2020-01-01)', 130\nYou can imagine this row as existing for each day of the year.\nNow I want to delete this value for the month of May. 
I set up a special variable for the period: '[2019-05-01,2019-06-01)' and then delete:\n\n select app_period( '[2019-05-01,2019-06-01)' );\n WITH t1 AS( delete from abc returning *)\n select * from t1;\n\nThe algorithm of deletion is as follows:\n1. Deactivate the target row\n 7, '[2019-01-01, 2020-01-01)', 130\n2. If the target row has a wider app_period then we insert that data back:\n NOT '[2019-05-01,2019-06-01)' @> '[2019-01-01, 2020-01-01)'\n INSERT INTO abc ( id, app_period, value ) values \n ( 7, '[2019-01-01,2019-05-01)', 130 ),\n ( 7, '[2019-06-01,2020-01-01)', 130 ),\n3. OLD.app_period = OLD.app_period * app_period(); \n '[2019-01-01, 2020-01-01)' * '[2019-05-01,2019-06-01)' --> '[2019-05-01,2019-06-01)'\n\nBecause the value 130 is deleted only from the specified period, I expect the following result for the query above:\n ( 7, '[2019-05-01,2019-06-01)', 130 )\n\nBut even though OLD was modified, the actual result is:\n ( 7, '[2019-01-01,2020-01-01)', 130 )\nYou can see that this is the original data.\n\nSo, does INSTEAD OF DELETE support modification of the row?\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Tue, 29 Oct 2019 17:54:36 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "On Tue, Oct 29, 2019 at 05:54:36PM +0200, Eugen Konkov wrote:\n> Hi.\n> \n> This is not clear from the docs, so I have asked on IRC too.\n> \n> From the docs: https://www.postgresql.org/docs/current/trigger-definition.html\n> In the case of INSTEAD OF triggers, the possibly-modified row returned by each trigger becomes the input to the next trigger\n> \n> I modify the OLD row, thus I expect to get the modified version when running the next query: \n> \n> WITH t1 AS( delete from abc returning *)\n> select * from t1;\n> \n> fiddle: https://dbfiddle.uk/?rdbms=postgres_12&fiddle=637730305f66bf531794edb09a462c95\n\nWow, that is a very nice way to present the queries.\n\n> > 
https://www.postgresql.org/docs/current/trigger-definition.html\n> A row-level INSTEAD OF trigger should either return NULL to indicate that it did not modify any data from the view's underlying base tables,\n> or it should return the view row that was passed in (the NEW row for INSERT and UPDATE operations, or the OLD row for DELETE operations).\n> A nonnull return value is used to signal that the trigger performed the necessary data modifications in the view.\n> This will cause the count of the number of rows affected by the command to be incremented. For INSERT and UPDATE operations, the trigger may\n> modify the NEW row before returning it. This will change the data returned by INSERT RETURNING or UPDATE RETURNING,\n> and is useful when the view will not show exactly the same data that was provided.\n> \n> But I still do not understand. The docs do not explicitly prohibit modification of OLD and have no examples for the DELETE RETURNING case.\n\nI looked in the CREATE TRIGGER manual page and found this:\n\n\thttps://www.postgresql.org/docs/12/sql-createtrigger.html\n\tIf the trigger fires before or instead of the event, the trigger\n\tcan skip the operation for the current row, or change the row\n\tbeing inserted (for INSERT and UPDATE operations only).\n\nI don't see the \"(for INSERT and UPDATE operations only)\" language in\nthe main trigger documentation,\nhttps://www.postgresql.org/docs/current/trigger-definition.html. I have\nwritten the attached patch to fix that. Does that help?\n\nAs far as allowing DELETE to modify the trigger row for RETURNING, I am\nnot sure how much work it would take to allow that, but it seems like it\nis a valid request, and if so, I can add it to the TODO list.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +", "msg_date": "Wed, 6 Nov 2019 13:59:35 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "> I looked in the CREATE TRIGGER manual page and found this:\n\n> https://www.postgresql.org/docs/12/sql-createtrigger.html\n> If the trigger fires before or instead of the event, the trigger\n> can skip the operation for the current row, or change the row\n> being inserted (for INSERT and UPDATE operations only).\n\n> I don't see the \"(for INSERT and UPDATE operations only)\" language in\n> the main trigger documentation,\n> https://www.postgresql.org/docs/current/trigger-definition.html. I have\n> written the attached patch to fix that. Does that help?\n\nNo. If we document that PG does not allow to modify OLD at instead\nof trigger, then we cannot implement that. Probably we can put a note\nthat \"currently modification of the trigger row for RETURNING is not\nimplemented\"\n\n> As far as allowing DELETE to modify the trigger row for RETURNING, I am\n> not sure how much work it would take to allow that, but it seems like it\n> is a valid request, and if so, I can add it to the TODO list.\n\nYes, Add please into TODO the feature to \"allowing DELETE to modify the trigger row\nfor RETURNING\". 
Because, as I have described in my first letter, without\nthis the RETURNING rows **do not correspond to the actually deleted data**\n\nThank you.\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Thu, 7 Nov 2019 11:20:32 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "Hello Eugen,\n\nThursday, November 7, 2019, 11:20:32 AM, you wrote:\n\n>> I looked in the CREATE TRIGGER manual page and found this:\n\n>> https://www.postgresql.org/docs/12/sql-createtrigger.html\n>> If the trigger fires before or instead of the event, the trigger\n>> can skip the operation for the current row, or change the row\n>> being inserted (for INSERT and UPDATE operations only).\n\n>> I don't see the \"(for INSERT and UPDATE operations only)\" language in\n>> the main trigger documentation,\n>> https://www.postgresql.org/docs/current/trigger-definition.html. I have\n>> written the attached patch to fix that. Does that help?\n\n> No. If we document that PG does not allow to modify OLD at instead\n> of trigger, then we cannot implement that. Probably we can put a note\n> that \"currently modification of the trigger row for RETURNING is not\n> implemented\"\n\nsorry, typo. Please read:\n\"currently modification of the trigger row for DELETE RETURNING is not implemented\"\n\n\n>> As far as allowing DELETE to modify the trigger row for RETURNING, I am\n>> not sure how much work it would take to allow that, but it seems like it\n>> is a valid request, and if so, I can add it to the TODO list.\n\n> Yes, Add please into TODO the feature to \"allowing DELETE to modify the trigger row\n> for RETURNING\". 
Becuase, as I have described at first letter, without\n> this the RETURNING rows **does not correspond actually deleted data**\n\n> Thank you.\n\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Thu, 7 Nov 2019 11:24:29 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "On Thu, Nov 7, 2019 at 11:24:29AM +0200, Eugen Konkov wrote:\n> Hello Eugen,\n> \n> Thursday, November 7, 2019, 11:20:32 AM, you wrote:\n> \n> >> I looked in the CREATE TRIGGER manual page and found this:\n> \n> >> https://www.postgresql.org/docs/12/sql-createtrigger.html\n> >> If the trigger fires before or instead of the event, the trigger\n> >> can skip the operation for the current row, or change the row\n> >> being inserted (for INSERT and UPDATE operations only).\n> \n> >> I don't see the \"(for INSERT and UPDATE operations only)\" language in\n> >> the main trigger documentation,\n> >> https://www.postgresql.org/docs/current/trigger-definition.html. I have\n> >> written the attached patch to fix that. Does that help?\n> \n> > No. If we document that PG does not allow to modify OLD at instead\n> > of trigger, the we can not implement that. Probably we can put note\n> > that \"currently modification of the trigger row for RETURNING is not\n> > implemented\"\n> \n> sorry, typo. 
Please read:\n> \"currently modification of the trigger row for DELETE RETURNING is notimplemented\"\n\nIn looking at the existing docs, the bullet above the quoted text says:\n\n\tFor row-level INSERT and UPDATE triggers only, the returned row becomes\n\t ----\n\tthe row that will be inserted or will replace the row being updated.\n\tThis allows the trigger function to modify the row being inserted or\n\tupdated.\n\nFirst, notice \"only\", which was missing from the later sentence:\n\n\tFor <command>INSERT</command> and <command>UPDATE</command>\n\toperations [only], the trigger may modify the\n\t<varname>NEW</varname> row before returning it.\n\nwhich I have now added with my applied patch to all supported releases. \n\nThe major use of modifying NEW is to modify the data that goes into the\ndatabase, and its use to modify data seen by later executed triggers, or\nby RETURNING, is only a side-effect of its primary purpose. Therefore,\nit is not surprising that, since DELETE does not modify any data, just\nremoves it, that the modification of OLD to appear in later triggers or\nRETURNING is not supported.\n\n> >> As far as allowing DELETE to modify the trigger row for RETURNING, I am\n> >> not sure how much work it would take to allow that, but it seems like it\n> >> is a valid requite, and if so, I can add it to the TODO list.\n> \n> > Yes, Add please into TODO the feature to \"allowing DELETE to modify the trigger row\n> > for RETURNING\". Becuase, as I have described at first letter, without\n> > this the RETURNING rows **does not correspond actually deleted data**\n> \n> > Thank you.\n\nI have added a TODO item:\n\n\tAllow DELETE triggers to modify rows, for use by RETURNING \n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 Nov 2019 16:26:55 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "On Thu, Nov 7, 2019 at 04:26:55PM -0500, Bruce Momjian wrote:\n> On Thu, Nov 7, 2019 at 11:24:29AM +0200, Eugen Konkov wrote:\n> > >> As far as allowing DELETE to modify the trigger row for RETURNING, I am\n> > >> not sure how much work it would take to allow that, but it seems like it\n> > >> is a valid requite, and if so, I can add it to the TODO list.\n> > \n> > > Yes, Add please into TODO the feature to \"allowing DELETE to modify the trigger row\n> > > for RETURNING\". Becuase, as I have described at first letter, without\n> > > this the RETURNING rows **does not correspond actually deleted data**\n> > \n> > > Thank you.\n> \n> I have added a TODO item:\n> \n> \tAllow DELETE triggers to modify rows, for use by RETURNING \n\nThinking some more on this, I now don't think a TODO makes sense, so I\nhave removed it.\n\nTriggers are designed to check and modify input data, and since DELETE\nhas no input data, it makes no sense. In the attached SQL script, you\ncan see that only the BEFORE INSERT trigger fires, so there is no way\neven with INSERT to change what is passed after the write to RETURNING. \nWhat you can do is to modify the returning expression, which is what I\nhave done for the last query --- hopefully that will help you.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +", "msg_date": "Thu, 7 Nov 2019 17:28:18 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "Hello Bruce,\n\nFriday, November 8, 2019, 12:28:18 AM, you wrote:\n\n> On Thu, Nov 7, 2019 at 04:26:55PM -0500, Bruce Momjian wrote:\n>> On Thu, Nov 7, 2019 at 11:24:29AM +0200, Eugen Konkov wrote:\n>> > >> As far as allowing DELETE to modify the trigger row for RETURNING, I am\n>> > >> not sure how much work it would take to allow that, but it seems like it\n>> > >> is a valid requite, and if so, I can add it to the TODO list.\n>> > \n>> > > Yes, Add please into TODO the feature to \"allowing DELETE to modify the trigger row\n>> > > for RETURNING\". Becuase, as I have described at first letter, without\n>> > > this the RETURNING rows **does not correspond actually deleted data**\n>> > \n>> > > Thank you.\n>> \n>> I have added a TODO item:\n>> \n>> Allow DELETE triggers to modify rows, for use by RETURNING \n\n> Thinking some more on this, I now don't think a TODO makes sense, so I\n> have removed it.\n\n> Triggers are designed to check and modify input data, and since DELETE\n> has no input data, it makes no sense. In the attached SQL script, you\n> can see that only the BEFORE INSERT trigger fires, so there is no way\n> even with INSERT to change what is passed after the write to RETURNING.\n> What you can do is to modify the returning expression, which is what I\n> have done for the last query --- hopefully that will help you.\n\nYou lost my idea. 
First of all I am talking about views and an\nINSTEAD OF triggers.\n\nINSERT/UPDATE operation present which data is added into DB\nDELETE operation present which data is deleted from DB\n(in my case I am not deleted exact that data which matched by where.\nSee example below)\n\nThus INSTEAD OF INSERT/UPDATE triggers are designed to check and modify input data\neg. we can insert/update something different then incoming data (here\nwe are modifying NEW)\n\nThus INSTEAD OF DELETE triggers are designed to check and delete **output** data\neg. we can delete something different then underlaid data (here we are\nmodifying OLD)\n\nfor example, we have next data: 1 2 3 4 5 6 7 8\nit is not presented by eight rows, but instead it is presented as one\nrow with range data type: [1..8]\n\nWhen we insert data we will not get new row, we change current:\ninsert into table values ( 9 ) will result\n[1..9]\ninstead of\n[1..8]\n9\n\nSo lets look into INSTEAD OF DELETE trigger when we deleting\ndata:\ndelete from table where x in ( 5, 6, 7 );\nafter deleting this we should get:\n[1..4]\n[8..9]\n\nthus\nwith t1 as ( delete from table where x in ( 5, 6, 7 ) returning * )\nselect * from t1\nshould return:\n[5..7]\ninstead of\n[1..9]\nbecause we does not delete ALL [1..9], we just delete ONLY [5..7]\n\nThus I need to change matched row OLD.x from [1..9] to [5..7]\n\n\n\nPlease reread my first letter. 
There I describe more real life example\nwhen I am manipulating bi-temporal data.\n\nwhere some value exist at given period:\nid | app_period | value\n7 [2019-01-01, 2019-04-05) 207\n\nAnd I am deleting third month: [ 2019-03-01, 2019-04-01 )\nwith t1 as ( delete from table where app_period && [ 2019-03-01,\n2019-04-01 ) returning * )\nselect * from t1;\n7 [ 2019-03-01, 2019-04-01 ) 207\n\nselect * from table;\n7 [ 2019-01-01, 2019-03-01 ) 207\n7 [ 2019-04-01, 2019-04-05 ) 207\n\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Sat, 9 Nov 2019 14:05:02 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "Hello Eugen,\n\nSaturday, November 9, 2019, 2:05:02 PM, you wrote:\n\n> Hello Bruce,\n\n> Friday, November 8, 2019, 12:28:18 AM, you wrote:\n\n>> On Thu, Nov 7, 2019 at 04:26:55PM -0500, Bruce Momjian wrote:\n>>> On Thu, Nov 7, 2019 at 11:24:29AM +0200, Eugen Konkov wrote:\n>>> > >> As far as allowing DELETE to modify the trigger row for RETURNING, I am\n>>> > >> not sure how much work it would take to allow that, but it seems like it\n>>> > >> is a valid requite, and if so, I can add it to the TODO list.\n>>> > \n>>> > > Yes, Add please into TODO the feature to \"allowing DELETE to modify the trigger row\n>>> > > for RETURNING\". Becuase, as I have described at first letter, without\n>>> > > this the RETURNING rows **does not correspond actually deleted data**\n>>> > \n>>> > > Thank you.\n>>> \n>>> I have added a TODO item:\n>>> \n>>> Allow DELETE triggers to modify rows, for use by RETURNING \n\n>> Thinking some more on this, I now don't think a TODO makes sense, so I\n>> have removed it.\n\n>> Triggers are designed to check and modify input data, and since DELETE\n>> has no input data, it makes no sense. 
In the attached SQL script, you\n>> can see that only the BEFORE INSERT trigger fires, so there is no way\n>> even with INSERT to change what is passed after the write to RETURNING.\n>> What you can do is to modify the returning expression, which is what I\n>> have done for the last query --- hopefully that will help you.\n\n> You lost my idea. First of all I am talking about views and an\n> INSTEAD OF triggers.\n\n> INSERT/UPDATE operation present which data is added into DB\n> DELETE operation present which data is deleted from DB\n> (in my case I am not deleted exact that data which matched by where.\n> See example below)\n\n> Thus INSTEAD OF INSERT/UPDATE triggers are designed to check and modify input data\n> eg. we can insert/update something different then incoming data (here\n> we are modifying NEW)\n\n> Thus INSTEAD OF DELETE triggers are designed to check and delete **output** data\n> eg. we can delete something different then underlaid data (here we are\n> modifying OLD)\n\n> for example, we have next data: 1 2 3 4 5 6 7 8\n> it is not presented by eight rows, but instead it is presented as one\n> row with range data type: [1..8]\n\n> When we insert data we will not get new row, we change current:\n> insert into table values ( 9 ) will result\n> [1..9]\n> instead of\n> [1..8]\n> 9\n\n> So lets look into INSTEAD OF DELETE trigger when we deleting\n> data:\n> delete from table where x in ( 5, 6, 7 );\n> after deleting this we should get:\n> [1..4]\n> [8..9]\n\n> thus\n> with t1 as ( delete from table where x in ( 5, 6, 7 ) returning * )\n> select * from t1\n> should return:\n> [5..7]\n> instead of\n> [1..9]\n> because we does not delete ALL [1..9], we just delete ONLY [5..7]\n\n> Thus I need to change matched row OLD.x from [1..9] to [5..7]\n\n\n\n> Please reread my first letter. 
There I describe more real life example\n> when I am manipulating bi-temporal data.\n\n> where some value exist at given period:\n> id | app_period | value\n> 7 [2019-01-01, 2019-04-05) 207\n\n> And I am deleting third month: [ 2019-03-01, 2019-04-01 )\n> with t1 as ( delete from table where app_period && [ 2019-03-01,\n> 2019-04-01 ) returning * )\n> select * from t1;\n> 7 [ 2019-03-01, 2019-04-01 ) 207\n\n> select * from table;\n> 7 [ 2019-01-01, 2019-03-01 ) 207\n> 7 [ 2019-04-01, 2019-04-05 ) 207\n\nHere, when the data is deleted, the following row is matched:\n 7 [2019-01-01, 2019-04-05) 207\nand assigned to OLD.\nBecause I am deleting data ONLY from the [ 2019-03-01, 2019-04-01 ) period,\nI am required to change OLD:\n\nOLD.app_period = [ 2019-03-01, 2019-04-01 )\n\nSo I should get:\n> 7 [ 2019-03-01, 2019-04-01 ) 207\ninstead of\n> 7 [2019-01-01, 2019-04-05) 207\n\n\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Sat, 9 Nov 2019 14:10:13 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "\n> On 8 Nov 2019, at 0:26, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> First, notice \"only\", which was missing from the later sentence:\n> \n> For <command>INSERT</command> and <command>UPDATE</command>\n> operations [only], the trigger may modify the\n> <varname>NEW</varname> row before returning it.\n> \n> which I have now added with my applied patch to all supported releases. \n> \n\nHi Bruce, \n\nI happened to browse recent documentation-related commits and I didn’t see this patch in REL_12_STABLE. 
Judging by the commit message, it should be applied there too.\n\n", "msg_date": "Mon, 11 Nov 2019 19:00:22 +0300", "msg_from": "Liudmila Mantrova <l.mantrova@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "On Mon, Nov 11, 2019 at 07:00:22PM +0300, Liudmila Mantrova wrote:\n> \n> > On 8 Nov 2019, at 0:26, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > First, notice \"only\", which was missing from the later sentence:\n> > \n> > For <command>INSERT</command> and <command>UPDATE</command>\n> > operations [only], the trigger may modify the\n> > <varname>NEW</varname> row before returning it.\n> > \n> > which I have now added with my applied patch to all supported releases. \n> > \n> \n> Hi Bruce, \n> \n> I happened to browse recent documentation-related commits and I didn’t see this patch in REL_12_STABLE. Judging by the commit message, it should be applied there too.\n\nWow, not sure how that happened, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 12 Nov 2019 22:04:59 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "Hi again.\n\n> Thinking some more on this, I now don't think a TODO makes sense, so I\n> have removed it.\n\nPlease look into this example: https://dbfiddle.uk/?rdbms=postgres_12&fiddle=95ed9fab6870d7c4b6266ea4d93def13\nThis is real-life code from our production system.\n\nYou can see that it is important to get correct info about the deleted\ndata:\n\n -- EXPECTED app_period: [\"2018-08-20\", \"2018-08-25\")\n -- ACTUAL app_period: [\"2018-08-14\", )\n\n> Triggers are designed to check and modify input data, and since DELETE\n> has no input data, it makes no sense.\n\nPlease put this feature request to allow triggers to modify output\ndata back into the TODO list.\n\nINSERT -- receives data OK (behavior is expected)\nUPDATE -- receives and returns data OK (behavior is expected)\nDELETE -- returns data FAIL (behavior is not expected)\n\nIt is inconsistent to allow modifying output data for UPDATE while\nrestricting it for DELETE.\n\n\nThank you\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Wed, 4 Dec 2019 13:37:57 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "Hello Eugen,\n\n> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=95ed9fab6870d7c4b6266ea4d93def13\n\nsorry, I forgot to update the link to the latest example:\nhttps://dbfiddle.uk/?rdbms=postgres_12&fiddle=8e114ccc9f15a30ca3115cdc6c70d247\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Wed, 4 Dec 2019 13:39:43 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" }, { "msg_contents": "Hello Bruce,\n\n> 
Triggers are designed to check and modify input data, and since DELETE\n> has no input data, it makes no sense.\n\nSorry, I am still confused. You say that DELETE has no input data,\nbut the docs say that it has:\n\nhttps://www.postgresql.org/docs/current/trigger-definition.html\nFor a row-level trigger, the input data also includes ... the OLD row for ... DELETE triggers\n\n\nAlso, restricting DELETE from changing the data returned by DELETE\nRETURNING seems incomplete.\n\nFor example, suppose the triggers implement some compression:\n -- insert the value ZZZZZ into a field\n -- compress it and actually store Zx5 in the field\n -- delete this inserted row\n -- so the user should get back that the value ZZZZZ was deleted, not Zx5.\n Correct?\n\n but currently the user will see Zx5, because the following code:\n\n OLD.value = uncompress( OLD.value );\n\n does not affect RETURNING =(\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n", "msg_date": "Fri, 17 Jan 2020 12:14:03 +0200", "msg_from": "Eugen Konkov <kes-kes@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: Does 'instead of delete' trigger support modification of OLD" } ]
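The split-on-delete semantics argued for in the thread above can be sketched outside the database. The following is a hypothetical Python model, not PostgreSQL code; the function name and the half-open integer ranges are assumptions made purely for illustration:

```python
def delete_subrange(stored, requested):
    """Model deleting a subrange from a row that stores one range.

    stored and requested are half-open ranges (lo, hi).  Returns
    (remaining_pieces, deleted_piece): the rows left in the table and
    the intersection that was actually deleted -- which is what the
    thread argues OLD / DELETE RETURNING should report.
    """
    lo = max(stored[0], requested[0])
    hi = min(stored[1], requested[1])
    if lo >= hi:                      # no overlap: nothing is deleted
        return [stored], None
    remaining = []
    if stored[0] < lo:                # surviving piece before the hole
        remaining.append((stored[0], lo))
    if hi < stored[1]:                # surviving piece after the hole
        remaining.append((hi, stored[1]))
    return remaining, (lo, hi)

# Eugen's integer example: the row [1..9] minus {5, 6, 7} leaves
# [1..4] and [8..9], and RETURNING should show only the deleted part.
remaining, deleted = delete_subrange((1, 10), (5, 8))
```

Under this model, `deleted` is the intersection rather than the whole matched row, which is exactly the distinction the thread is about.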
[ { "msg_contents": "Hi,\n\none of the most frequent conflicts I see is that two patches add files\nto OBJS (or one of its other spellings), and there are conflicts because\nanother file has been added.\n\nRight now there's two reasons why that's likely to happen:\n1) By listing multiple objects for each line, we get a conflict whenever\n one of the other files on that lines gets modified\n2) Due to our line-length conventions, we have to re-flow long lines,\n which often triggers reflowing subsequent lines too.\n\nNow, obviously these types of conflicts are easy enough to resolve, but\nit's still annoying. It seems that this would be substantially less\noften a problem if we just split such lines to one file per\nline. E.g. instead of\n\nOBJS_COMMON = base64.o config_info.o controldata_utils.o d2s.o exec.o f2s.o \\\n\tfile_perm.o ip.o keywords.o kwlookup.o link-canary.o md5.o \\\n\tpg_lzcompress.o pgfnames.o psprintf.o relpath.o \\\n\trmtree.o saslprep.o scram-common.o string.o stringinfo.o \\\n\tunicode_norm.o username.o wait_error.o\n\nhave\n\nOBJS_COMMON = \\\n\tbase64.o \\\n\tconfig_info.o \\\n\tcontroldata_utils.o \\\n\td2s.o \\\n\texec.o \\\n\tf2s.o \\\n\tfile_perm.o \\\n\tip.o \\\n\tkeywords.o \\\n\tkwlookup.o \\\n\tlink-canary.o \\\n\tmd5.o \\\n\tpg_lzcompress.o \\\n\tpgfnames.o \\\n\tpsprintf.o \\\n\trelpath.o \\\n\trmtree.o \\\n\tsaslprep.o \\\n\tscram-common.o \\\n\tstring.o \\\n\tstringinfo.o \\\n\tunicode_norm.o \\\n\tusername.o \\\n\twait_error.o\n\na one-off conversion of this seems easy enough to script.\n\nComments?\n\n- Andres\n\n\n", "msg_date": "Tue, 29 Oct 2019 13:09:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "RFC: split OBJS lines to one object per line" }, { "msg_contents": "On Tue, Oct 29, 2019 at 1:09 PM Andres Freund <andres@anarazel.de> wrote:\n> Comments?\n\nI think that this is a good idea. 
I see no downside.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 29 Oct 2019 13:16:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> one of the most frequent conflicts I see is that two patches add files\n> to OBJS (or one of its other spellings), and there are conflicts because\n> another file has been added.\n> ...\n> Now, obviously these types of conflicts are easy enough to resolve, but\n> it's still annoying. It seems that this would be substantially less\n> often a problem if we just split such lines to one file per\n> line.\n\nWe did something similar not too long ago in configure.in (bfa6c5a0c),\nand it seems to have helped. +1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:31:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "Hi,\n\nOn 2019-10-29 16:31:11 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > one of the most frequent conflicts I see is that two patches add files\n> > to OBJS (or one of its other spellings), and there are conflicts because\n> > another file has been added.\n> > ...\n> > Now, obviously these types of conflicts are easy enough to resolve, but\n> > it's still annoying. It seems that this would be substantially less\n> > often a problem if we just split such lines to one file per\n> > line.\n> \n> We did something similar not too long ago in configure.in (bfa6c5a0c),\n> and it seems to have helped. +1\n\nCool. 
Any opinion on whether to go for\n\nOBJS = \\\n\tdest.o \\\n\tfastpath.o \\\n...\n\nor\n\nOBJS = dest.o \\\n\tfastpath.o \\\n...\n\nI'm mildly inclined to go for the former.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Oct 2019 23:32:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-29 16:31:11 -0400, Tom Lane wrote:\n>> We did something similar not too long ago in configure.in (bfa6c5a0c),\n>> and it seems to have helped. +1\n\n> Cool. Any opinion on whether to go for ...\n\nNot here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 02:56:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "On Tue, Oct 29, 2019 at 11:32:09PM -0700, Andres Freund wrote:\n> Cool. Any opinion on whether to go for\n> \n> OBJS = \\\n> \tdest.o \\\n> \tfastpath.o \\\n> ...\n> \n> or\n> \n> OBJS = dest.o \\\n> \tfastpath.o \\\n> ...\n> \n> I'm mildly inclined to go for the former.\n\nFWIW, I am more used to the latter, but the former is also fine by\nme.\n--\nMichael", "msg_date": "Wed, 30 Oct 2019 16:40:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "\n\nOn 10/29/19 11:32 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-10-29 16:31:11 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> one of the most frequent conflicts I see is that two patches add files\n>>> to OBJS (or one of its other spellings), and there are conflicts because\n>>> another file has been added.\n>>> ...\n>>> Now, obviously these types of conflicts are easy enough to resolve, but\n>>> it's still annoying. 
It seems that this would be substantially less\n>>> often a problem if we just split such lines to one file per\n>>> line.\n>>\n>> We did something similar not too long ago in configure.in (bfa6c5a0c),\n>> and it seems to have helped. +1\n> \n> Cool. Any opinion on whether to got for\n> \n> OBJS = \\\n> \tdest.o \\\n> \tfastpath.o \\\n> ...\n> \n> or\n> \n> OBJS = dest.o \\\n> \tfastpath.o \\\n> ...\n> \n> I'm mildly inclined to go for the former.\n\n+1 for the former.\n\n> \n> Greetings,\n> \n> Andres Freund\n> \n> \n\n\n", "msg_date": "Thu, 31 Oct 2019 09:48:46 -0700", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "Hi,\n\nOn 2019-10-29 23:32:09 -0700, Andres Freund wrote:\n> On 2019-10-29 16:31:11 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > one of the most frequent conflicts I see is that two patches add files\n> > > to OBJS (or one of its other spellings), and there are conflicts because\n> > > another file has been added.\n> > > ...\n> > > Now, obviously these types of conflicts are easy enough to resolve, but\n> > > it's still annoying. It seems that this would be substantially less\n> > > often a problem if we just split such lines to one file per\n> > > line.\n> > \n> > We did something similar not too long ago in configure.in (bfa6c5a0c),\n> > and it seems to have helped. +1\n> \n> Cool. Any opinion on whether to got for\n> \n> OBJS = \\\n> \tdest.o \\\n> \tfastpath.o \\\n> ...\n> \n> or\n> \n> OBJS = dest.o \\\n> \tfastpath.o \\\n> ...\n> \n> I'm mildly inclined to go for the former.\n\nPushed a patch going with the former. 
Let's see what the buildfarm\nsays...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Nov 2019 14:47:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "On Tue, Nov 05, 2019 at 02:47:55PM -0800, Andres Freund wrote:\n> Pushed a patch going with the former. Let's see what the buildfarm\n> says...\n\nThanks Andres. On a rather related note, would it make sense to do\nthe same for regression and isolation tests in our in-core modules?\n--\nMichael", "msg_date": "Thu, 7 Nov 2019 11:24:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Nov 05, 2019 at 02:47:55PM -0800, Andres Freund wrote:\n>> Pushed a patch going with the former. Let's see what the buildfarm\n>> says...\n\n> Thanks Andres. On a rather related note, would it make sense to do\n> the same for regression and isolation tests in our in-core modules?\n\nI don't think it'd be a great idea to change parallel_schedule like\nthat. Independently adding test scripts to the same parallel batch\nprobably won't end well: you might end up over the concurrency limit,\nor the scripts might conflict through sharing table names or the like.\nSo I'd rather see that there's a conflict to worry about.\n\nAnyway, merge conflicts there aren't so common IME.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 Nov 2019 12:02:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "Hi,\n\nOn 2019-11-07 11:24:37 +0900, Michael Paquier wrote:\n> On Tue, Nov 05, 2019 at 02:47:55PM -0800, Andres Freund wrote:\n> > Pushed a patch going with the former. Let's see what the buildfarm\n> > says...\n> \n> Thanks Andres. 
On a rather related note, would it make sense to do\n> the same for regression and isolation tests in our in-core modules?\n\nI don't see them as being frequent sources of conflicts (partially\nbecause we don't change linebreaks due to line length limits, I think),\nso I don't think it's really worthwhile.\n\nOne I could see some benefit in, would be the SUBDIRS makefile lines.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Nov 2019 09:20:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "On Thu, Nov 07, 2019 at 12:02:04PM -0500, Tom Lane wrote:\n> I don't think it'd be a great idea to change parallel_schedule like\n> that. Independently adding test scripts to the same parallel batch\n> probably won't end well: you might end up over the concurrency limit,\n> or the scripts might conflict through sharing table names or the like.\n> So I'd rather see that there's a conflict to worry about.\n> \n> Anyway, merge conflicts there aren't so common IME.\n\nFWIW, I was not referring to the schedule files here, just to REGRESS\nand ISOLATION in the modules' Makefiles. If you think that's not\nworth doing it, let's drop my suggestion then.\n--\nMichael", "msg_date": "Fri, 8 Nov 2019 18:07:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "On Fri, 8 Nov 2019 at 14:38, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 07, 2019 at 12:02:04PM -0500, Tom Lane wrote:\n> > I don't think it'd be a great idea to change parallel_schedule like\n> > that. 
Independently adding test scripts to the same parallel batch\n> > probably won't end well: you might end up over the concurrency limit,\n> > or the scripts might conflict through sharing table names or the like.\n> > So I'd rather see that there's a conflict to worry about.\n> >\n> > Anyway, merge conflicts there aren't so common IME.\n>\n> FWIW, I was not referring to the schedule files here, just to REGRESS\n> and ISOLATION in the modules' Makefiles. If you think that's not\n> worth doing it, let's drop my suggestion then.\n> --\n\nI found some inconsistencies in the alphabetical ordering in\nsrc/backend/tsearch/Makefile, src/backend/utils/Makefile and\nsrc/pl/plpython/Makefile. The attached patch fixes those ordering\ninconsistencies.\n\nThanks and Regards\nMahendra Thalor\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 17 Dec 2019 23:40:17 +0530", "msg_from": "Mahendra Singh <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "On Tue, Dec 17, 2019 at 11:40:17PM +0530, Mahendra Singh wrote:\n> I found some inconsistencies in the alphabetical ordering in\n> src/backend/tsearch/Makefile, src/backend/utils/Makefile and\n> src/pl/plpython/Makefile. The attached patch fixes those ordering\n> inconsistencies.\n\nThanks, committed. The one-liner style is also used in ifaddrs, but\nfmgrtab.c is generated so I have left that out. Now, have you tried\nto compile plpython before sending this patch? 
Because you forgot\nto add one backslash after WIN32RES, compilation was failing there.\nAnd you also forgot to remove two backslashes at the end of the other\ntwo lists you modified :)\n--\nMichael", "msg_date": "Wed, 18 Dec 2019 10:53:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" }, { "msg_contents": "On Wed, 18 Dec 2019 at 07:23, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 17, 2019 at 11:40:17PM +0530, Mahendra Singh wrote:\n> > I found some inconsistencies in the alphabetical ordering in\n> > src/backend/tsearch/Makefile, src/backend/utils/Makefile and\n> > src/pl/plpython/Makefile. The attached patch fixes those ordering\n> > inconsistencies.\n>\n> Thanks, committed. The one-liner style is also used in ifaddrs, but\n\nThanks, Michael, for the quick response.\n\n> fmgrtab.c is generated so I have left that out. Now, have you tried\n> to compile plpython before sending this patch? Because you forgot\n> to add one backslash after WIN32RES, compilation was failing there.\n> And you also forgot to remove two backslashes at the end of the other\n> two lists you modified :)\n\nSorry, I forgot to add the backslashes. I will take care of that next time.\n\nThanks and Regards\nMahendra Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Dec 2019 08:30:24 +0530", "msg_from": "Mahendra Singh <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RFC: split OBJS lines to one object per line" } ]
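Andres noted at the start of the thread above that "a one-off conversion of this seems easy enough to script." The following Python sketch shows one way such a conversion could look; it is an illustration under assumptions (function name, regex details), not the conversion actually used for the commit:

```python
import re

def split_objs(makefile_text, var="OBJS"):
    """Reflow 'VAR = a.o b.o \\ ...' into one object file per line."""
    # Join backslash-continued lines, then rewrite the matching assignment.
    joined = re.sub(r"\\\n", " ", makefile_text)

    def reflow(m):
        items = m.group(2).split()
        body = " \\\n".join("\t" + i for i in items)
        return m.group(1) + " = \\\n" + body

    return re.sub(r"^(" + var + r")[ \t]*=[ \t]*(.*)$", reflow,
                  joined, flags=re.M)

before = "OBJS = dest.o fastpath.o \\\n\tpquery.o utility.o\n"
after = split_objs(before)
```

`after` holds the agreed-upon layout: `OBJS = \` on the first line, then one tab-indented object file per backslash-continued line.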
[ { "msg_contents": "Hi,\n\nWhen using -b, --bkp-details pg_waldump outputs an unnecessary newline\nfor blocks that contain an FPW.\n\nIn --bkp-details block references are output on their own lines, like:\n\nrmgr: SPGist len (rec/tot): 4348/ 4348, tx: 980, lsn: 0/01985818, prev 0/01983850, desc: PICKSPLIT ndel 92; nins 93\n blkref #0: rel 1663/16384/16967 fork main blk 3\n blkref #1: rel 1663/16384/16967 fork main blk 6\n blkref #2: rel 1663/16384/16967 fork main blk 5\n blkref #3: rel 1663/16384/16967 fork main blk 1\nrmgr: Heap len (rec/tot): 69/ 69, tx: 980, lsn: 0/01986930, prev 0/01985818, desc: INSERT off 2 flags 0x00\n blkref #0: rel 1663/16384/16961 fork main blk 1\n\nbut unfortunately, when there's actually an FPW present, it looks like:\n\nrmgr: XLOG len (rec/tot): 75/ 11199, tx: 977, lsn: 0/019755E0, prev 0/0194EDD8, desc: FPI\n blkref #0: rel 1663/16384/16960 fork main blk 32 (FPW); hole: offset: 548, length: 4484\n\n blkref #1: rel 1663/16384/16960 fork main blk 33 (FPW); hole: offset: 548, length: 4484\n\n blkref #2: rel 1663/16384/16960 fork main blk 34 (FPW); hole: offset: 548, length: 4484\n\nrmgr: Heap len (rec/tot): 188/ 188, tx: 977, lsn: 0/019781D0, prev 0/019755E0, desc: INPLACE off 23\n\nwhich clearly seems unnecessary. 
Looking at the code it seems to me that\n\nstatic void\nXLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)\n{\n...\n printf(\"\\tblkref #%u: rel %u/%u/%u fork %s blk %u\",\n block_id,\n rnode.spcNode, rnode.dbNode, rnode.relNode,\n forkNames[forknum],\n blk);\n if (XLogRecHasBlockImage(record, block_id))\n {\n if (record->blocks[block_id].bimg_info &\n BKPIMAGE_IS_COMPRESSED)\n {\n printf(\" (FPW%s); hole: offset: %u, length: %u, \"\n \"compression saved: %u\\n\",\n XLogRecBlockImageApply(record, block_id) ?\n \"\" : \" for WAL verification\",\n record->blocks[block_id].hole_offset,\n record->blocks[block_id].hole_length,\n BLCKSZ -\n record->blocks[block_id].hole_length -\n record->blocks[block_id].bimg_len);\n }\n else\n {\n printf(\" (FPW%s); hole: offset: %u, length: %u\\n\",\n XLogRecBlockImageApply(record, block_id) ?\n \"\" : \" for WAL verification\",\n record->blocks[block_id].hole_offset,\n record->blocks[block_id].hole_length);\n }\n }\n putchar('\\n');\n\nwas intended to not actually print a newline in the printfs in the if\npreceding the putchar.\n\nThis is a fairly longstanding bug, introduced in:\n\ncommit 2c03216d831160bedd72d45f712601b6f7d03f1c\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: 2014-11-20 17:56:26 +0200\n\n Revamp the WAL record format.\n\n\nDoes anybody have an opinion about fixing it just in master or also\nbackpatching it? I guess there could be people having written parsers\nfor the waldump output? I'm inclined to backpatch.\n\n\nI also find a second minor bug:\n\nstatic void\nXLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)\n{\n...\n const char *id;\n...\n id = desc->rm_identify(info);\n if (id == NULL)\n id = psprintf(\"UNKNOWN (%x)\", info & ~XLR_INFO_MASK);\n...\n printf(\"desc: %s \", id);\n\nafter that \"id\" is not referenced anymore. Which means we would leak\nmemory if there were a lot of UNKNOWN records. 
This is from\ncommit 604f7956b9460192222dd37bd3baea24cb669a47\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2014-09-22 16:48:14 +0200\n\n Improve code around the recently added rm_identify rmgr callback.\n\nWhile not a lot of memory, it's not absurd to run pg_waldump against a\nlarge amount of WAL, so backpatching seems mildly advised.\n\nI'm inlined to think that the best fix is to just move the relevant code\nto the callsite, and not psprintf'ing into a temporary buffer. We'd need\nadditional state to free the memory, as rm_identify returns a static\nbuffer.\n\nSo I'll make it\n\n id = desc->rm_identify(info);\n if (id == NULL)\n printf(\"desc: UNKNOWN (%x) \", info & ~XLR_INFO_MASK);\n else\n printf(\"desc: %s \", id);\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:33:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pg_waldump erroneously outputs newline for FPWs, and another minor\n bug" }, { "msg_contents": "On Tue, Oct 29, 2019 at 4:33 PM Andres Freund <andres@anarazel.de> wrote:\n> Does anybody have an opinion about fixing it just in master or also\n> backpatching it? I guess there could be people having written parsers\n> for the waldump output? I'm inclined to backpatch.\n\nThe same commit from Heikki omitted one field from that record, for no\ngood reason. I backpatched a bugfix to the output format for nbtree\npage splits a few weeks ago, fixing that problem. I agree that we\nshould also backpatch this bugfix.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 29 Oct 2019 16:42:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_waldump erroneously outputs newline for FPWs, and another\n minor bug" }, { "msg_contents": "On Tue, Oct 29, 2019 at 04:42:07PM -0700, Peter Geoghegan wrote:\n> The same commit from Heikki omitted one field from that record, for no\n> good reason. 
I backpatched a bugfix to the output format for nbtree\n> page splits a few weeks ago, fixing that problem. I agree that we\n> should also backpatch this bugfix.\n\nThe output format of pg_waldump may matter for some tools, like\nJehan-Guillaume's PAF [1], but I am ready to bet that any tools like\nthat just skip any noise newlines, so +1 for a backpatch.\n\nI am adding Jehan-Guillaume in CC just in case.\n\n[1]: https://github.com/ClusterLabs/PAF\n--\nMichael", "msg_date": "Wed, 30 Oct 2019 09:26:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_waldump erroneously outputs newline for FPWs, and another\n minor bug" }, { "msg_contents": "Hi,\n\nOn 2019-10-29 16:33:41 -0700, Andres Freund wrote:\n> Hi,\n> \n> When using -b, --bkp-details pg_waldump outputs an unnecessary newline\n> for blocks that contain an FPW.\n> \n> In --bkp-details block references are output on their own lines, like:\n> \n> rmgr: SPGist len (rec/tot): 4348/ 4348, tx: 980, lsn: 0/01985818, prev 0/01983850, desc: PICKSPLIT ndel 92; nins 93\n> blkref #0: rel 1663/16384/16967 fork main blk 3\n> blkref #1: rel 1663/16384/16967 fork main blk 6\n> blkref #2: rel 1663/16384/16967 fork main blk 5\n> blkref #3: rel 1663/16384/16967 fork main blk 1\n> rmgr: Heap len (rec/tot): 69/ 69, tx: 980, lsn: 0/01986930, prev 0/01985818, desc: INSERT off 2 flags 0x00\n> blkref #0: rel 1663/16384/16961 fork main blk 1\n> \n> but unfortunately, when there's actually an FPW present, it looks like:\n> \n> rmgr: XLOG len (rec/tot): 75/ 11199, tx: 977, lsn: 0/019755E0, prev 0/0194EDD8, desc: FPI\n> blkref #0: rel 1663/16384/16960 fork main blk 32 (FPW); hole: offset: 548, length: 4484\n> \n> blkref #1: rel 1663/16384/16960 fork main blk 33 (FPW); hole: offset: 548, length: 4484\n> \n> blkref #2: rel 1663/16384/16960 fork main blk 34 (FPW); hole: offset: 548, length: 4484\n> \n> rmgr: Heap len (rec/tot): 188/ 188, tx: 977, lsn: 0/019781D0, prev 0/019755E0, desc: 
INPLACE off 23\n> \n> which clearly seems unnecessary. Looking at the code it seems to me that\n> \n> static void\n> XLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)\n> {\n> ...\n> printf(\"\\tblkref #%u: rel %u/%u/%u fork %s blk %u\",\n> block_id,\n> rnode.spcNode, rnode.dbNode, rnode.relNode,\n> forkNames[forknum],\n> blk);\n> if (XLogRecHasBlockImage(record, block_id))\n> {\n> if (record->blocks[block_id].bimg_info &\n> BKPIMAGE_IS_COMPRESSED)\n> {\n> printf(\" (FPW%s); hole: offset: %u, length: %u, \"\n> \"compression saved: %u\\n\",\n> XLogRecBlockImageApply(record, block_id) ?\n> \"\" : \" for WAL verification\",\n> record->blocks[block_id].hole_offset,\n> record->blocks[block_id].hole_length,\n> BLCKSZ -\n> record->blocks[block_id].hole_length -\n> record->blocks[block_id].bimg_len);\n> }\n> else\n> {\n> printf(\" (FPW%s); hole: offset: %u, length: %u\\n\",\n> XLogRecBlockImageApply(record, block_id) ?\n> \"\" : \" for WAL verification\",\n> record->blocks[block_id].hole_offset,\n> record->blocks[block_id].hole_length);\n> }\n> }\n> putchar('\\n');\n> \n> was intended to not actually print a newline in the printfs in the if\n> preceding the putchar.\n> \n> This is a fairly longstanding bug, introduced in:\n> \n> commit 2c03216d831160bedd72d45f712601b6f7d03f1c\n> Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: 2014-11-20 17:56:26 +0200\n> \n> Revamp the WAL record format.\n> \n> \n> Does anybody have an opinion about fixing it just in master or also\n> backpatching it? I guess there could be people having written parsers\n> for the waldump output? 
I'm inclined to backpatch.\n> \n> \n> I also find a second minor bug:\n> \n> static void\n> XLogDumpDisplayRecord(XLogDumpConfig *config, XLogReaderState *record)\n> {\n> ...\n> const char *id;\n> ...\n> id = desc->rm_identify(info);\n> if (id == NULL)\n> id = psprintf(\"UNKNOWN (%x)\", info & ~XLR_INFO_MASK);\n> ...\n> printf(\"desc: %s \", id);\n> \n> after that \"id\" is not referenced anymore. Which means we would leak\n> memory if there were a lot of UNKNOWN records. This is from\n> commit 604f7956b9460192222dd37bd3baea24cb669a47\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2014-09-22 16:48:14 +0200\n> \n> Improve code around the recently added rm_identify rmgr callback.\n> \n> While not a lot of memory, it's not absurd to run pg_waldump against a\n> large amount of WAL, so backpatching seems mildly advised.\n> \n> I'm inlined to think that the best fix is to just move the relevant code\n> to the callsite, and not psprintf'ing into a temporary buffer. We'd need\n> additional state to free the memory, as rm_identify returns a static\n> buffer.\n> \n> So I'll make it\n> \n> id = desc->rm_identify(info);\n> if (id == NULL)\n> printf(\"desc: UNKNOWN (%x) \", info & ~XLR_INFO_MASK);\n> else\n> printf(\"desc: %s \", id);\n\nPushed fixes for these.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Oct 2019 22:59:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pg_waldump erroneously outputs newline for FPWs, and another\n minor bug" }, { "msg_contents": "On Wed, 30 Oct 2019 09:26:21 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Oct 29, 2019 at 04:42:07PM -0700, Peter Geoghegan wrote:\n> > The same commit from Heikki omitted one field from that record, for no\n> > good reason. I backpatched a bugfix to the output format for nbtree\n> > page splits a few weeks ago, fixing that problem. I agree that we\n> > should also backpatch this bugfix. 
\n> \n> The output format of pg_waldump may matter for some tools, like\n> Jehan-Guillaume's PAF [1], but I am ready to bet that any tools like\n> that just skip any noise newlines, so +1 for a backpatch.\n> \n> I am adding Jehan-Guillaume in CC just in case.\n\nThank you Michael!\n\n\n", "msg_date": "Mon, 4 Nov 2019 12:42:35 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: pg_waldump erroneously outputs newline for FPWs, and another\n minor bug" } ]
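The bug fixed in the thread above reduces to a general printf pattern: conditional branches whose format strings end in a newline, followed by an unconditional putchar('\n'). A small Python sketch (illustrative only; the real fix is in pg_waldump.c's C code) of why --bkp-details printed a blank line after every FPW block reference:

```python
def describe_block(has_fpw, buggy=True):
    """Mimic the shape of XLogDumpDisplayRecord's per-block output logic."""
    out = "\tblkref #0: rel 1663/16384/16960 fork main blk 32"
    if has_fpw:
        out += " (FPW); hole: offset: 548, length: 4484"
        if buggy:
            out += "\n"  # the branch's printf erroneously ended with '\n' ...
    out += "\n"          # ... while a shared putchar('\n') follows anyway
    return out
```

With `buggy=True`, an FPW block reference ends in two newlines (the stray blank line reported at the top of the thread); non-FPW references, and the fixed version, end in exactly one.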
[ { "msg_contents": "Hi,\n\nThis patch, in a slightly rougher form, was submitted as part of [1],\nbut it seems worth bringing up separately, rather than just committing\nit after hearing no objections.\n\nFrom the commit message:\n\n Make StringInfo available to frontend code.\n\n There are plenty of places in frontend code that could benefit from a\n string buffer implementation. Some because it yields simpler and\n faster code, and some others because of the desire to share code\n between backend and frontend.\n\n While there is a string buffer implementation available to frontend\n code, libpq's PQExpBuffer, it is clunkier than stringinfo, it\n introduces a libpq dependency, doesn't allow for sharing between\n frontend and backend code, and has a higher API/ABI stability\n requirement due to being exposed via libpq.\n\n Therefore it seems best to just make StringInfo usable by\n frontend code. There's not much to do for that, except for rewriting\n two subsequent elog/ereport calls into other types of error\n reporting, and deciding on a maximum string length.\n\n For the maximum string size I decided to privately define MaxAllocSize\n to the same value as used in the backend. It seems likely that we'll\n want to reconsider this for both backend and frontend code in the not\n too far away future.\n\n For now I've left stringinfo.h in lib/, rather than common/, to reduce\n the likelihood of unnecessary breakage. We could alternatively decide\n to provide a redirecting stringinfo.h in lib/, or just not provide\n compatibility.\n\nI'm still using stringinfo in the aforementioned thread, and I also want\nto use it in a few more places. 
On the more ambitious side I really\nwould like to have a minimal version of elog.h available in the frontend,\nand that would really be a lot easier with stringinfo available.\n\nI also would like to submit a few patches expanding stringinfo's\ncapabilities and performance, and it seems to me it'd be better to do so\nafter moving (lest they introduce new FE vs BE compat issues).\n\n\nThis allows us to remove compat.c hackery providing some stringinfo\nfunctionality for pg_waldump (which now actually needs to pass in a\nStringInfo...). I briefly played with converting more code in\npg_waldump.c than just that one call to StringInfo, but it seems that'd\nbe best done separately.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20190920051857.2fhnvhvx4qdddviz@alap3.anarazel.de", "msg_date": "Tue, 29 Oct 2019 17:10:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Make StringInfo available to frontend code." }, { "msg_contents": "At Tue, 29 Oct 2019 17:10:01 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> This patch, in a slightly rougher form, was submitted as part of [1],\n> but it seems worth bringing up separately, rather than just committing\n> hearing no objections.\n..\n> I'm still using stringinfo in the aforementioned thread, and I also want\n> to use it in a few more places. 
I briefly played with converting more code in\n> pg_waldump.c than just that one call to StringInfo, but it seems that'd\n> be best done separately.\n> \n> Comments?\n\nIt uses different form for the same message for FE and BE.\n\ncommon/stringinfo.c:289-\n> BE:\tereport(ERROR,\n> \t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> \t\t\t errmsg(\"out of memory\"),\n> \t\t\t errdetail(\"Cannot enlarge string buffer containing %d\n> bytes by %d more bytes.\",\n> \n> FE: +\t\t_(\"out of memory\\n\\nCannot enlarge string buffer containing %d\n> bytes by %d more bytes.\\n\"),\n\n.po files will be smaller and more stable if we keep the same\ntranslation unit for the same messages. That being said it doesn't\nmatter if it is tentative and the minimal elog.h for frontend comes\nsoon.\n\n\n> /* It's possible we could use a different value for this in frontend code */\n> #define MaxAllocSize\t((Size) 0x3fffffff) /* 1 gigabyte - 1 */\n\nThe same symbol is defined for frontend in psprint.c. Isn't it better\nmerge them and place it in postgres_fe.h?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 30 Oct 2019 10:58:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make StringInfo available to frontend code." }, { "msg_contents": "Hi,\n\nOn 2019-10-30 10:58:59 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 29 Oct 2019 17:10:01 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > Hi,\n> > \n> > This patch, in a slightly rougher form, was submitted as part of [1],\n> > but it seems worth bringing up separately, rather than just committing\n> > hearing no objections.\n> ..\n> > I'm still using stringinfo in the aforementioned thread, and I also want\n> > to use it in a few more places. 
On the more ambitious side I really\n> > would like to have a minimal version of elog.h available in the backend,\n> > and that would really be a lot easier with stringinfo available.\n> > \n> > I also would like to submit a few patches expanding stringinfo's\n> > capabilities and performance, and it seems to me it'd be better to do so\n> > after moving (lest they introduce new FE vs BE compat issues).\n> > \n> > \n> > This allows us to remove compat.c hackery providing some stringinfo\n> > functionality for pg_waldump (which now actually needs to pass in a\n> > StringInfo...). I briefly played with converting more code in\n> > pg_waldump.c than just that one call to StringInfo, but it seems that'd\n> > be best done separately.\n> > \n> > Comments?\n> \n> It uses different form for the same message for FE and BE.\n\n> common/stringinfo.c:289-\n> > BE:\tereport(ERROR,\n> > \t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> > \t\t\t errmsg(\"out of memory\"),\n> > \t\t\t errdetail(\"Cannot enlarge string buffer containing %d\n> > bytes by %d more bytes.\",\n> > \n> > FE: +\t\t_(\"out of memory\\n\\nCannot enlarge string buffer containing %d\n> > bytes by %d more bytes.\\n\"),\n> \n> .po files will be smaller and more stable if we keep the same\n> translation unit for the same messages. That being said it doesn't\n> matter if it is tentative and the minimal elog.h for frontend comes\n> soon.\n\nI'm inclined to think that the contortions necessary to allow reusing\nthe translation strings here would be more work than worthwhile. Also,\ndo we even try to share the translations between backend and frontend?\n\n\n> > /* It's possible we could use a different value for this in frontend code */\n> > #define MaxAllocSize\t((Size) 0x3fffffff) /* 1 gigabyte - 1 */\n> \n> The same symbol is defined for frontend in psprint.c. Isn't it better\n> merge them and place it in postgres_fe.h?\n\nNo, I don't think so. 
I'd rather have less than more code depend on it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 29 Oct 2019 19:06:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Make StringInfo available to frontend code." }, { "msg_contents": "Hello.\n\nAt Tue, 29 Oct 2019 19:06:38 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2019-10-30 10:58:59 +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 29 Oct 2019 17:10:01 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > > Hi,\n> > > \n> > > This patch, in a slightly rougher form, was submitted as part of [1],\n> > > but it seems worth bringing up separately, rather than just committing\n> > > hearing no objections.\n> > ..\n> > > I'm still using stringinfo in the aforementioned thread, and I also want\n> > > to use it in a few more places. On the more ambitious side I really\n> > > would like to have a minimal version of elog.h available in the backend,\n> > > and that would really be a lot easier with stringinfo available.\n> > > \n> > > I also would like to submit a few patches expanding stringinfo's\n> > > capabilities and performance, and it seems to me it'd be better to do so\n> > > after moving (lest they introduce new FE vs BE compat issues).\n> > > \n> > > \n> > > This allows us to remove compat.c hackery providing some stringinfo\n> > > functionality for pg_waldump (which now actually needs to pass in a\n> > > StringInfo...). 
I briefly played with converting more code in\n> > > pg_waldump.c than just that one call to StringInfo, but it seems that'd\n> > > be best done separately.\n> > > \n> > > Comments?\n> > \n> > It uses different form for the same message for FE and BE.\n> \n> > common/stringinfo.c:289-\n> > > BE:\tereport(ERROR,\n> > > \t\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> > > \t\t\t errmsg(\"out of memory\"),\n> > > \t\t\t errdetail(\"Cannot enlarge string buffer containing %d\n> > > bytes by %d more bytes.\",\n> > > \n> > > FE: +\t\t_(\"out of memory\\n\\nCannot enlarge string buffer containing %d\n> > > bytes by %d more bytes.\\n\"),\n> > \n> > .po files will be smaller and more stable if we keep the same\n> > translation unit for the same messages. That being said it doesn't\n> > matter if it is tentative and the minimal elog.h for frontend comes\n> > soon.\n> \n> I'm inclined to think that the contortions necessary to allow reusing\n> the translation strings here would be more work than worthwhile. Also,\n\nMaybe so for doing that in this stage. So I expect that elog.h for FE\ncomes.\n\n> the translation strings here would be more work than worthwhile. Also,\n> do we even try to share the translations between backend and frontend?\n\nNo. I don't mean that. FE and BE have their own .po files, anyway.\n\n> > > /* It's possible we could use a different value for this in frontend code */\n> > > #define MaxAllocSize\t((Size) 0x3fffffff) /* 1 gigabyte - 1 */\n> > \n> > The same symbol is defined for frontend in psprint.c. Isn't it better\n> > merge them and place it in postgres_fe.h?\n> \n> No, I don't think so. I'd rather have less than more code depend on it.\n\nOk, understoold. Thanks for the reply.\n\nregards.\n\n\n", "msg_date": "Wed, 30 Oct 2019 11:18:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make StringInfo available to frontend code." 
}, { "msg_contents": "> On 30 Oct 2019, at 01:10, Andres Freund <andres@anarazel.de> wrote:\n\n> Make StringInfo available to frontend code.\n\nI’ve certainly wanted just that on multiple occasions, so +1 on this.\n\n> Therefore it seems best to just making StringInfo being usable by\n> frontend code. There's not much to do for that, except for rewriting\n> two subsequent elog/ereport calls into others types of error\n> reporting, and deciding on a maximum string length.\n\nSkimming (but not testing) the patch, it seems a reasonable approach.\n\n+ * StringInfo provides an extensible string data type. It can be used to\n\nIt might be useful to point out the upper bound on the extensibility in the\nrewrite of this sentence, and that it’s not guaranteed to be consistent between\nfrontend and backend.\n\n> I'm still using stringinfo in the aforementioned thread, and I also want\n> to use it in a few more places. On the more ambitious side I really\n> would like to have a minimal version of elog.h available in the backend,\n> and that would really be a lot easier with stringinfo available.\n\ns/backend/frontend/?\n\ncheers ./daniel\n\n", "msg_date": "Fri, 1 Nov 2019 23:19:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Make StringInfo available to frontend code." }, { "msg_contents": "Hi,\n\nOn 2019-11-01 23:19:33 +0100, Daniel Gustafsson wrote:\n> > On 30 Oct 2019, at 01:10, Andres Freund <andres@anarazel.de> wrote:\n> \n> > Make StringInfo available to frontend code.\n> \n> I’ve certainly wanted just that on multiple occasions, so +1 on this.\n\nCool.\n\n\n> + * StringInfo provides an extensible string data type. It can be used to\n> \n> It might be useful to point out the upper bound on the extensibility in the\n> rewrite of this sentence, and that it’s not guaranteed to be consistent between\n> frontend and backend.\n\nHm. Something like 'Currently the maximum length of a StringInfo is\n1GB.'? 
I don't really think it's worth pointing out that they may not be\nconsistent, when they currently are...\n\nAnd I suspect we should just fix the length limit to be higher for both,\nperhaps somehow allowing to limit the length for the backend cases where\nwe want to error out if a string gets too long (possibly adding a\nseparate initialization API that allows to specify the memory allocation\nflags or such).\n\n\n> > I'm still using stringinfo in the aforementioned thread, and I also want\n> > to use it in a few more places. On the more ambitious side I really\n> > would like to have a minimal version of elog.h available in the backend,\n> > and that would really be a lot easier with stringinfo available.\n> \n> s/backend/frontend/?\n\nIndeed.\n\nThanks for looking,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Nov 2019 19:21:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Make StringInfo available to frontend code." }, { "msg_contents": "> On 2 Nov 2019, at 03:21, Andres Freund <andres@anarazel.de> wrote:\n> On 2019-11-01 23:19:33 +0100, Daniel Gustafsson wrote:\n\n>> + * StringInfo provides an extensible string data type. It can be used to\n>> \n>> It might be useful to point out the upper bound on the extensibility in the\n>> rewrite of this sentence, and that it’s not guaranteed to be consistent between\n>> frontend and backend.\n> \n> Hm. 
Something like 'Currently the maximum length of a StringInfo is\n> 1GB.’?\n\nSomething along those lines (or the define/mechanism with which the upper\nboundary controlled, simply stating the limit seems more straightforward.)\n\n> I don't really think it's worth pointing out that they may not be\n> consistent, when they currently are…\n\nGood point.\n\n> And I suspect we should just fix the length limit to be higher for both,\n> perhaps somehow allowing to limit the length for the backend cases where\n> we want to error out if a string gets too long (possibly adding a\n> separate initialization API that allows to specify the memory allocation\n> flags or such).\n\nSounds reasonable, maybe even where one can set errhint/errdetail on the “out\nof memory” error to get better reporting on failures.\n\ncheers ./daniel\n\n", "msg_date": "Sat, 2 Nov 2019 23:57:06 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Make StringInfo available to frontend code." }, { "msg_contents": "Hi,\n\nOn 2019-11-02 23:57:06 +0100, Daniel Gustafsson wrote:\n> > On 2 Nov 2019, at 03:21, Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-11-01 23:19:33 +0100, Daniel Gustafsson wrote:\n> \n> >> + * StringInfo provides an extensible string data type. It can be used to\n> >> \n> >> It might be useful to point out the upper bound on the extensibility in the\n> >> rewrite of this sentence, and that it’s not guaranteed to be consistent between\n> >> frontend and backend.\n> > \n> > Hm. Something like 'Currently the maximum length of a StringInfo is\n> > 1GB.’?\n> \n> Something along those lines (or the define/mechanism with which the upper\n> boundary controlled, simply stating the limit seems more straightforward.)\n\nPushed now, with a variation of my suggestion above.\n\nAs nobody commented on that, I've not adjusted the location of\nstringinfo.h. 
If somebody has feelings about it being in the wrong\nplace, and whether to put a redirecting header in place, or whether to\njust break extensions using stringinfo, ...\n\n\n> > And I suspect we should just fix the length limit to be higher for both,\n> > perhaps somehow allowing to limit the length for the backend cases where\n> > we want to error out if a string gets too long (possibly adding a\n> > separate initialization API that allows to specify the memory allocation\n> > flags or such).\n> \n> Sounds reasonable, maybe even where one can set errhint/errdetail on the “out\n> of memory” error to get better reporting on failures.\n\nThat seems too much customization for the caller / too much added\ncomplexity, for the little gain it'd provide. Normally if a caller wants\nsomething like that, they can push something onto the error context\nstack, and get context information out that way.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Nov 2019 16:43:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Make StringInfo available to frontend code." } ]
[ { "msg_contents": "Hello,\n\n\nI propose new simple sql query, which shows total block numbers in the \nrelation.\n\nI now reviewing this patch (https://commitfest.postgresql.org/25/2211/) \nand I think,\nit is usefull for knowing how many blocks there are in the relation to \ndetermine whether we use VACUUM RESUME or not.\n\nOf cource, we can know this value such as\n\nselect (pg_relation_size('t') / \ncurrent_setting('block_size')::bigint)::int;\n\n\nbut I think it is a litte bit complex.\n\n\n\nComment and feedback are very welcome.\n\nRegards ,\n\n\nYu Kimura", "msg_date": "Wed, 30 Oct 2019 16:48:00 +0900", "msg_from": "btkimurayuzk <btkimurayuzk@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Add SQL function to show total block numbers in the relation" }, { "msg_contents": "btkimurayuzk <btkimurayuzk@oss.nttdata.com> writes:\n> I propose new simple sql query, which shows total block numbers in the \n> relation.\n> ...\n> Of cource, we can know this value such as\n> select (pg_relation_size('t') / \n> current_setting('block_size')::bigint)::int;\n\nI don't really see why the existing solution isn't sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 10:09:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add SQL function to show total block numbers in the relation" }, { "msg_contents": "On Wed, Oct 30, 2019 at 10:09:47AM -0400, Tom Lane wrote:\n> btkimurayuzk <btkimurayuzk@oss.nttdata.com> writes:\n>> I propose new simple sql query, which shows total block numbers in the \n>> relation.\n>> ...\n>> Of cource, we can know this value such as\n>> select (pg_relation_size('t') / \n>> current_setting('block_size')::bigint)::int;\n> \n> I don't really see why the existing solution isn't sufficient.\n\n+1.\n--\nMichael", "msg_date": "Thu, 31 Oct 2019 12:29:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add SQL function to show 
total block numbers in the relation" }, { "msg_contents": "> btkimurayuzk <btkimurayuzk@oss.nttdata.com> writes:\n>> I propose new simple sql query, which shows total block numbers in the\n>> relation.\n>> ...\n>> Of cource, we can know this value such as\n>> select (pg_relation_size('t') /\n>> current_setting('block_size')::bigint)::int;\n> \n> I don't really see why the existing solution isn't sufficient.\n\nI think it's a little difficult to introduce the block size using two \nvalues `current block size` and `reference size`\nfor beginners who are not familiar with the internal structure of \nPostgres,\n\nThis is the reason why the existing solution was insufficient.\n\nWhat do you think?\n\nRegards,\nYu Kimura\n\n\n", "msg_date": "Thu, 07 Nov 2019 17:04:51 +0900", "msg_from": "btkimurayuzk <btkimurayuzk@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add SQL function to show total block numbers in the relation" }, { "msg_contents": "Hello, Kimura-san.\n\nAt Thu, 07 Nov 2019 17:04:51 +0900, btkimurayuzk <btkimurayuzk@oss.nttdata.com> wrote in \n> > btkimurayuzk <btkimurayuzk@oss.nttdata.com> writes:\n> >> I propose new simple sql query, which shows total block numbers in the\n> >> relation.\n> >> ...\n> >> Of cource, we can know this value such as\n> >> select (pg_relation_size('t') /\n> >> current_setting('block_size')::bigint)::int;\n> > I don't really see why the existing solution isn't sufficient.\n> \n> I think it's a little difficult to introduce the block size using two\n> values `current block size` and `reference size`\n> for beginners who are not familiar with the internal structure of\n> Postgres,\n> \n> This is the reason why the existing solution was insufficient.\n> \n> What do you think?\n\nSorry, but I also vote -1 for the new function.\n\nSize in block number is useless for those who doesn't understand the\nnotion of block, or block size. 
Those who understands the notion\nshould come up with the simple formula (except the annoying\ncasts). Anyone can find the clue to the base values by searching the\ndocument in the Web with the keywords \"block size\" and \"relation size\"\nor even with \"table size\". (FWIW, I would even do the same for the new\nfunction if any...) If they need it so frequently, a user-defined\nfunction is easily made up.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 Nov 2019 18:01:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add SQL function to show total block numbers in the relation" }, { "msg_contents": "On Thu, Nov 07, 2019 at 06:01:34PM +0900, Kyotaro Horiguchi wrote:\n> Sorry, but I also vote -1 for the new function.\n\nSo do I. If there are no objections, I will mark the patch as\nrejected in the CF app.\n\n> If they need it so frequently, a user-defined function is easily\n> made up.\n\nYep.\n--\nMichael", "msg_date": "Fri, 8 Nov 2019 09:30:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add SQL function to show total block numbers in the relation" }, { "msg_contents": "On Fri, Nov 08, 2019 at 09:30:56AM +0900, Michael Paquier wrote:\n> On Thu, Nov 07, 2019 at 06:01:34PM +0900, Kyotaro Horiguchi wrote:\n>> Sorry, but I also vote -1 for the new function.\n> \n> So do I. If there are no objections, I will mark the patch as\n> rejected in the CF app.\n\nAnd done.\n--\nMichael", "msg_date": "Tue, 12 Nov 2019 13:54:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add SQL function to show total block numbers in the relation" }, { "msg_contents": "> Size in block number is useless for those who doesn't understand the\n> notion of block, or block size. 
Those who understands the notion\n> should come up with the simple formula (except the annoying\n> casts). Anyone can find the clue to the base values by searching the\n> document in the Web with the keywords \"block size\" and \"relation size\"\n> or even with \"table size\". (FWIW, I would even do the same for the new\n> function if any...) If they need it so frequently, a user-defined\n> function is easily made up.\n> \n> regards.\n\n\nI didn't know about the existence of the user-defined function.\nI fully understood, thanks.\n\nRegards,\n\nYu Kimura\n\n\n\n", "msg_date": "Wed, 13 Nov 2019 10:55:36 +0900", "msg_from": "btkimurayuzk <btkimurayuzk@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Add SQL function to show total block numbers in the relation" } ]
[ { "msg_contents": "A global index by very definition is a single index on the parent table\nthat maps to many\nunderlying table partitions. The parent table itself does not have any\nunderlying storage,\nso it must, therefore, retrieve the data satisfying index constraints from\nthe underlying tables.\nIn very crude terms, it is an accumulation of data from table partitions so\nthat data spanning\nacross multiple partitions are accessed in one go as opposed to\nindividually querying each\npartition.\n\nFor the initial version of this work, we are only considering to build\nb-tree global indexes.\n\n- Partitioned Index (Index Partitioning)\nWhen global indexes become too large, then those are partitioned to keep\nthe performance\nand maintenance overhead manageable. These are not within the scope of this\nwork.\n\n\n- Local Index\nA local index is an index that is local to a specific table partition; i.e.\nit doesn’t span across\nmultiple partitions. So, when we create an index on a parent table, it will\ncreate a separate\nindex for all its partitions. Unfortunately, PostgreSQL uses the\nterminology of “partitioned index”\nwhen it refers to local indexes. This work with fix this terminology for\nPostgreSQL so that the\nnomenclature remains consistent with other DBMS.\n\n\n- Why We Need Global Index?\nA global index is expected to give two very important upgrades to the\npartitioning feature set in\nPostgreSQL. It is expected to give a significant improvement in\nread-performance for queries\ntargeting multiple local indexes of partitions. It also adds a unique\nconstraint across partitions.\n\n\n- Unique Constraint\nData uniqueness is a critical requirement for building an index. For global\nindexes that span across\nmultiple partitions, uniqueness will have to be enforced on index\ncolumn(s). 
This effectively translates\ninto a unique constraint.\n\n\n- Performance\nCurrently, the pseudo index created on the parent table of partitions does\nnot contain any\ndata. Rather, it dereferences to the local indexes when an index search is\nrequired. This\nmeans that multiple indexes will have to be evaluated and data to be\ncombined thereafter.\nHowever, with the global indexes, data will reside with global index\ndeclared on the parent\ntable. This avoids the need for multi-level index lookups. So read\nperformance is expected\nto be significantly higher in cases. There will however be a negative\nperformance impact\nduring write (insert/update) of data. This is discussed in more detail\nlater on.\n\n\n- Creating a GLOBAL Index - Syntax\nA global index may be created with the addition of a “GLOBAL” keyword to\nthe index statement.\nAlternatively, one could specify the “LOCAL” keyword to create local\nindexes on partitions.\nWe are suggesting to call this set of keywords: “partition_index_type”. By\ndefault,\npartition_index_type will be set as LOCAL. Here is a sample of the create\nindex syntax.\n\nCREATE Index idx parent (columns) [GLOBAL | LOCAL];\n\nNote: There is no shift/reduce conflict introduced by adding these options.\n\n\n- Pointing Index to Tuple\nCurrently, CTID carries a page and offset information for a known heap\n(table name). However,\nin the context of global indexes, this information within an index is\ninsufficient. Since the index is\nexpected to carry tuples from multiple partitions (heaps), CTID alone will\nnot be able to link an index\nnode to a tuple. This requires carrying additional data for the heap name\nto be stored with each\nindex node.\n\nHow this should be implemented is a point to be discussed. A few\npossibilities are listed below:\n\n-- Expand CTID to include a relfilenode id. 
In PostgreSQL-Conf Asia, Bruce\nsuggested having the OID\ninstead of relfilenode as relfilenode can be duplicated across tablespaces.\n -- Using OID introduces another complication where we would need to\nquery catalog for OID to\n heap mapping.\n\n-- The second option is to have a variable-length CTID. We can reserve some\ntop-level bit for segregation of\nGlobal CTID or Standard CTID. Robert Haas suggested in PostgreSQL-EU to\ndiscuss this with Peter Geoghegan.\n -- I discussed it with Peter and he believes that it is a very\ninvasive approach that requires a whole lot of\n the effort to get committed.\n\n-- Heikki pointed me to include heap specific information using the INCLUDE\nkeyword so that heap information\nis stored with each index node as data.\n -- We (Peter and I) also discussed that option and this looks a more\neasy and non-invasive option.\n\n\n- Optimizer\nThe challenge with optimizer is a selection between local and global\nindexes when both are present.\nHow do we:\n\n-- Evaluate the cost of scanning a global index?\n-- When should the LOCAL index be preferred over the GLOBAL index and vice\nversa?\n -- Should we hit a GLOBAL index when the query is targeting a\ncouple of partitions only?\n -- We need to consider the sizes of those partitions being hit and\nthe sizes of partitions not being hit.\n-- Bruce suggested that we prioritize a GLOBAL index in the first version\nso that in every case,\nthe GLOBAL index is utilized.\n\n\n- Write Performance and Vacuum\nThere will be some write performance degradation because every change in\npartition tables must\npropagate upwards to the GLOBAL index on the parent table. This can be\nthought of as another index\non a table, however, the [slight] performance degradation will be due to\nthe fact that the GLOBAL\nindex may carry a much bigger dataset with data from multiple partitions\nresulting in a higher tree\ntraversal and update time. 
This applies to both write and vacuum processes.\n\nIt is still an open question though on how this will be handled within the\ncode and how we can better\noptimize this process.\n\nI have a POC patch and working on finalizing the patch, Hamid Akhtar is\nalso working with me on this\nwork.\n\n\n\n--\nIbrar Ahmed\n\n", "msg_date": "Wed, 30 Oct 2019 13:38:29 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: Global Index" }, { "msg_contents": "Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> A global index by very definition is a single index on the parent table\n> that maps to many\n> underlying table partitions.\n\nI believe that the current design of partitioning is explicitly intended\nto avoid the need for such a construct. It'd be absolutely disastrous\nto have such a thing from many standpoints, including the breadth of\nlocking needed to work with the global index, the difficulty of vacuuming,\nand the impossibility of cheaply attaching or detaching partitions.\n\nIn other words, this is a \"feature\" we do not want.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 10:13:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Wed, Oct 30, 2019 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I believe that the current design of partitioning is explicitly intended\n> to avoid the need for such a construct. It'd be absolutely disastrous\n> to have such a thing from many standpoints, including the breadth of\n> locking needed to work with the global index, the difficulty of vacuuming,\n> and the impossibility of cheaply attaching or detaching partitions.\n>\n> In other words, this is a \"feature\" we do not want.\n\nI don't think that's true. Certainly, a lot of EnterpriseDB customers\nwant this feature - it comes up regularly in discussions here. 
But\nthat is not to say that the technical challenges are not formidable,\nand I don't think this proposal really grapples with any of them.\nHowever, that doesn't mean that the feature isn't desirable.\n\nOne of the biggest reasons why people want it is to enforce uniqueness\nfor secondary keys - e.g. the employees table is partitioned by\nemployee ID, but SSN should also be unique, at least among employees\nfor whom it's not NULL.\n\nBut people also want it for faster data retrieval: if you're looking\nfor a commonly-occurring value, an index per partition is fine. But if\nyou're looking for values that occur only once or a few times across\nthe whole hierarchy, an index scan per partition is very costly.\nConsider, e.g.:\n\nNested Loop\n-> Seq Scan\n-> Append\n -> Index Scan on each_partition\n\nYou don't have to have very many partitions for that to suck, and it's\na thing that people want to do. Runtime partition pruning helps with\nthis case a lot, but, once again, only for the primary key. Secondary\nkeys are a big problem for partitioning today, in many ways.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 30 Oct 2019 12:12:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Oct 30, 2019 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I believe that the current design of partitioning is explicitly intended\n>> to avoid the need for such a construct. It'd be absolutely disastrous\n>> to have such a thing from many standpoints, including the breadth of\n>> locking needed to work with the global index, the difficulty of vacuuming,\n>> and the impossibility of cheaply attaching or detaching partitions.\n>> In other words, this is a \"feature\" we do not want.\n\n> I don't think that's true. 
Certainly, a lot of EnterpriseDB customers\n> want this feature - it comes up regularly in discussions here. But\n> that is not to say that the technical challenges are not formidable,\n> and I don't think this proposal really grapples with any of them.\n> However, that doesn't mean that the feature isn't desirable.\n\nWell, the *effects* of the feature seem desirable, but that doesn't\nmean that we want an implementation that actually has a shared index.\nAs soon as you do that, you've thrown away most of the benefits of\nhaving a partitioned data structure in the first place.\n\nNo, I don't have an idea how we might support, eg, uniqueness of\nnon-partition-key columns without that. But we need to spend our\neffort on figuring that out, not on building a complicated mechanism\nwhose performance is never going to not suck.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 12:23:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Wed, Oct 30, 2019 at 9:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, the *effects* of the feature seem desirable, but that doesn't\n> mean that we want an implementation that actually has a shared index.\n> As soon as you do that, you've thrown away most of the benefits of\n> having a partitioned data structure in the first place.\n\nRight, but that's only the case for the global index. Global indexes\nare useful when used judiciously. They enable the use of partitioning\nfor use cases where not being able to enforce uniqueness across all\npartitions happens to be a deal breaker. I bet that this is fairly\ncommon.\n\n> No, I don't have an idea how we might support, eg, uniqueness of\n> non-partition-key columns without that. 
But we need to spend our\n> effort on figuring that out, not on building a complicated mechanism\n> whose performance is never going to not suck.\n\nI don't think that there is a way to solve the problem that doesn't\nlook very much like a global index. Also, being able to push down a\npartition number when scanning a global index seems like it would be\nvery compelling in some scenarios.\n\nI'm a bit worried about the complexity that will need to be added to\nnbtree to make global indexes work, but it's probably possible to come\nup with something that isn't too bad. GIN already uses an\nimplementation level attribute number column for multi-column GIN\nindexes, which is a little like what Ibrar has in mind. The really\ncomplicated new code required for global indexes will be in places\nlike vacuumlazy.c.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 30 Oct 2019 09:48:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Oct 30, 2019 at 9:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, the *effects* of the feature seem desirable, but that doesn't\n>> mean that we want an implementation that actually has a shared index.\n>> As soon as you do that, you've thrown away most of the benefits of\n>> having a partitioned data structure in the first place.\n\n> Right, but that's only the case for the global index. Global indexes\n> are useful when used judiciously.\n\nBut ... why bother with partitioning then? To me, the main reasons\nwhy you might want a partitioned table are\n\n* ability to cheaply add and remove partitions, primarily so that\nyou can cheaply do things like \"delete the oldest month's data\".\n\n* ability to scale past our limits on the physical size of one table\n--- both the hard BlockNumber-based limit, and the performance\nconstraints of e.g. 
vacuuming a very large table.\n\nBoth of those go out the window with a global index. So you might\nas well just have one table and forget all the overhead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 13:05:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "Hi,\n\nOn 2019-10-30 13:05:57 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Wed, Oct 30, 2019 at 9:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Well, the *effects* of the feature seem desirable, but that doesn't\n> >> mean that we want an implementation that actually has a shared index.\n> >> As soon as you do that, you've thrown away most of the benefits of\n> >> having a partitioned data structure in the first place.\n> \n> > Right, but that's only the case for the global index. Global indexes\n> > are useful when used judiciously.\n> \n> But ... why bother with partitioning then? To me, the main reasons\n> why you might want a partitioned table are\n\nQuite commonly there's a lot of *other* indexes, often on a lot wider\ndata than the primary key, that don't need to be global. And whereas in\na lot of cases the primary key in a partitioned table has pretty good\nlocality (and thus will be mostly buffered IO), other indexes will often\nnot have that property (i.e. not have much correlation with table\nposition).\n\n\n> * ability to cheaply add and remove partitions, primarily so that\n> you can cheaply do things like \"delete the oldest month's data\".\n\nYou can still do that to some degree with a global index. Imagine\ne.g. keeping a 'partition id' as a sort-of column in the global\nindex. That allows you to drop the partition, without having to\nimmediately rebuild the index, by checking the partition id against the\nlive partitions during lookup. 
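[Editor's note: the partition-id fencing scheme described just above can be illustrated with a small, self-contained simulation. This is not PostgreSQL code; the `GlobalIndex` class, its `(key, partition_id, tid)` entry layout, and all names are invented for the sketch.]

```python
# Toy simulation of the idea above: every global-index entry carries a
# partition id, and lookups fence out entries whose partition is no longer
# live, so detaching a partition needs no immediate index rebuild.
import bisect

class GlobalIndex:
    def __init__(self):
        self.entries = []            # sorted list of (key, partition_id, tid)
        self.live_partitions = set()

    def attach_partition(self, pid):
        self.live_partitions.add(pid)

    def insert(self, key, pid, tid):
        bisect.insort(self.entries, (key, pid, tid))

    def detach_partition(self, pid):
        # Cheap: just forget the partition; its stale entries remain behind
        # and are fenced off at lookup time.
        self.live_partitions.discard(pid)

    def vacuum(self):
        # Later, vacuum physically reclaims entries of dropped partitions.
        self.entries = [e for e in self.entries if e[1] in self.live_partitions]

    def lookup(self, key):
        # (key,) sorts before any (key, pid, tid), so this finds the first match.
        i = bisect.bisect_left(self.entries, (key,))
        out = []
        while i < len(self.entries) and self.entries[i][0] == key:
            if self.entries[i][1] in self.live_partitions:  # fence check
                out.append(self.entries[i])
            i += 1
        return out
```

In this sketch, detach is O(1) because stale entries are only filtered at lookup time, and a later vacuum pass reclaims them physically, mirroring the trade-off described above.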
So sure, you're wasting space for a bit\nin the global index, but it'll also be space that is likely to be fairly\nefficiently reclaimed the next time vacuum touches the index. And if\nnot the global index can be rebuilt concurrently without blocking\nwrites.\n\n\n> * ability to scale past our limits on the physical size of one table\n> --- both the hard BlockNumber-based limit, and the performance\n> constraints of e.g. vacuuming a very large table.\n\nFor that to be a problem for a global index the global index (which will\noften be something like two int4 or int8 columns) itself would need to\nbe above the block number based limit - which doesn't seem that close.\n\nWRT vacuuming - based on my observations the table itself isn't a\nperformance problem for vacuuming all that commonly anymore, it's the\nassociated index scans. So yea, that's a problem.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 30 Oct 2019 10:27:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Wed, Oct 30, 2019 at 9:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Well, the *effects* of the feature seem desirable, but that doesn't\n> > mean that we want an implementation that actually has a shared index.\n> > As soon as you do that, you've thrown away most of the benefits of\n> > having a partitioned data structure in the first place.\n> \n> Right, but that's only the case for the global index. Global indexes\n> are useful when used judiciously. They enable the use of partitioning\n> for use cases where not being able to enforce uniqueness across all\n> partitions happens to be a deal breaker. 
I bet that this is fairly\n> common.\n\nAbsolutely- our lack of such is a common point of issue when folks are\nconsidering using or migrating to PostgreSQL.\n\nThanks,\n\nStephen", "msg_date": "Thu, 31 Oct 2019 14:50:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Thu, 31 Oct 2019 at 14:50, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Peter Geoghegan (pg@bowt.ie) wrote:\n>\n[....]\n\n>\n> Absolutely- our lack of such is a common point of issue when folks are\n> considering using or migrating to PostgreSQL.\n>\n\nNot sure how similar my situation really is, but I find myself wanting to\nhave indices that cross non-partition members of an inheritance hierarchy:\n\ncreate table t (\n id int,\n primary key (id)\n);\n\ncreate table t1 (\n a text\n) inherits (t);\n\ncreate table t2 (\n b int,\n c int\n) inherits (t);\n\nSo \"t\"s are identified by an integer; and one kind of \"t\" has a single text\nattribute while a different kind of \"t\" has 2 int attributes. The idea is\nthat there is a single primary key constraint on the whole hierarchy that\nensures only one record with a particular id can exist in all the tables\ntogether. I can imagine wanting to do this with other unique constraints\nalso.\n\nAt present I don't actually use inheritance; instead I put triggers on the\nchild tables that do an insert on the parent table, which has the effect of\nenforcing the uniqueness I want.", "msg_date": "Thu, 31 Oct 2019 15:02:40 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Thu, Oct 31, 2019 at 03:02:40PM -0400, Isaac Morland wrote:\n>On Thu, 31 Oct 2019 at 14:50, Stephen Frost <sfrost@snowman.net> wrote:\n>\n>> Greetings,\n>>\n>> * Peter Geoghegan (pg@bowt.ie) wrote:\n>>\n>[....]\n>\n>>\n>> Absolutely- our lack of such is a common point of issue when folks are\n>> considering using or migrating to PostgreSQL.\n>>\n>\n>Not sure how similar my situation really is, but I find myself wanting to\n>have indices that cross non-partition members of an inheritance hierarchy:\n>\n>create table t (\n> id int,\n> primary key (id)\n>);\n>\n>create table t1 (\n> a text\n>) inherits (t);\n>\n>create table t2 (\n> b int,\n> c int\n>) inherits (t);\n>\n>So \"t\"s are identified by an integer; and one kind of \"t\" has a single text\n>attribute while a different kind of \"t\" has 2 int attributes. 
The idea is\n>that there is a single primary key constraint on the whole hierarchy that\n>ensures only one record with a particular id can exist in all the tables\n>together. I can imagine wanting to do this with other unique constraints\n>also.\n>\n\nIMO the chances of us supporting global indexes with generic inheritance\nhierarchies are about zero. We don't even support creating \"partition\"\nindexes on those hierarchies ...\n\n>At present I don't actually use inheritance; instead I put triggers on the\n>child tables that do an insert on the parent table, which has the effect of\n>enforcing the uniqueness I want.\n\nDoes it? Are you sure it actually works in READ COMMITTED? What exactly\ndoes the trigger do?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 31 Oct 2019 20:21:55 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On 10/30/19 10:27, Andres Freund wrote:\n> On 2019-10-30 13:05:57 -0400, Tom Lane wrote:\n>> Peter Geoghegan <pg@bowt.ie> writes:\n>>> On Wed, Oct 30, 2019 at 9:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Well, the *effects* of the feature seem desirable, but that doesn't\n>>>> mean that we want an implementation that actually has a shared index.\n>>>> As soon as you do that, you've thrown away most of the benefits of\n>>>> having a partitioned data structure in the first place.\n>>\n>>> Right, but that's only the case for the global index. Global indexes\n>>> are useful when used judiciously.\n>>\n>> But ... why bother with partitioning then? To me, the main reasons\n>> why you might want a partitioned table are\n> \n> Quite commonly there's a lot of *other* indexes, often on a lot wider\n> data than the primary key, that don't need to be global. 
And whereas in\n> a lot of cases the primary key in a partitioned table has pretty good\n> locality (and thus will be mostly buffered IO), other indexes will often\n> not have that property (i.e. not have much correlation with table\n> position).\n\nI asked around a little bit and got some interesting responses. Thought\nI'd pass two of them along.\n\nOne person worked on a payments network (150,000+ installed readers),\nthe transaction table was date partitioned (1 per day) based on insert\ntimestamp, but lookups and updates were typically by the unique\ntransaction id. Oracle DB, they kept 180 daily partitions, several\nmillion rows per day. Transactions did not arrive in order, and could be\ndelayed if some part of the network was slow (they opted to allow the $2\ncharge rather than reject sales) and when the cash transaction records\nwere uploaded. Step one for their PG conversion created a read replica\nin PG 9.6, and the cost of doing the individual index lookups across 180\npartitions (and 180 indexes) was very high, so they stored max and min\ntxn id per partition and would generate a query with all the dates that\na txn id could have been in so that only a small number of partition\nindexes would be accessed. They wanted a global index on txn id for\nperformance, not for uniqueness – id generated on reader with guid-like\nsemantics.\n\nA second person worked on several large-scale systems and he relayed\nthat in some cases where they used Oracle global indexes on partitioned\ntables, they ended up deciding to reverse that decision as things scaled\nbecause of restrictive locking during partition maintenance (this is the\nexact issue Tom points out). So even on a database _with_ the option of\nusing a global index, they've sometimes opted for \"workaround\" design\npatterns instead:\n* To solve uniqueness, manage serialization at the application level.\nIsolate operations (e.g. 
using a queue) and use that to make sure that\ntwo sessions don’t try to insert the same record at the same time. From\nan RDBMS, this looks like a separate, smaller table that is being used\nto manage work activity.\n* To solve the additional IO for a global table scan ... We often don’t\nneed to do this because the load in this pattern is not typically highly\nconcurrent. If we are looking for higher concurrency, we can usually\nadd a hack/workaround that filters on a partition key to provide “pretty\ngood” pruning. The net result is that you get 2-3x the IO due to the\nlack of global index (same workaround as first story above).\n\nQuote: \"So ... I don’t actually like the idea of introducing this.\nUnless, someone can solve the ugly challenges we have had [around\npartition maintenance operations].\"\n\nI actually don't think those challenges are so un-solvable. I think that\nglobal indexes will be irrelevant to most workloads. I'm not entirely\nconvinced that they won't be useful for a few people with specific\nworkloads and large amounts of data in PostgreSQL where the benefits\noutweigh the costs. I definitely agree that care needs to be taken\naround index maintenance operations if there's an effort here.\n\n\n>> * ability to cheaply add and remove partitions, primarily so that\n>> you can cheaply do things like \"delete the oldest month's data\".\n> \n> You can still do that to some degree with a global index. Imagine\n> e.g. keeping a 'partition id' as a sort-of column in the global\n> index. That allows you to drop the partition, without having to\n> immediately rebuild the index, by checking the partition id against the\n> live partitions during lookup. So sure, your'e wasting space for a bit\n> in the global index, but it'll also be space that is likely to be fairly\n> efficiently reclaimed the next time vacuum touches the index. 
And if\n> not the global index can be rebuilt concurrently without blocking\n> writes.\n\nAnother idea might be to leverage PostgreSQL's partial indexes. If the\nindex is created \"where date>2020\" and you're dropping an index from\n2019 then you can entirely ignore the index. Not a panacea for every\nindex maintenance operation, but for the super-common case of dropping\nthe oldest partition you can now:\n\n1) create new index concurrently \"where dt>2020\"\n2) drop the old index\n3) drop the 2019 partition\n\ndoesn't solve world hunger but there's lots of benefit for such a simple\nhack.\n\n\n>> * ability to scale past our limits on the physical size of one table\n>> --- both the hard BlockNumber-based limit, and the performance\n>> constraints of e.g. vacuuming a very large table.\n> \n> For that to be a problem for a global index the global index (which will\n> often be something like two int4 or int8 columns) itself would need to\n> be above the block number based limit - which doesn't seem that close.\n> \n> WRT vacuuming - based on my observations the table itself isn't a\n> performance problem for vacuuming all that commonly anymore, it's the\n> associated index scans. So yea, that's a problem.\n\nI'm sure zheap will make all our dreams come true, right? :D\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n", "msg_date": "Mon, 25 Nov 2019 15:05:03 -0800", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On 11/25/19 15:05, Jeremy Schneider wrote:\n> ... the cost of doing the individual index lookups across 180\n> partitions (and 180 indexes) was very high, so they stored max and min\n> txn id per partition and would generate a query with all the dates that\n> a txn id could have been in so that only a small number of partition\n> indexes would be accessed. \n> \n> .. 
If we are looking for higher concurrency, we can usually\n> add a hack/workaround that filters on a partition key to provide “pretty\n> good” pruning. The net result is that you get 2-3x the IO due to the\n> lack of global index (same workaround as first story above).\n\nIs that basically like a global BRIN index with granularity at the\npartition level?\n\n-J\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n", "msg_date": "Mon, 25 Nov 2019 15:44:39 -0800", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Nov 25, 2019 at 03:44:39PM -0800, Jeremy Schneider wrote:\n> On 11/25/19 15:05, Jeremy Schneider wrote:\n> > ... the cost of doing the individual index lookups across 180\n> > partitions (and 180 indexes) was very high, so they stored max and min\n> > txn id per partition and would generate a query with all the dates that\n> > a txn id could have been in so that only a small number of partition\n> > indexes would be accessed. \n> > \n> > .. If we are looking for higher concurrency, we can usually\n> > add a hack/workaround that filters on a partition key to provide “pretty\n> > good” pruning. The net result is that you get 2-3x the IO due to the\n> > lack of global index (same workaround as first story above).\n> \n> Is that basically like a global BRIN index with granularity at the\n> partition level?\n\nExactly! :-)\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 18 Dec 2019 22:03:28 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On 19/12/19 4:03, Bruce Momjian wrote:\n> On Mon, Nov 25, 2019 at 03:44:39PM -0800, Jeremy Schneider wrote:\n>> On 11/25/19 15:05, Jeremy Schneider wrote:\n>>> ... the cost of doing the individual index lookups across 180\n>>> partitions (and 180 indexes) was very high, so they stored max and min\n>>> txn id per partition and would generate a query with all the dates that\n>>> a txn id could have been in so that only a small number of partition\n>>> indexes would be accessed.\n>>>\n>>> .. If we are looking for higher concurrency, we can usually\n>>> add a hack/workaround that filters on a partition key to provide “pretty\n>>> good” pruning. The net result is that you get 2-3x the IO due to the\n>>> lack of global index (same workaround as first story above).\n>> Is that basically like a global BRIN index with granularity at the\n>> partition level?\n> Exactly! :-)\n\nActually, one \"kind of\" BRIN index *per partitioned table* mapping (key \nrange) -> (partition oid)... and so concurrency doesn't need to be very \naffected.\n\n(we don't need to do things just like other RDBMS do, ya know... 
;)\n\n\nIIRC, this precise approach was suggested around 2016 when initially \ndiscussing the \"declarative partitioning\" which originated Postgres' \ncurrent partitioning scheme, in order to optimize partition pruning.\n\n\nJust my .02€\n\n     / J.L.\n\n\n\n\n", "msg_date": "Thu, 19 Dec 2019 09:48:40 +0100", "msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Thu, Dec 19, 2019 at 09:48:40AM +0100, Jose Luis Tallon wrote:\n> On 19/12/19 4:03, Bruce Momjian wrote:\n> > On Mon, Nov 25, 2019 at 03:44:39PM -0800, Jeremy Schneider wrote:\n> > > On 11/25/19 15:05, Jeremy Schneider wrote:\n> > > > ... the cost of doing the individual index lookups across 180\n> > > > partitions (and 180 indexes) was very high, so they stored max and min\n> > > > txn id per partition and would generate a query with all the dates that\n> > > > a txn id could have been in so that only a small number of partition\n> > > > indexes would be accessed.\n> > > > \n> > > > .. If we are looking for higher concurrency, we can usually\n> > > > add a hack/workaround that filters on a partition key to provide “pretty\n> > > > good” pruning. The net result is that you get 2-3x the IO due to the\n> > > > lack of global index (same workaround as first story above).\n> > > Is that basically like a global BRIN index with granularity at the\n> > > partition level?\n> > Exactly! :-)\n> \n> Actually, one \"kind of\" BRIN index *per partitioned table* mapping (key\n> range) -> (partition oid)... and so concurrency doesn't need to be very\n> affected.\n> \n> (we don't need to do things just like other RDBMS do, ya know... 
;)\n> \n> \n> IIRC, this precise approach was suggested around 2016 when initially\n> discussing the \"declarative partitioning\" which originated Postgres' current\n> partitioning scheme, in order to optimize partition pruning.\n\nRobert Haas identified two needs for global indexes:\n\n\thttps://www.postgresql.org/message-id/CA+Tgmob_J2M2+QKWrhg2NjQEkMEwZNTfd7a6Ubg34fJuZPkN2g@mail.gmail.com\n\t\n\tOne of the biggest reasons why people want it is to enforce uniqueness\n\tfor secondary keys - e.g. the employees table is partitioned by\n\temployee ID, but SSN should also be unique, at least among employees\n\tfor whom it's not NULL.\n\t\n\tBut people also want it for faster data retrieval: if you're looking\n\tfor a commonly-occurring value, an index per partition is fine. But if\n\tyou're looking for values that occur only once or a few times across\n\tthe whole hierarchy, an index scan per partition is very costly.\n\nI don't see lossy BRIN indexes helping with the uniqueness use-case, and\nI am not sure they would help with the rare case either. They would\nhelp for range-based partitions, but I thought our existing facilities\nworked in that case.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 19 Dec 2019 11:12:07 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On 12/19/19 08:12, Bruce Momjian wrote:\n> I don't see lossy BRIN indexes helping with the uniqueness use-case, and\n> I am not sure they would help with the rare case either. They would\n> help for range-based partitions, but I thought our existing facilities\n> worked in that case.\n\nCorrelated data. The existing facilities work if the filtering column\nis exactly the same as the partition column. 
But it's not at all\nuncommon to have other columns with correlated data, perhaps the most\nobvious of which is timeseries tables with many date columns of various\ndefinitions (row first update, row latest update, invoice date, payment\ndate, process date, ship date, etc).\n\nWhat if you could use *two* indexes in a single execution plan? Use the\nglobal BRIN to narrow down to 2 or 3 out of a hundred or more\npartitions, then use local indexes to find specific rows in the\npartitions of interest? That might work, without being too overly\ncomplicated.\n\n-J\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n", "msg_date": "Thu, 19 Dec 2019 11:28:55 -0800", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Thu, Dec 19, 2019 at 11:28:55AM -0800, Jeremy Schneider wrote:\n> On 12/19/19 08:12, Bruce Momjian wrote:\n> > I don't see lossy BRIN indexes helping with the uniqueness use-case, and\n> > I am not sure they would help with the rare case either. They would\n> > help for range-based partitions, but I thought our existing facilities\n> > worked in that case.\n> \n> Correlated data. The existing facilities work if the filtering column\n> is exactly the same as the partition column. But it's not at all\n> uncommon to have other columns with correlated data, perhaps the most\n> obvious of which is timeseries tables with many date columns of various\n> definitions (row first update, row latest update, invoice date, payment\n> date, process date, ship date, etc).\n> \n> What if you could use *two* indexes in a single execution plan? Use the\n> global BRIN to narrow down to 2 or 3 out of a hundred or more\n> partitions, then use local indexes to find specific rows in the\n> partitions of interest? 
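[Editor's note: the coarse pruning workaround that recurs in this thread, keeping a per-partition (min, max) summary of a secondary key, effectively a partition-granularity "global BRIN", can be sketched as below. The function name and data layout are invented for illustration.]

```python
# Toy sketch: summarize a secondary key per partition, prune to the few
# partitions whose range can contain the value, then probe only those
# partitions' local indexes.

def prune_partitions(summaries, value):
    """summaries: dict of partition name -> (min_key, max_key)."""
    return sorted(p for p, (lo, hi) in summaries.items() if lo <= value <= hi)

# Out-of-order arrival means the per-partition ranges may overlap, so
# pruning is lossy but correct: it may keep an extra partition, never
# drop one that could hold the value.
summaries = {
    "txn_2019_01": (1000, 1999),
    "txn_2019_02": (1800, 2899),   # overlaps the previous range
    "txn_2019_03": (2900, 3999),
}
```

A lookup then scans only the surviving partitions' local indexes, which is the "pretty good pruning" trade-off described earlier in the thread.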
That might work, without being too overly\n> complicated.\n\nNo, that is very interesting --- having secondary indexes for\npartitioned tables that trim most partitions. Would index lookups on\neach partition index be very slow? BRIN indexes? I am assuming global\nindexes would only avoid such lookups.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 19 Dec 2019 21:34:33 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "\n> On Oct 30, 2019, at 12:05 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> But ... why bother with partitioning then? To me, the main reasons\n> why you might want a partitioned table are\n> \n> * ability to cheaply add and remove partitions, primarily so that\n> you can cheaply do things like \"delete the oldest month's data\".\n> \n> * ability to scale past our limits on the physical size of one table\n> --- both the hard BlockNumber-based limit, and the performance\n> constraints of e.g. vacuuming a very large table.\n\nA third case is data locality. In that case global indexes would be useful for queries that do not correlate will with hot data.\n\n> Both of those go out the window with a global index. So you might\n> as well just have one table and forget all the overhead.\n\nPartition pruning could still be valuable even with global indexes, provided that we teach vacuum how to clean up tuples in an index that point at a partition that has been deleted.\n\n", "msg_date": "Tue, 7 Jan 2020 22:55:13 +0000", "msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "I've been following this topic for a long time. 
It's been a year since the last response.\nIt was clear that our customers wanted this feature as well; a large number of them mentioned it.\n\nSo, I wish the whole feature to mature as soon as possible.\nI summarized the scheme mentioned in the thread and completed a POC patch (based on PG 13).\n\nNext, I encountered some difficulties when implementing the DDL of the partition table with a global index, and I hope to get some help from the community.\n\nHere are some details of what has been implemented:\n1. Definition of global index\nThe INCLUDE keyword is used to include the tableoid of the partitioned table.\n\n2. Maintenance of global index by partition table DML\nBoth INSERT and UPDATE of a partitioned table maintain the global index.\n\n3. Global index scan\nPlanner: processes predicate conditions on the primary partition, generating paths and plans for the global index.\nExecutor: an index scan fetches the index tuple, reads the tableoid from it, and verifies the visibility of the data in the partition.\n\n4. 
VACUUM of the partitioned table maintains the global index\nWhen each child table is vacuumed, it also cleans its own garbage data out of the global index.\n\nWith the above function points completed, the global index can be used, as long as no partitioned-table DDL is involved.\n\nDemo:\n-- use pgbench to create the test partitioned table\npgbench -i -s 1000 --partitions=6 --partition-method=range\n\n-- create a global index on bid; bid is not the partition key\nCREATE INDEX idx_pgbench_accounts_bid on pgbench_accounts(bid) global;\n\n-- check global index status\nselect * , sum(alivetup) over()as sum_alivetup, sum(deadtup) over() as sum_deadtup from bt_get_global_index_status('idx_pgbench_accounts_bid');\n relname | alivetup | deadtup | sum_alivetup | sum_deadtup \n--------------------+----------+---------+--------------+-------------\n pgbench_accounts_1 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_2 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_3 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_4 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_5 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_6 | 16666665 | 0 | 100000000 | 0\n(6 rows)\n\n-- run pgbench for a while\npgbench -M prepared -j 32 -c 32 -T 60 -P1\n\n\n-- check the global index again; it has bloated\npostgres=# select * , sum(alivetup) over()as sum_alivetup, sum(deadtup) over() as sum_deadtup from bt_get_global_index_status('idx_pgbench_accounts_bid');\n relname | alivetup | deadtup | sum_alivetup | sum_deadtup \n--------------------+----------+---------+--------------+-------------\n pgbench_accounts_1 | 16717733 | 0 | 100306102 | 0\n pgbench_accounts_2 | 16717409 | 0 | 100306102 | 0\n pgbench_accounts_3 | 16717540 | 0 | 100306102 | 0\n pgbench_accounts_4 | 16717972 | 0 | 100306102 | 0\n pgbench_accounts_5 | 16717578 | 0 | 100306102 | 0\n pgbench_accounts_6 | 16717870 | 0 | 100306102 | 0\n(6 rows)\n\n-- vacuum the partitioned table\nvacuum pgbench_accounts;\n\n-- garbage is collected; the global index still looks correct and valid\npostgres=# select * , 
sum(alivetup) over()as sum_alivetup, sum(deadtup) over() as sum_deadtup from bt_get_global_index_status('idx_pgbench_accounts_bid');\n relname | alivetup | deadtup | sum_alivetup | sum_deadtup \n--------------------+----------+---------+--------------+-------------\n pgbench_accounts_1 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_2 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_3 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_4 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_5 | 16666667 | 0 | 100000000 | 0\n pgbench_accounts_6 | 16666665 | 0 | 100000000 | 0\n(6 rows)\n\n—-\n\n—- global index scan works well\npostgres=# select tableoid ,count(*) from pgbench_accounts where bid = 834 group by tableoid;\n tableoid | count \n----------+-------\n 16455 | 33335\n 16458 | 66665\n(2 rows)\n\npostgres=# explain select tableoid ,count(*) from pgbench_accounts where bid = 834 group by tableoid;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=2945.23..2945.24 rows=1 width=12)\n Group Key: pgbench_accounts.tableoid\n -> Global Index Scan using idx_pgbench_accounts_bid on pgbench_accounts (cost=0.50..10.18 rows=587011 width=4)\n Index Cond: (bid = 834)\n(4 rows)\n\n\nThe following is how to implement DDL of global index. 
That is, how partitioned-table DDL should maintain the global index.\nThis seems to be more difficult than the previous work.\n\nI understand there are four main parts:\n\n1. Build the global index, or reindex it, especially in concurrent mode\n\n2. Detach partition\nWould it be a good idea to flag the detached partition in the global index and let VACUUM clean up its index entries later?\n\n3. Attach partition\nAttaching a new empty partition is easy, but attaching one that already contains data is not.\nIf there is a unique-key conflict, do we slowly clean up the garbage, or invalidate the entire index?\n\n4. Truncate partition with global index\nDo we need to process the heap and index data separately, in multiple transactions?\nThat would lose the ability to roll back the TRUNCATE operation.\nIs it worth it?\n\n\nLooking forward to your feedback.\n\nThanks!\n\nWenjing", "msg_date": "Thu, 7 Jan 2021 17:44:01 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Thu, Jan 7, 2021 at 05:44:01PM +0800, 曾文旌 wrote:\n> I've been following this topic for a long time. It's been a year since the last response.\n> It was clear that our customers wanted this feature as well, and a large number of them mentioned it.\n> \n> So, I wish the whole feature to mature as soon as possible.\n> I summarized the scheme mentioned in the email and completed the POC patch(base on PG_13).\n\nI think you need to address the items mentioned in this blog, and the\nemail link it mentions:\n\n\thttps://momjian.us/main/blogs/pgblog/2020.html#July_1_2020\n\nI am not clear this is a feature we will want. Yes, people ask for it,\nbut if the experience will be bad for them and they will regret using\nit, I am not sure we want it. 
Of course, if you code it up and we get\na good user experience, we would want it --- I am just saying it is not\nclear right now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 7 Jan 2021 09:16:20 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Thu, Jan 7, 2021 at 4:44 AM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n> I've been following this topic for a long time. It's been a year since the last response.\n> It was clear that our customers wanted this feature as well, and a large number of them mentioned it.\n>\n> So, I wish the whole feature to mature as soon as possible.\n> I summarized the scheme mentioned in the email and completed the POC patch(base on PG_13).\n\nYou need to summarize the basic design choices you've made here. Like,\nwhat did you do about the fact that TIDs have to be 6 bytes?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Jan 2021 10:04:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "> 2021年1月7日 22:16,Bruce Momjian <bruce@momjian.us> 写道:\n> \n> On Thu, Jan 7, 2021 at 05:44:01PM +0800, 曾文旌 wrote:\n>> I've been following this topic for a long time. 
It's been a year since the last response.\n>> It was clear that our customers wanted this feature as well, and a large number of them mentioned it.\n>> \n>> So, I wish the whole feature to mature as soon as possible.\n>> I summarized the scheme mentioned in the email and completed the POC patch(base on PG_13).\n> \n> I think you need to address the items mentioned in this blog, and the\n> email link it mentions:\n> \n> \thttps://momjian.us/main/blogs/pgblog/2020.html#July_1_2020\n\nThank you for your reply.\nI read your blog and it helped me a lot.\n\nThe blog mentions a specific problem: \"A large global index might also reintroduce problems that prompted the creation of partitioning in the first place. \"\nI don't quite understand, could you give some specific information?\n\nIn addition you mentioned: \"It is still unclear if these use-cases justify the architectural changes needed to enable global indexes.\"\nPlease also describe the problems you see, I will confirm each specific issue one by one.\n\n\nThanks\n\nWenjing\n\n\n> \n> I am not clear this is a feature we will want. Yes, people ask for it,\n> but if the experience will be bad for them and they will regret using\n> it, I am not sure we want it. Of course, if you code it up and we get\n> a good user experience, we would want it --- I am just saying it is not\n> clear right now.\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EnterpriseDB https://enterprisedb.com\n> \n> The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Fri, 8 Jan 2021 11:26:48 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Fri, Jan 8, 2021 at 4:02 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n>\n> > 2021年1月7日 22:16,Bruce Momjian <bruce@momjian.us> 写道:\n> >\n> > On Thu, Jan 7, 2021 at 05:44:01PM +0800, 曾文旌 wrote:\n> >> I've been following this topic for a long time. 
It's been a year since the last response.\n> >> It was clear that our customers wanted this feature as well, and a large number of them mentioned it.\n> >>\n> >> So, I wish the whole feature to mature as soon as possible.\n> >> I summarized the scheme mentioned in the email and completed the POC patch(base on PG_13).\n> >\n> > I think you need to address the items mentioned in this blog, and the\n> > email link it mentions:\n> >\n> > https://momjian.us/main/blogs/pgblog/2020.html#July_1_2020\n>\n> Thank you for your reply.\n> I read your blog and it helped me a lot.\n>\n> The blog mentions a specific problem: \"A large global index might also reintroduce problems that prompted the creation of partitioning in the first place. \"\n> I don't quite understand, could you give some specific information?\n>\n> In addition you mentioned: \"It is still unclear if these use-cases justify the architectural changes needed to enable global indexes.\"\n> Please also describe the problems you see, I will confirm each specific issue one by one.\n\nOne example is date partitioning. People frequently need to store\nonly the most recent data. For instance doing a monthly partitioning\nand dropping the oldest partition every month once you hit the wanted\nretention is very efficient for that use case, as it should be almost\ninstant (provided that you can acquire the necessary locks\nimmediately). But if you have a global index, you basically lose the\nadvantage of partitioning as it'll require heavy changes on that\nindex.\n\n\n", "msg_date": "Fri, 8 Jan 2021 16:26:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Fri, Jan 8, 2021 at 11:26:48AM +0800, 曾文旌 wrote:\n> > On Thu, Jan 7, 2021 at 05:44:01PM +0800, 曾文旌 wrote:\n> >> I've been following this topic for a long time. 
It's been a year since the last response.\n> >> It was clear that our customers wanted this feature as well, and a large number of them mentioned it.\n> >> \n> >> So, I wish the whole feature to mature as soon as possible.\n> >> I summarized the scheme mentioned in the email and completed the POC patch(base on PG_13).\n> > \n> > I think you need to address the items mentioned in this blog, and the\n> > email link it mentions:\n> > \n> > \thttps://momjian.us/main/blogs/pgblog/2020.html#July_1_2020\n> \n> Thank you for your reply.\n> I read your blog and it helped me a lot.\n> \n> The blog mentions a specific problem: \"A large global index might also reintroduce problems that prompted the creation of partitioning in the first place. \"\n> I don't quite understand, could you give some specific information?\n\nWell, if you created partitions, you probably did so because:\n\n1. heap files are smaller, allowing for more targeted sequential scans\n2. smaller indexes\n3. the ability to easily drop tables/indexes that are too old\n\nIf you have global indexes, #1 is the same, but #2 is not longer true,\nand for #3, you can drop the heap but the index entries still exist in\nthe global index and must be removed.\n\nSo, if you created partitions for one of the three reasons, once you\nhave global indexes, some of those advantage of partitioning are no\nlonger true. I am sure there are some workloads where the advantages of\npartitioning, minus the advantages lost when using global indexes, are\nuseful, but are there enough of them to make the feature useful? I\ndon't know.\n\n> In addition you mentioned: \"It is still unclear if these use-cases justify the architectural changes needed to enable global indexes.\"\n> Please also describe the problems you see, I will confirm each specific issue one by one.\n\nWell, the email thread I linked to has a lot of them, but the\nfundamental issue is that you have to break the logic that a single\nindex serves a single heap file. 
Considering what I said above, is\nthere enough usefulness to warrant such an architectural change?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 8 Jan 2021 10:50:40 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "> 2021年1月8日 16:26,Julien Rouhaud <rjuju123@gmail.com> 写道:\n> \n> On Fri, Jan 8, 2021 at 4:02 PM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n>> \n>>> 2021年1月7日 22:16,Bruce Momjian <bruce@momjian.us> 写道:\n>>> \n>>> On Thu, Jan 7, 2021 at 05:44:01PM +0800, 曾文旌 wrote:\n>>>> I've been following this topic for a long time. It's been a year since the last response.\n>>>> It was clear that our customers wanted this feature as well, and a large number of them mentioned it.\n>>>> \n>>>> So, I wish the whole feature to mature as soon as possible.\n>>>> I summarized the scheme mentioned in the email and completed the POC patch(base on PG_13).\n>>> \n>>> I think you need to address the items mentioned in this blog, and the\n>>> email link it mentions:\n>>> \n>>> https://momjian.us/main/blogs/pgblog/2020.html#July_1_2020\n>> \n>> Thank you for your reply.\n>> I read your blog and it helped me a lot.\n>> \n>> The blog mentions a specific problem: \"A large global index might also reintroduce problems that prompted the creation of partitioning in the first place. \"\n>> I don't quite understand, could you give some specific information?\n>> \n>> In addition you mentioned: \"It is still unclear if these use-cases justify the architectural changes needed to enable global indexes.\"\n>> Please also describe the problems you see, I will confirm each specific issue one by one.\n> \n> One example is date partitioning. People frequently need to store\n> only the most recent data. 
For instance doing a monthly partitioning\n> and dropping the oldest partition every month once you hit the wanted\n> retention is very efficient for that use case, as it should be almost\n> instant (provided that you can acquire the necessary locks\n> immediately). But if you have a global index, you basically lose the\n> advantage of partitioning as it'll require heavy changes on that\n> index.\nIf the global index removes all the major benefits of partitioned tables, then it is not worth having it.\n\nThis is indeed a typical scenario for a partitioned table.\nthere are two basic operations\n1) Monthly DETACH old child table\n2) Monthly ATTACH new child table\n\nFor 1) The DETACH old child table can be finished immediately, global index can be kept valid after DETACH is completed, and the cleanup of garbage data in global index can be deferred to VACUUM.\nThis is similar to the global index optimization done by Oracle12c.\nFor 2) ATTACH new empty child table can also be completed immediately.\nIf this is the case, many of the advantages of partitioned tables will be retained, while the advantages of global indexes will be gained.", "msg_date": "Mon, 11 Jan 2021 19:40:18 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "> 2021年1月7日 23:04,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Thu, Jan 7, 2021 at 4:44 AM 曾文旌 <wenjing.zwj@alibaba-inc.com> wrote:\n>> I've been following this topic for a long time. It's been a year since the last response.\n>> It was clear that our customers wanted this feature as well, and a large number of them mentioned it.\n>> \n>> So, I wish the whole feature to mature as soon as possible.\n>> I summarized the scheme mentioned in the email and completed the POC patch(base on PG_13).\n> \n> You need to summarize the basic design choices you've made here. 
Like,\n> what did you do about the fact that TIDs have to be 6 bytes?\n\nThese are the basic choices, and most of them come from discussions in previous emails.\n\n1. Definition of global index\nObviously, we need to expand the index address info (CTID) to include child-table info in the GlobalIndexTuple.\n\n1.1 As mentioned in the previous email, Bruce suggested using the OID\ninstead of the relfilenode, as relfilenodes can be duplicated across tablespaces.\nI agree with that.\n\n1.2 And Heikki suggested including heap-specific information using the INCLUDE keyword, so that the heap information\nis stored with each index entry as data.\n\nSo, in the POC stage, I chose to use the INCLUDE keyword to include the tableoid in the global index. This adds 4 bytes to each IndexTuple.\n\nConsidering that a single partitioned table should not exceed 65535 child tables, perhaps two bytes for tracking which child table the data belongs to would be sufficient.\n\n2. Maintenance of the global index by partitioned-table DML\nDML on each child table of the partitioned table needs to maintain the global index on the partitioned table.\n\n3. Global index scan\nPlanner:\nProcesses predicates on the parent table, generating paths and plans for the global index.\nThe cost model of the global index may need attention: the global index and the local indexes should each be chosen in their respective advantageous scenarios.\n\nExecutor:\nThe index scan fetches an index tuple, extracts the tableoid from it, and verifies the visibility of the data in the child table.\nIf a child table has been detached, its index entries are ignored during the index scan until VACUUM finishes cleaning up the global index.\n\n4. VACUUM of the partitioned table maintains the global index\nOld data in the global index also needs to be cleaned up, and VACUUM is the right place for it.\nWhen a child table is vacuumed, in addition to vacuuming its own indexes, it also vacuums the global index on the partitioned table.\n\n5. 
Other\nThe global index covers all of the child tables, which makes it large, with many B-tree levels.\nFollowing this technical route, partitioned global indexes are a further target.\n\nThis is my basic idea for implementing the global index.\nLooking forward to your feedback.\n\nThanks!\n\nWenjing\n\n> \n> -- \n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Mon, 11 Jan 2021 21:00:45 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Jan 11, 2021 at 07:40:18PM +0800, 曾文旌 wrote:\n> >> In addition you mentioned: \"It is still unclear if these use-cases justify the architectural changes needed to enable global indexes.\"\n> >> Please also describe the problems you see, I will confirm each specific issue one by one.\n> > \n> > One example is date partitioning. People frequently need to store\n> > only the most recent data. For instance doing a monthly partitioning\n> > and dropping the oldest partition every month once you hit the wanted\n> > retention is very efficient for that use case, as it should be almost\n> > instant (provided that you can acquire the necessary locks\n> > immediately). 
But if you have a global index, you basically lose the\n> > advantage of partitioning as it'll require heavy changes on that\n> > index.\n> If the global index removes all the major benefits of partitioned tables, then it is not worth having it.\n> \n> This is indeed a typical scenario for a partitioned table.\n> there are two basic operations\n> 1) Monthly DETACH old child table\n> 2) Monthly ATTACH new child table\n> \n> For 1) The DETACH old child table can be finished immediately, global index can be kept valid after DETACH is completed, and the cleanup of garbage data in global index can be deferred to VACUUM.\n> This is similar to the global index optimization done by Oracle12c.\n> For 2) ATTACH new empty child table can also be completed immediately.\n> If this is the case, many of the advantages of partitioned tables will be retained, while the advantages of global indexes will be gained.\n\nYes, we can keep the index rows for the deleted partition and clean them\nup later, but what is the advantage of partitioning then? Just heap\ndeletion quickly? 
Is that enough of a value?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 11 Jan 2021 12:46:49 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Jan 11, 2021 at 07:40:18PM +0800, 曾文旌 wrote:\n>> This is indeed a typical scenario for a partitioned table.\n>> there are two basic operations\n>> 1) Monthly DETACH old child table\n>> 2) Monthly ATTACH new child table\n>> \n>> For 1) The DETACH old child table can be finished immediately, global index can be kept valid after DETACH is completed, and the cleanup of garbage data in global index can be deferred to VACUUM.\n\n> Yes, we can keep the index rows for the deleted partition and clean them\n> up later, but what is the advantage of partitioning then? Just heap\n> deletion quickly? Is that enough of a value?\n\nMore to the point, you still have a massive index cleanup operation to do.\nDeferred or not, that's going to take a lot of cycles, and it will leave\nyou with a bloated global index. 
I find it hard to believe that this\napproach will seem like an improvement over doing partitioning the way\nwe do it now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 Jan 2021 13:34:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Jan 11, 2021 at 12:46 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > For 1) The DETACH old child table can be finished immediately, global index can be kept valid after DETACH is completed, and the cleanup of garbage data in global index can be deferred to VACUUM.\n> > This is similar to the global index optimization done by Oracle12c.\n> > For 2) ATTACH new empty child table can also be completed immediately.\n> > If this is the case, many of the advantages of partitioned tables will be retained, while the advantages of global indexes will be gained.\n>\n> Yes, we can keep the index rows for the deleted partition and clean them\n> up later, but what is the advantage of partitioning then? Just heap\n> deletion quickly? Is that enough of a value?\n\nI actually think the idea of lazily deleting the index entries is\npretty good, but it won't work if the way the global index is\nimplemented is by adding a tableoid column. Because then, I might\ndetach a partition and later reattach it and the old index entries are\nstill there but the table contents might have changed. Worse yet, the\ntable might be dropped and the table OID reused for a completely\nunrelated table with completely unrelated contents, which could then\nbe attached as a new partition.\n\nOne of the big selling points of global indexes is that they allow you\nto enforce uniqueness on a column unrelated to the partitioning\ncolumn. Another is that you can look up a value by doing a single\nindex scan on the global index rather than an index scan per\npartition. 
Those things are no less valuable for performing index\ndeletion lazily.\n\nHowever, there is a VACUUM amplification effect to worry about here\nwhich Wenjing seems not to be considering. Suppose I have a table\nwhich is not partitioned and it is 1TB in size with an index that is\n128GB in size. To vacuum the table, I need to do 1TB + 128GB of I/O.\nNow, suppose I now partition the table into 1024 partitions each with\nits own local index. Each partition is 1GB in size and the index on\neach partition is 128MB in size. To vacuum an individual partition\nrequires 1GB + 128MB of I/O, so to vacuum all the partitions requires\nthe same amount of total I/O as before. But, now suppose that I have a\nsingle global index instead of a local index per partition. First, how\nbig will that index be? It will not be 128GB, but somewhat bigger,\nbecause it needs extra space for every indexed tuple. Let's say 140GB.\nFurthermore, it will need to be vacuumed whenever any child is\nvacuumed, because it contains some index entries from every child. 
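To put numbers on that, here is a rough back-of-envelope sketch in Python (a sketch only, using the illustrative sizes assumed above, not measurements):

```python
# Back-of-envelope vacuum I/O for the three layouts described above.
# All sizes in GB; these are the illustrative sizes assumed in the text.
HEAP_GB = 1024           # 1TB of heap in total
PLAIN_INDEX_GB = 128     # index on the unpartitioned table
GLOBAL_INDEX_GB = 140    # global index is somewhat bigger (wider tuples)
PARTS = 1024             # partitions: 1GB heap + 128MB local index each

# Unpartitioned: one pass over the heap plus one over the index.
unpartitioned = HEAP_GB + PLAIN_INDEX_GB                      # 1152 GB

# Local indexes: per-partition work sums back to the same total.
local = PARTS * (HEAP_GB / PARTS + PLAIN_INDEX_GB / PARTS)    # 1152.0 GB

# Global index: vacuuming any one child also means vacuuming the whole
# global index, so the 140GB index is scanned once per partition.
global_index = PARTS * (HEAP_GB / PARTS + GLOBAL_INDEX_GB)    # 144384 GB

print(unpartitioned, local, global_index / 1024)  # 1152 1152.0 141.0
```

Under these assumed sizes the global-index layout costs about 141TB versus roughly 1.125TB for the other two; the multiplier is simply the global index size times the number of per-partition vacuums.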
So\nthe total I/O to vacuum all partitions is now 1GB * 1024 + 140GB *\n1024 = 141TB, which is a heck of a lot worse than the 1.125TB I\nrequired with the unpartitioned table or the locally partitioned\ntable.\n\nThat's not necessarily a death sentence for every use case, but it's\ngoing to be pretty bad for tables that are big and heavily updated.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Jan 2021 13:37:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Jan 11, 2021 at 10:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I actually think the idea of lazily deleting the index entries is\n> pretty good, but it won't work if the way the global index is\n> implemented is by adding a tableoid column.\n\nPerhaps there is an opportunity to apply some of the infrastructure\nthat Masahiko Sawada has been working on, that makes VACUUM more\nincremental in certain specific scenarios:\n\nhttps://postgr.es/m/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com\n\nI think that VACUUM can be taught to skip the ambulkdelete() step for\nindexes in many common scenarios. Global indexes might be one place in\nwhich that's almost essential.\n\n> However, there is a VACUUM amplification effect to worry about here\n> which Wenjing seems not to be considering.\n\n> That's not necessarily a death sentence for every use case, but it's\n> going to be pretty bad for tables that are big and heavily updated.\n\nThe main way in which index vacuuming is currently a death sentence\nfor this design (as you put it) is that it's an all-or-nothing thing.\nPresumably you'll need to VACUUM the entire global index for each\npartition that receives even one UPDATE. That seems pretty extreme,\nand probably not acceptable. 
In a way it's not really a new problem,\nbut the fact remains: it makes global indexes much less valuable.\n\nHowever, it probably would be okay if a global index feature performed\npoorly in scenarios where partitions get lots of UPDATEs that produce\nlots of index bloat and cause lots of LP_DEAD line pointers to\naccumulate in heap pages. It is probably reasonable to just expect\nusers to not do that if they want to get acceptable performance while\nusing a global index. Especially since it probably is not so bad if\nthe index bloat situation gets out of hand for just one of the\npartitions (say the most recent one) every once in a while. You at\nleast don't have the same crazy I/O multiplier effect that you\ndescribed.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 11 Jan 2021 11:01:20 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Jan 11, 2021 at 01:37:02PM -0500, Robert Haas wrote:\n> However, there is a VACUUM amplification effect to worry about here\n...\n> That's not necessarily a death sentence for every use case, but it's\n> going to be pretty bad for tables that are big and heavily updated.\n\nYeah, I had not really gotten that far in my thinking, but someone is\ngoing to need to create a POC and then we need to test it to see if it\noffers a reasonably valuable feature.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 11 Jan 2021 14:23:13 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Jan 11, 2021 at 11:01:20AM -0800, Peter Geoghegan wrote:\n> However, it probably would be okay if a global index feature performed\n> poorly in scenarios where partitions get lots of UPDATEs that produce\n> lots of index bloat and 
cause lots of LP_DEAD line pointers to\n> accumulate in heap pages. It is probably reasonable to just expect\n> users to not do that if they want to get acceptable performance while\n> using a global index. Especially since it probably is not so bad if\n> the index bloat situation gets out of hand for just one of the\n> partitions (say the most recent one) every once in a while. You at\n> least don't have the same crazy I/O multiplier effect that you\n> described.\n\nOnce you layer on all the places a global index will be worse than just\ncreating a single large table, or a partitioned table with an index per\nchild, there might not be much usefulness left. A POC patch might tell\nus that, and might allow us to mark it as \"not wanted\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 11 Jan 2021 14:25:55 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Jan 11, 2021 at 11:25 AM Bruce Momjian <bruce@momjian.us> wrote:\n> Once you layer on all the places a global index will be worse than just\n> creating a single large table, or a partitioned table with an index per\n> child, there might not be much usefulness left. A POC patch might tell\n> us that, and might allow us to mark it as \"not wanted\".\n\nI'm confused. Of course it's true to some degree that having a global\nindex \"defeats the purpose\" of having a partitioned table. But only to\na degree. And for some users it will make the difference between using\npartitioning and not using partitioning -- they simply won't be able\nto tolerate not having it available (e.g. 
because of a requirement for\na unique constraint that does not cover the partitioning key).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 11 Jan 2021 12:05:43 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "On Mon, Jan 11, 2021 at 12:05:43PM -0800, Peter Geoghegan wrote:\n> On Mon, Jan 11, 2021 at 11:25 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > Once you layer on all the places a global index will be worse than just\n> > creating a single large table, or a partitioned table with an index per\n> > child, there might not be much usefulness left. A POC patch might tell\n> > us that, and might allow us to mark it as \"not wanted\".\n> \n> I'm confused. Of course it's true to some degree that having a global\n> index \"defeats the purpose\" of having a partitioned table. But only to\n> a degree. And for some users it will make the difference between using\n> partitioning and not using partitioning -- they simply won't be able\n> to tolerate not having it available (e.g. because of a requirement for\n> a unique constraint that does not cover the partitioning key).\n\nYes, that is a good point. For those cases, I think we need to look at\nthe code complexity/overhead of supporting that feature. 
There are\ngoing to be a few cases it is a win, but will the code complexity be\nworth it?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 11 Jan 2021 15:24:06 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" }, { "msg_contents": "> 2021年1月12日 02:37,Robert Haas <robertmhaas@gmail.com> 写道:\n> \n> On Mon, Jan 11, 2021 at 12:46 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>> For 1) The DETACH old child table can be finished immediately, global index can be kept valid after DETACH is completed, and the cleanup of garbage data in global index can be deferred to VACUUM.\n>>> This is similar to the global index optimization done by Oracle12c.\n>>> For 2) ATTACH new empty child table can also be completed immediately.\n>>> If this is the case, many of the advantages of partitioned tables will be retained, while the advantages of global indexes will be gained.\n>> \n>> Yes, we can keep the index rows for the deleted partition and clean them\n>> up later, but what is the advantage of partitioning then? Just heap\n>> deletion quickly? Is that enough of a value?\n> \n> I actually think the idea of lazily deleting the index entries is\n> pretty good, but it won't work if the way the global index is\n> implemented is by adding a tableoid column. Because then, I might\n> detach a partition and later reattach it and the old index entries are\n> still there but the table contents might have changed. Worse yet, the\n> table might be dropped and the table OID reused for a completely\n> unrelated table with completely unrelated contents, which could then\n> be attached as a new partition.\n> \n> One of the big selling points of global indexes is that they allow you\n> to enforce uniqueness on a column unrelated to the partitioning\n> column. 
Another is that you can look up a value by doing a single\n> index scan on the global index rather than an index scan per\n> partition. Those things are no less valuable for performing index\n> deletion lazily.\n> \n> However, there is a VACUUM amplification effect to worry about here\n> which Wenjing seems not to be considering. Suppose I have a table\n> which is not partitioned and it is 1TB in size with an index that is\n> 128GB in size. To vacuum the table, I need to do 1TB + 128GB of I/O.\n> Now, suppose I now partition the table into 1024 partitions each with\n> its own local index. Each partition is 1GB in size and the index on\n> each partition is 128MB in size. To vacuum an individual partition\n> requires 1GB + 128MB of I/O, so to vacuum all the partitions requires\n> the same amount of total I/O as before. But, now suppose that I have a\n> single global index instead of a local index per partition. First, how\n> big will that index be? It will not be 128GB, but somewhat bigger,\n> because it needs extra space for every indexed tuple. Let's say 140GB.\n> Furthermore, it will need to be vacuumed whenever any child is\n> vacuumed, because it contains some index entries from every child. 
So\n> the total I/O to vacuum all partitions is now 1GB * 1024 + 140GB *\n> 1024 = 141TB, which is a heck of a lot worse than the 1.125TB I\n> required with the unpartitioned table or the locally partitioned\n> table.\nThank you for pointing this out.\nIt seems that some optimization can be done, but there is no good way\nto completely eliminate the vacuum amplification effect of the global index.\nMaybe we can only count on Zheap, which doesn't need to do VACUUM.\n\n\n\n> \n> That's not necessarily a death sentence for every use case, but it's\n> going to be pretty bad for tables that are big and heavily updated.\n> \n> -- \n> Robert Haas\n> EDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Jan 2021 18:00:40 +0800", "msg_from": "=?utf-8?B?5pu+5paH5peM?= <wenjing.zwj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Global Index" } ]
[ { "msg_contents": "Please consider this scenario (race conditions):\n\n1. FlushBuffer() has written the buffer but hasn't yet managed to clear the\nBM_DIRTY flag (however BM_JUST_DIRTIED could be cleared by now).\n\n2. Another backend modified a hint bit and called MarkBufferDirtyHint().\n\n3. In MarkBufferDirtyHint(), if XLogHintBitIsNeeded() evaluates to true\n(e.g. due to checksums enabled), new LSN is computed, however it's not\nassigned to the page because the buffer is still dirty:\n\n\tif (!(buf_state & BM_DIRTY))\n\t{\n\t\t...\n\n\t\tif (!XLogRecPtrIsInvalid(lsn))\n\t\t\tPageSetLSN(page, lsn);\n\t}\n\n4. MarkBufferDirtyHint() completes.\n\n5. In the first session, FlushBuffer()->TerminateBufferIO() will not clear\nBM_DIRTY because MarkBufferDirtyHint() has eventually set\nBM_JUST_DIRTIED. Thus the hint bit change itself will be written by the next\ncall of FlushBuffer(). However page LSN is hasn't been updated so the\nrequirement that WAL must be flushed first is not met.\n\nI think that PageSetLSN() should be called regardless BM_DIRTY. Do I miss any\nsubtle detail?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 30 Oct 2019 14:44:18 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "On Wed, Oct 30, 2019 at 02:44:18PM +0100, Antonin Houska wrote:\n>Please consider this scenario (race conditions):\n>\n>1. FlushBuffer() has written the buffer but hasn't yet managed to clear the\n>BM_DIRTY flag (however BM_JUST_DIRTIED could be cleared by now).\n>\n>2. Another backend modified a hint bit and called MarkBufferDirtyHint().\n>\n>3. In MarkBufferDirtyHint(), if XLogHintBitIsNeeded() evaluates to true\n>(e.g. 
due to checksums enabled), new LSN is computed, however it's not\n>assigned to the page because the buffer is still dirty:\n>\n>\tif (!(buf_state & BM_DIRTY))\n>\t{\n>\t\t...\n>\n>\t\tif (!XLogRecPtrIsInvalid(lsn))\n>\t\t\tPageSetLSN(page, lsn);\n>\t}\n>\n>4. MarkBufferDirtyHint() completes.\n>\n>5. In the first session, FlushBuffer()->TerminateBufferIO() will not clear\n>BM_DIRTY because MarkBufferDirtyHint() has eventually set\n>BM_JUST_DIRTIED. Thus the hint bit change itself will be written by the next\n>call of FlushBuffer(). However page LSN is hasn't been updated so the\n>requirement that WAL must be flushed first is not met.\n>\n>I think that PageSetLSN() should be called regardless BM_DIRTY. Do I miss any\n>subtle detail?\n>\n\nIsn't this prevented by locking of the buffer header? Both FlushBuffer\nand MarkBufferDirtyHint do obtain that lock. I see MarkBufferDirtyHint\ndoes a bit of work before, but that's related to BM_PERMANENT.\n\nIf there really is a race condition, it shouldn't be that difficult to\ntrigger it by adding a sleep or a breakpoint, I guess.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 30 Oct 2019 20:42:37 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n\n> On Wed, Oct 30, 2019 at 02:44:18PM +0100, Antonin Houska wrote:\n> >Please consider this scenario (race conditions):\n> >\n> >1. FlushBuffer() has written the buffer but hasn't yet managed to clear the\n> >BM_DIRTY flag (however BM_JUST_DIRTIED could be cleared by now).\n> >\n> >2. Another backend modified a hint bit and called MarkBufferDirtyHint().\n> >\n> >3. In MarkBufferDirtyHint(), if XLogHintBitIsNeeded() evaluates to true\n> >(e.g. 
due to checksums enabled), new LSN is computed, however it's not\n> >assigned to the page because the buffer is still dirty:\n> >\n> >\tif (!(buf_state & BM_DIRTY))\n> >\t{\n> >\t\t...\n> >\n> >\t\tif (!XLogRecPtrIsInvalid(lsn))\n> >\t\t\tPageSetLSN(page, lsn);\n> >\t}\n> >\n> >4. MarkBufferDirtyHint() completes.\n> >\n> >5. In the first session, FlushBuffer()->TerminateBufferIO() will not clear\n> >BM_DIRTY because MarkBufferDirtyHint() has eventually set\n> >BM_JUST_DIRTIED. Thus the hint bit change itself will be written by the next\n> >call of FlushBuffer(). However page LSN is hasn't been updated so the\n> >requirement that WAL must be flushed first is not met.\n> >\n> >I think that PageSetLSN() should be called regardless BM_DIRTY. Do I miss any\n> >subtle detail?\n> >\n> \n> Isn't this prevented by locking of the buffer header? Both FlushBuffer\n> and MarkBufferDirtyHint do obtain that lock. I see MarkBufferDirtyHint\n> does a bit of work before, but that's related to BM_PERMANENT.\n> \n> If there really is a race condition, it shouldn't be that difficult to\n> trigger it by adding a sleep or a breakpoint, I guess.\n\nYes, I had verified it using gdb: inserted a row into a table, then attached\ngdb to checkpointer and stopped it when FlushBuffer() was about to call\nTerminateBufferIO(). Then, when scanning the table, I saw that\nMarkBufferDirtyHint() skipped the call of PageSetLSN(). 
Finally, before\ncheckpointer unlocked the buffer header in TerminateBufferIO(), buf_state was\n3553624066 ~ 0b11010011110100000000000000000010.\n\nWith BM_DIRTY defined as\n\n\t#define BM_DIRTY\t\t\t\t(1U << 23)\n\nthe flag appeared to be set.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Thu, 31 Oct 2019 09:43:47 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "On Wed, Oct 30, 2019 at 9:43 AM Antonin Houska <ah@cybertec.at> wrote:\n> 5. In the first session, FlushBuffer()->TerminateBufferIO() will not clear\n> BM_DIRTY because MarkBufferDirtyHint() has eventually set\n> BM_JUST_DIRTIED. Thus the hint bit change itself will be written by the next\n> call of FlushBuffer(). However page LSN is hasn't been updated so the\n> requirement that WAL must be flushed first is not met.\n\nThis part confuses me. Are you saying that MarkBufferDirtyHint() can\nset BM_JUST_DIRTIED without setting BM_DIRTY?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 11:23:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Oct 30, 2019 at 9:43 AM Antonin Houska <ah@cybertec.at> wrote:\n> > 5. In the first session, FlushBuffer()->TerminateBufferIO() will not clear\n> > BM_DIRTY because MarkBufferDirtyHint() has eventually set\n> > BM_JUST_DIRTIED. Thus the hint bit change itself will be written by the next\n> > call of FlushBuffer(). However page LSN is hasn't been updated so the\n> > requirement that WAL must be flushed first is not met.\n> \n> This part confuses me. 
Are you saying that MarkBufferDirtyHint() can\n> set BM_JUST_DIRTIED without setting BM_DIRTY?\n\nNo, I'm saying that MarkBufferDirtyHint() leaves BM_DIRTY set, as\nexpected. However, if things happen in the order I described, then LSN\nreturned by XLogSaveBufferForHint() is not assigned to the page.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Fri, 01 Nov 2019 18:51:26 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "On Thu, Oct 31, 2019 at 09:43:47AM +0100, Antonin Houska wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> Isn't this prevented by locking of the buffer header? Both FlushBuffer\n>> and MarkBufferDirtyHint do obtain that lock. I see MarkBufferDirtyHint\n>> does a bit of work before, but that's related to BM_PERMANENT.\n\nIn FlushBuffer() you have a window after fetching the buffer state and\nthe header is once unlocked until TerminateBufferIO() gets called\n(this would take again a lock on the buffer header), so it is\nlogically possible for the buffer to be marked as dirty once again\ncausing the update of the LSN on the page to be missed even if a\nbackup block has been written in WAL.\n\n> Yes, I had verified it using gdb: inserted a row into a table, then attached\n> gdb to checkpointer and stopped it when FlushBuffer() was about to call\n> TerminateBufferIO(). Then, when scanning the table, I saw that\n> MarkBufferDirtyHint() skipped the call of PageSetLSN(). Finally, before\n> checkpointer unlocked the buffer header in TerminateBufferIO(), buf_state was\n> 3553624066 ~ 0b11010011110100000000000000000010.\n\nSmall typo here: 11010011110100000000000000000010...\n\n> With BM_DIRTY defined as\n> \n> \t#define BM_DIRTY\t\t\t\t(1U << 23)\n> \n> the flag appeared to be set.\n\n... Still, the bit is set here.\n\nDoes something like the attached patch make sense? 
Reviews are\nwelcome.\n--\nMichael", "msg_date": "Mon, 11 Nov 2019 14:47:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n> Does something like the attached patch make sense? Reviews are\n> welcome.\n\nThis looks good to me.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 11 Nov 2019 10:03:14 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "At Mon, 11 Nov 2019 10:03:14 +0100, Antonin Houska <ah@cybertec.at> wrote in \n> Michael Paquier <michael@paquier.xyz> wrote:\n> > Does something like the attached patch make sense? Reviews are\n> > welcome.\n> \n> This looks good to me.\n\nI have a qustion.\n\nThe current code assumes that !BM_DIRTY means that the function is\ndirtying the page. But if !BM_JUST_DIRTIED, the function actually is\ngoing to re-dirty the page even if BM_DIRTY.\n\nIf this is correct, the trigger for stats update is not !BM_DIRTY but\n!BM_JUST_DIRTIED, or the fact that we passed the line of\nXLogSaveBufferForHint() could be the trigger, regardless whether the\nLSN is valid or not.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 Nov 2019 21:31:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 11 Nov 2019 10:03:14 +0100, Antonin Houska <ah@cybertec.at> wrote in \n> > Michael Paquier <michael@paquier.xyz> wrote:\n> > > Does something like the attached patch make sense? 
Reviews are\n> > > welcome.\n> > \n> > This looks good to me.\n> \n> I have a qustion.\n> \n> The current code assumes that !BM_DIRTY means that the function is\n> dirtying the page. But if !BM_JUST_DIRTIED, the function actually is\n> going to re-dirty the page even if BM_DIRTY.\n\nIt makes sense to me. I can imagine the following:\n\n1. FlushBuffer() cleared BM_JUST_DIRTIED, wrote the page to disk but hasn't\nyet cleared BM_DIRTY.\n\n2. Another backend changed a hint bit in shared memory and called\nMarkBufferDirtyHint().\n\nThus FlushBuffer() missed the current hint bit change, so we need to dirty the\npage again.\n\n> If this is correct, the trigger for stats update is not !BM_DIRTY but\n> !BM_JUST_DIRTIED, or the fact that we passed the line of\n> XLogSaveBufferForHint() could be the trigger, regardless whether the\n> LSN is valid or not.\n\nI agree.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 12 Nov 2019 14:27:16 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "On Mon, Nov 11, 2019 at 10:03:14AM +0100, Antonin Houska wrote:\n> This looks good to me.\n\nActually, no, this is not good. I have been studying more the patch,\nand after stressing more this code path with a cluster having\nchecksums enabled and shared_buffers at 1MB, I have been able to make\na couple of page's LSNs go backwards with pgbench -s 100. 
The cause\nwas simply that the page got flushed with a newer LSN than what was\nreturned by XLogSaveBufferForHint() before taking the buffer header\nlock, so updating only the LSN for a non-dirty page was simply\nguarding against that.\n--\nMichael", "msg_date": "Wed, 13 Nov 2019 21:17:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "On Wed, Nov 13, 2019 at 09:17:03PM +0900, Michael Paquier wrote:\n> Actually, no, this is not good. I have been studying more the patch,\n> and after stressing more this code path with a cluster having\n> checksums enabled and shared_buffers at 1MB, I have been able to make\n> a couple of page's LSNs go backwards with pgbench -s 100. The cause\n> was simply that the page got flushed with a newer LSN than what was\n> returned by XLogSaveBufferForHint() before taking the buffer header\n> lock, so updating only the LSN for a non-dirty page was simply\n> guarding against that.\n\nfor the reference attached is the trick I have used, adding an extra\nassertion check in PageSetLSN(). \n--\nMichael", "msg_date": "Thu, 14 Nov 2019 12:01:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "At Thu, 14 Nov 2019 12:01:29 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Nov 13, 2019 at 09:17:03PM +0900, Michael Paquier wrote:\n> > Actually, no, this is not good. I have been studying more the patch,\n> > and after stressing more this code path with a cluster having\n> > checksums enabled and shared_buffers at 1MB, I have been able to make\n> > a couple of page's LSNs go backwards with pgbench -s 100. 
The cause\n> > was simply that the page got flushed with a newer LSN than what was\n> > returned by XLogSaveBufferForHint() before taking the buffer header\n> > lock, so updating only the LSN for a non-dirty page was simply\n> > guarding against that.\n\nI thought of something like that but forgot to mention it. But on\nreflection, couldn't the current code do the same thing, though\nwith a far smaller probability? Even then, a session with a smaller\nhint LSN that hasn't yet entered the header lock section can be cut\nin by another session with a larger hint LSN.\n\n> for the reference attached is the trick I have used, adding an extra\n> assertion check in PageSetLSN(). \n\nI believe that all locations where the page LSN is set are in the same\nbuffer-exclusive-lock section as XLogInsert... but I'm not sure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Nov 2019 16:59:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Nov 11, 2019 at 10:03:14AM +0100, Antonin Houska wrote:\n> > This looks good to me.\n> \n> Actually, no, this is not good. I have been studying more the patch,\n> and after stressing more this code path with a cluster having\n> checksums enabled and shared_buffers at 1MB, I have been able to make\n> a couple of page's LSNs go backwards with pgbench -s 100. The cause\n> was simply that the page got flushed with a newer LSN than what was\n> returned by XLogSaveBufferForHint() before taking the buffer header\n> lock, so updating only the LSN for a non-dirty page was simply\n> guarding against that.\n\nInteresting. 
Now that I know about the problem, I could have reproduced it\nusing gdb: MarkBufferDirtyHint() was called by 2 backends concurrently in such\na way that the first backend generates the LSN, but before it manages to\nassign it to the page, another backend generates another LSN and sets it.\n\nCan't we just apply the attached diff on the top of your patch?\n\nAlso I wonder how checksums helped you to discover the problem? Although I\ncould simulate the setting of lower LSN, I could not see any broken\nchecksum. And I wouldn't even expect that since FlushBuffer() copies the\nbuffer into backend local memory before it calculates the checksum.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Thu, 14 Nov 2019 15:48:31 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Mon, Nov 11, 2019 at 10:03:14AM +0100, Antonin Houska wrote:\n> > > This looks good to me.\n> > \n> > Actually, no, this is not good. I have been studying more the patch,\n> > and after stressing more this code path with a cluster having\n> > checksums enabled and shared_buffers at 1MB, I have been able to make\n> > a couple of page's LSNs go backwards with pgbench -s 100. The cause\n> > was simply that the page got flushed with a newer LSN than what was\n> > returned by XLogSaveBufferForHint() before taking the buffer header\n> > lock, so updating only the LSN for a non-dirty page was simply\n> > guarding against that.\n> \n> Interesting. 
Now that I know about the problem, I could have reproduced it\n> using gdb: MarkBufferDirtyHint() was called by 2 backends concurrently in such\n> a way that the first backend generates the LSN, but before it manages to\n> assign it to the page, another backend generates another LSN and sets it.\n> \n> Can't we just apply the attached diff on the top of your patch?\n\nI wanted to register the patch for the next CF so it's not forgotten, but see\nit's already there. Why have you set the status to \"withdrawn\"?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Fri, 20 Dec 2019 16:30:38 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" }, { "msg_contents": "On Fri, Dec 20, 2019 at 04:30:38PM +0100, Antonin Houska wrote:\n> I wanted to register the patch for the next CF so it's not forgotten, but see\n> it's already there. Why have you set the status to \"withdrawn\"?\n\nBecause my patch was incorrect, and I did not make enough bandwidth to\nthink more on the matter. I am actually not sure that what you are\nproposing is better.. If you wish to get that reviewed, could you add\na new CF entry?\n--\nMichael", "msg_date": "Sat, 21 Dec 2019 11:44:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MarkBufferDirtyHint() and LSN update" } ]
[ { "msg_contents": "HAVE_LONG_LONG_INT is now implied by the requirement for C99, so the \nseparate Autoconf check can be removed. The uses are almost all in ecpg \ncode, and AFAICT the check was originally added specifically for ecpg.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 30 Oct 2019 14:49:38 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove HAVE_LONG_LONG_INT" }, { "msg_contents": "On 2019-10-30 14:49, Peter Eisentraut wrote:\n> HAVE_LONG_LONG_INT is now implied by the requirement for C99, so the\n> separate Autoconf check can be removed. The uses are almost all in ecpg\n> code, and AFAICT the check was originally added specifically for ecpg.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 7 Nov 2019 13:31:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove HAVE_LONG_LONG_INT" } ]
[ { "msg_contents": "While fooling with the NetBSD-vs-libpython issue noted in a nearby\nthread, I observed that the core regression tests sometimes hang up\nin the \"stats\" test on this platform (NetBSD 8.1/amd64). Investigation\nfound that the stats collector process was sometimes exiting like\nthis:\n\n2019-10-29 19:38:14.563 EDT [7018] FATAL: could not read statistics message: No buffer space available\n2019-10-29 19:38:14.563 EDT [7932] LOG: statistics collector process (PID 7018) exited with exit code 1\n\nThe postmaster then restarts the collector, but possibly with a time\ndelay (see the PGSTAT_RESTART_INTERVAL respawn throttling logic).\nThis seems to interact badly with the wait-for-stats-collector logic\nin backend_read_statsfile, so that each cycle of the wait_for_stats()\ntest function takes a long time ... and we will do 300 of those\nunconditionally. (Possibly wait_for_stats ought to be modified so\nthat it pays attention to elapsed wall-clock time rather than\niterating for a fixed number of times?)\n\nNetBSD's recv() man page glosses ENOBUFS as \"A message was not\ndelivered because it would have overflowed the buffer\", but I don't\nbelieve that's actually what's happening. (Just to be sure,\nI added an Assert on the sending side that no message exceeds\nsizeof(PgStat_Msg). I wonder why we didn't have one already.)\nTrawling the NetBSD kernel code, it seems like ENOBUFS could get\nreturned as a result of transient shortages of kernel working\nmemory --- most of the uses of that error code seem to be on the\nsending side, but I found some that seem to be in receiving code.\n\nIn short: it's evidently possible to get ENOBUFS as a transient\nfailure condition on this platform, and having the stats collector\ndie seems like an overreaction. I'm inclined to have it log the\nerror and press on, instead. Looking at the POSIX spec for\nrecv() suggests that ENOMEM is another plausible transient failure,\nso maybe we should do the same with that. 
Roughly:\n\n if (len < 0)\n {\n+ /* silently ignore these cases (no data available) */\n if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)\n break; /* out of inner loop */\n+ /* noisily ignore these cases (soft errors) */\n+ if (errno == ENOBUFS || errno == ENOMEM)\n+ {\n+ ereport(LOG,\n+ (errcode_for_socket_access(),\n+ errmsg(\"could not read statistics message: %m\")));\n+ break; /* out of inner loop */\n+ }\n+ /* hard failure, but maybe hara-kiri will fix it */\n ereport(ERROR,\n (errcode_for_socket_access(),\n errmsg(\"could not read statistics message: %m\")));\n }\n\nA variant idea is to treat all unexpected errnos as LOG-and-push-on,\nbut maybe we ought to have a limit on how many times we'll do that\nbefore erroring out.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Oct 2019 15:58:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgstat.c has brittle response to transient problems" } ]
[ { "msg_contents": "Hi,\n\nCurrently CREATE OR REPLACE VIEW command fails if the column names\nare changed. For example,\n\n =# CREATE VIEW test AS SELECT 0 AS a;\n =# CREATE OR REPLACE VIEW test AS SELECT 0 AS x;\n ERROR: cannot change name of view column \"a\" to \"x\"\n\nI'd like to propose the attached patch that allows CREATE OR REPLACE VIEW\nto rename the view columns. Thought?\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Thu, 31 Oct 2019 11:27:12 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "Fujii Masao <masao.fujii@gmail.com> writes:\n> Currently CREATE OR REPLACE VIEW command fails if the column names\n> are changed.\n\nThat is, I believe, intentional. It's an effective aid to catching\nmistakes in view redefinitions, such as misaligning the new set of\ncolumns relative to the old. That's particularly important given\nthat we allow you to add columns during CREATE OR REPLACE VIEW.\nConsider the oversimplified case where you start with\n\nCREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n\nand you want to add a column z, and you get sloppy and write\n\nCREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n\nIf we did not throw an error on this, references that formerly\npointed to column y would now point to z (as that's still attnum 2),\nwhich is highly unlikely to be what you wanted.\n\nThe right way to handle a column rename in a view is to do a separate\nALTER VIEW RENAME COLUMN, making it totally clear what you intend to\nhappen. (Right now, we make you spell that \"ALTER TABLE RENAME COLUMN\",\nbut it'd be reasonable to add that syntax to ALTER VIEW too.) I don't\nthink this functionality should be folded into redefinition of the content\nof the view. 
It'd add more opportunity for mistakes than anything else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 00:42:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Fujii Masao <masao.fujii@gmail.com> writes:\n> > Currently CREATE OR REPLACE VIEW command fails if the column names\n> > are changed.\n>\n> That is, I believe, intentional. It's an effective aid to catching\n> mistakes in view redefinitions, such as misaligning the new set of\n> columns relative to the old. That's particularly important given\n> that we allow you to add columns during CREATE OR REPLACE VIEW.\n> Consider the oversimplified case where you start with\n>\n> CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n>\n> and you want to add a column z, and you get sloppy and write\n>\n> CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n>\n> If we did not throw an error on this, references that formerly\n> pointed to column y would now point to z (as that's still attnum 2),\n> which is highly unlikely to be what you wanted.\n\nThis example makes me wonder if the addtion of column by\nCREATE OR REPLACE VIEW also has the same (or even worse) issue.\nThat is, it may increase the oppotunity for users' mistake.\nI'm thinking the case where users mistakenly added new column\ninto the view when replacing the view definition. This mistake can\nhappen because CREATE OR REPLACE VIEW allows new column to\nbe added. But what's the worse is that, currently there is no way to\ndrop the column from the view, except recreation of the view.\nNeither CREATE OR REPLACE VIEW nor ALTER TABLE support\nthe drop of the column from the view. So, to fix the mistake,\nusers would need to drop the view itself and recreate it. 
If there are\nsome objects depending the view, they also might need to be recreated.\nThis looks not good. Since the feature has been supported,\nit's too late to say that, though...\n\nAt least, the support for ALTER VIEW DROP COLUMN might be\nnecessary to alleviate that situation.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Thu, 31 Oct 2019 16:31:52 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n\n> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Fujii Masao <masao.fujii@gmail.com> writes:\n> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n> > > are changed.\n> >\n> > That is, I believe, intentional. It's an effective aid to catching\n> > mistakes in view redefinitions, such as misaligning the new set of\n> > columns relative to the old. That's particularly important given\n> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n> > Consider the oversimplified case where you start with\n> >\n> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n> >\n> > and you want to add a column z, and you get sloppy and write\n> >\n> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n> >\n> > If we did not throw an error on this, references that formerly\n> > pointed to column y would now point to z (as that's still attnum 2),\n> > which is highly unlikely to be what you wanted.\n>\n> This example makes me wonder if the addtion of column by\n> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n> That is, it may increase the oppotunity for users' mistake.\n> I'm thinking the case where users mistakenly added new column\n> into the view when replacing the view definition. This mistake can\n> happen because CREATE OR REPLACE VIEW allows new column to\n> be added. 
But what's the worse is that, currently there is no way to\n> drop the column from the view, except recreation of the view.\n> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n> the drop of the column from the view. So, to fix the mistake,\n> users would need to drop the view itself and recreate it. If there are\n> some objects depending the view, they also might need to be recreated.\n> This looks not good. Since the feature has been supported,\n> it's too late to say that, though...\n>\n> At least, the support for ALTER VIEW DROP COLUMN might be\n> necessary to alleviate that situation.\n>\n>\n- Is this intentional not implemented the \"RENAME COLUMN\" statement for\nVIEW because it is implemented for Materialized View? I have made just a\nsimilar\nchange to view and it works.\n\nALTER VIEW v RENAME COLUMN d to e;\n\n- For \"DROP COLUMN\" for VIEW is throwing error.\n\npostgres=# alter view v drop column e;\nERROR: \"v\" is not a table, composite type, or foreign table\n\n\n\nRegards,\n>\n> --\n> Fujii Masao\n>\n>\n>\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 31 Oct 2019 15:58:49 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 7:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>\n>\n> On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>\n>> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >\n>> > Fujii Masao <masao.fujii@gmail.com> writes:\n>> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n>> > > are changed.\n>> >\n>> > That is, I believe, intentional. It's an effective aid to catching\n>> > mistakes in view redefinitions, such as misaligning the new set of\n>> > columns relative to the old. 
That's particularly important given\n>> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n>> > Consider the oversimplified case where you start with\n>> >\n>> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n>> >\n>> > and you want to add a column z, and you get sloppy and write\n>> >\n>> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n>> >\n>> > If we did not throw an error on this, references that formerly\n>> > pointed to column y would now point to z (as that's still attnum 2),\n>> > which is highly unlikely to be what you wanted.\n>>\n>> This example makes me wonder if the addtion of column by\n>> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n>> That is, it may increase the oppotunity for users' mistake.\n>> I'm thinking the case where users mistakenly added new column\n>> into the view when replacing the view definition. This mistake can\n>> happen because CREATE OR REPLACE VIEW allows new column to\n>> be added. But what's the worse is that, currently there is no way to\n>> drop the column from the view, except recreation of the view.\n>> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n>> the drop of the column from the view. So, to fix the mistake,\n>> users would need to drop the view itself and recreate it. If there are\n>> some objects depending the view, they also might need to be recreated.\n>> This looks not good. Since the feature has been supported,\n>> it's too late to say that, though...\n>>\n>> At least, the support for ALTER VIEW DROP COLUMN might be\n>> necessary to alleviate that situation.\n>>\n>\n> - Is this intentional not implemented the \"RENAME COLUMN\" statement for\n> VIEW because it is implemented for Materialized View?\n\nNot sure that, but Tom's suggestion to support ALTER VIEW RENAME COLUMN\nsounds reasonable whether we support the rename of columns when replacing\nthe view definition, or not. 
Attached is the patch that adds support for\nALTER VIEW RENAME COLUMN command.\n\n> I have made just a similar\n> change to view and it works.\n\nYeah, ISTM that we made the same patch at the same time. You changed gram.y,\nbut I added the following changes additionally.\n\n- Update the doc\n- Add a HINT message emitted when CREATE OR REPLACE VIEW fails to rename the columns\n- Update tab-complete.c\n- Add regression test\n\nOne issue I've not addressed yet is about the command tag of\n\"ALTER VIEW RENAME COLUMN\". Currently \"ALTER TABLE\" is returned as the tag\nlike ALTER MATERIALIZED VIEW RENAME COLUMN, but ISTM that \"ALTER VIEW\"\nis better. I started the discussion about the command tag of\n\"ALTER MATERIALIZED VIEW\" in another thread. I will update the patch according\nto the result of that discussion.\nhttps://postgr.es/m/CAHGQGwGUaC03FFdTFoHsCuDrrNvFvNVQ6xyd40==P25WvuBJjg@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Thu, 31 Oct 2019 21:01:02 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 5:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n\n> On Thu, Oct 31, 2019 at 7:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com>\n> wrote:\n> >>\n> >> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> >\n> >> > Fujii Masao <masao.fujii@gmail.com> writes:\n> >> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n> >> > > are changed.\n> >> >\n> >> > That is, I believe, intentional. It's an effective aid to catching\n> >> > mistakes in view redefinitions, such as misaligning the new set of\n> >> > columns relative to the old. 
That's particularly important given\n> >> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n> >> > Consider the oversimplified case where you start with\n> >> >\n> >> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n> >> >\n> >> > and you want to add a column z, and you get sloppy and write\n> >> >\n> >> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n> >> >\n> >> > If we did not throw an error on this, references that formerly\n> >> > pointed to column y would now point to z (as that's still attnum 2),\n> >> > which is highly unlikely to be what you wanted.\n> >>\n> >> This example makes me wonder if the addtion of column by\n> >> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n> >> That is, it may increase the oppotunity for users' mistake.\n> >> I'm thinking the case where users mistakenly added new column\n> >> into the view when replacing the view definition. This mistake can\n> >> happen because CREATE OR REPLACE VIEW allows new column to\n> >> be added. But what's the worse is that, currently there is no way to\n> >> drop the column from the view, except recreation of the view.\n> >> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n> >> the drop of the column from the view. So, to fix the mistake,\n> >> users would need to drop the view itself and recreate it. If there are\n> >> some objects depending the view, they also might need to be recreated.\n> >> This looks not good. Since the feature has been supported,\n> >> it's too late to say that, though...\n> >>\n> >> At least, the support for ALTER VIEW DROP COLUMN might be\n> >> necessary to alleviate that situation.\n> >>\n> >\n> > - Is this intentional not implemented the \"RENAME COLUMN\" statement for\n> > VIEW because it is implemented for Materialized View?\n>\n> Not sure that, but Tom's suggestion to support ALTER VIEW RENAME COLUMN\n> sounds reasonable whether we support the rename of columns when replacing\n> the view definition, or not. 
Attached is the patch that adds support for\n> ALTER VIEW RENAME COLUMN command.\n>\n> > I have made just a similar\n> > change to view and it works.\n>\n> Yeah, ISTM that we made the same patch at the same time. You changed\n> gram.y,\n> but I added the following changes additionally.\n>\n> - Update the doc\n> - Add HINT message emit when CRAETE OR REPLACE VIEW fails to rename the\n> columns\n> - Update tab-complete.c\n> - Add regression test\n>\n>\nOh, I just sent the patch to ask, good you do that in all the places.\n\nOne issue I've not addressed yet is about the command tag of\n> \"ALTER VIEW RENAME COLUMN\". Currently \"ALTER TABLE\" is returned as the tag\n> like ALTER MATERIALIZED VIEW RENAME COLUMN, but ISTM that \"ALTER VIEW\"\n> is better. I started the discussion about the command tag of\n> \"ALTER MATERIALIZED VIEW\" at another thread. I will update the patch\n> according\n> to the result of that discussion.\n>\n> https://postgr.es/m/CAHGQGwGUaC03FFdTFoHsCuDrrNvFvNVQ6xyd40==P25WvuBJjg@mail.gmail.com\n>\n> Attached patch contain small change for ALTER MATERIALIZED VIEW.\n\n\n\n> Regards,\n>\n> --\n> Fujii Masao\n>\n\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 31 Oct 2019 17:11:57 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 5:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Thu, Oct 31, 2019 at 5:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n>> On Thu, Oct 31, 2019 at 7:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n>> wrote:\n>> >\n>> >\n>> >\n>> > On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com>\n>> wrote:\n>> >>\n>> >> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >> >\n>> >> > Fujii Masao <masao.fujii@gmail.com> writes:\n>> >> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n>> >> > > are changed.\n>> >> >\n>> >> > 
That is, I believe, intentional. It's an effective aid to catching\n>> >> > mistakes in view redefinitions, such as misaligning the new set of\n>> >> > columns relative to the old. That's particularly important given\n>> >> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n>> >> > Consider the oversimplified case where you start with\n>> >> >\n>> >> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n>> >> >\n>> >> > and you want to add a column z, and you get sloppy and write\n>> >> >\n>> >> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n>> >> >\n>> >> > If we did not throw an error on this, references that formerly\n>> >> > pointed to column y would now point to z (as that's still attnum 2),\n>> >> > which is highly unlikely to be what you wanted.\n>> >>\n>> >> This example makes me wonder if the addtion of column by\n>> >> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n>> >> That is, it may increase the oppotunity for users' mistake.\n>> >> I'm thinking the case where users mistakenly added new column\n>> >> into the view when replacing the view definition. This mistake can\n>> >> happen because CREATE OR REPLACE VIEW allows new column to\n>> >> be added. But what's the worse is that, currently there is no way to\n>> >> drop the column from the view, except recreation of the view.\n>> >> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n>> >> the drop of the column from the view. So, to fix the mistake,\n>> >> users would need to drop the view itself and recreate it. If there are\n>> >> some objects depending the view, they also might need to be recreated.\n>> >> This looks not good. 
Since the feature has been supported,\n>> >> it's too late to say that, though...\n>> >>\n>> >> At least, the support for ALTER VIEW DROP COLUMN might be\n>> >> necessary to alleviate that situation.\n>> >>\n>> >\n>> > - Is this intentional not implemented the \"RENAME COLUMN\" statement for\n>> > VIEW because it is implemented for Materialized View?\n>>\n>> Not sure that, but Tom's suggestion to support ALTER VIEW RENAME COLUMN\n>> sounds reasonable whether we support the rename of columns when replacing\n>> the view definition, or not. Attached is the patch that adds support for\n>> ALTER VIEW RENAME COLUMN command.\n>>\n>> > I have made just a similar\n>> > change to view and it works.\n>>\n>> Yeah, ISTM that we made the same patch at the same time. You changed\n>> gram.y,\n>> but I added the following changes additionally.\n>>\n>> - Update the doc\n>> - Add HINT message emit when CRAETE OR REPLACE VIEW fails to rename the\n>> columns\n>> - Update tab-complete.c\n>> - Add regression test\n>>\n>>\n> Oh, I just sent the patch to ask, good you do that in all the places.\n>\n> One issue I've not addressed yet is about the command tag of\n>> \"ALTER VIEW RENAME COLUMN\". Currently \"ALTER TABLE\" is returned as the tag\n>> like ALTER MATERIALIZED VIEW RENAME COLUMN, but ISTM that \"ALTER VIEW\"\n>> is better. I started the discussion about the command tag of\n>> \"ALTER MATERIALIZED VIEW\" at another thread. 
I will update the patch\n>> according\n>> to the result of that discussion.\n>>\n>> https://postgr.es/m/CAHGQGwGUaC03FFdTFoHsCuDrrNvFvNVQ6xyd40==P25WvuBJjg@mail.gmail.com\n>>\n>> Attached patch contain small change for ALTER MATERIALIZED VIEW.\n>\n>\nHmm, my small change of \"ALTER MATERIALIZED VIEW\" does not work in some\ncases need more work on that.\n\n\n>\n>\n>> Regards,\n>>\n>> --\n>> Fujii Masao\n>>\n>\n>\n> --\n> Ibrar Ahmed\n>\n\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 31 Oct 2019 17:28:00 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 5:28 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Thu, Oct 31, 2019 at 5:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>>\n>>\n>> On Thu, Oct 31, 2019 at 5:01 PM Fujii Masao <masao.fujii@gmail.com>\n>> wrote:\n>>\n>>> On Thu, Oct 31, 2019 at 7:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n>>> wrote:\n>>> >\n>>> >\n>>> >\n>>> > On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com>\n>>> wrote:\n>>> >>\n>>> >> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> >> >\n>>> >> > Fujii Masao <masao.fujii@gmail.com> writes:\n>>> >> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n>>> >> > > are changed.\n>>> >> >\n>>> >> > That is, I believe, intentional. It's an effective aid to catching\n>>> >> > mistakes in view redefinitions, such as misaligning the new set of\n>>> >> > columns relative to the old. 
That's particularly important given\n>>> >> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n>>> >> > Consider the oversimplified case where you start with\n>>> >> >\n>>> >> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n>>> >> >\n>>> >> > and you want to add a column z, and you get sloppy and write\n>>> >> >\n>>> >> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n>>> >> >\n>>> >> > If we did not throw an error on this, references that formerly\n>>> >> > pointed to column y would now point to z (as that's still attnum 2),\n>>> >> > which is highly unlikely to be what you wanted.\n>>> >>\n>>> >> This example makes me wonder if the addtion of column by\n>>> >> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n>>> >> That is, it may increase the oppotunity for users' mistake.\n>>> >> I'm thinking the case where users mistakenly added new column\n>>> >> into the view when replacing the view definition. This mistake can\n>>> >> happen because CREATE OR REPLACE VIEW allows new column to\n>>> >> be added. But what's the worse is that, currently there is no way to\n>>> >> drop the column from the view, except recreation of the view.\n>>> >> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n>>> >> the drop of the column from the view. So, to fix the mistake,\n>>> >> users would need to drop the view itself and recreate it. If there are\n>>> >> some objects depending the view, they also might need to be recreated.\n>>> >> This looks not good. 
Since the feature has been supported,\n>>> >> it's too late to say that, though...\n>>> >>\n>>> >> At least, the support for ALTER VIEW DROP COLUMN might be\n>>> >> necessary to alleviate that situation.\n>>> >>\n>>> >\n>>> > - Is this intentional not implemented the \"RENAME COLUMN\" statement for\n>>> > VIEW because it is implemented for Materialized View?\n>>>\n>>> Not sure that, but Tom's suggestion to support ALTER VIEW RENAME COLUMN\n>>> sounds reasonable whether we support the rename of columns when replacing\n>>> the view definition, or not. Attached is the patch that adds support for\n>>> ALTER VIEW RENAME COLUMN command.\n>>>\n>>> > I have made just a similar\n>>> > change to view and it works.\n>>>\n>>> Yeah, ISTM that we made the same patch at the same time. You changed\n>>> gram.y,\n>>> but I added the following changes additionally.\n>>>\n>>> - Update the doc\n>>> - Add HINT message emit when CRAETE OR REPLACE VIEW fails to rename the\n>>> columns\n>>> - Update tab-complete.c\n>>> - Add regression test\n>>>\n>>>\n>> Oh, I just sent the patch to ask, good you do that in all the places.\n>>\n>> One issue I've not addressed yet is about the command tag of\n>>> \"ALTER VIEW RENAME COLUMN\". Currently \"ALTER TABLE\" is returned as the\n>>> tag\n>>> like ALTER MATERIALIZED VIEW RENAME COLUMN, but ISTM that \"ALTER VIEW\"\n>>> is better. I started the discussion about the command tag of\n>>> \"ALTER MATERIALIZED VIEW\" at another thread. 
I will update the patch\n>>> according\n>>> to the result of that discussion.\n>>>\n>>> https://postgr.es/m/CAHGQGwGUaC03FFdTFoHsCuDrrNvFvNVQ6xyd40==P25WvuBJjg@mail.gmail.com\n>>>\n>>> Attached patch contain small change for ALTER MATERIALIZED VIEW.\n>>\n>>\n> Hmm, my small change of \"ALTER MATERIALIZED VIEW\" does not work in some\n> cases need more work on that.\n>\n>\n\nThe AlterObjectTypeCommandTag function just take one parameter, but to\nshow \"ALTER MATERIALIZED VIEW\" instead of ALTER TABLE we need to\npass \"relationType = OBJECT_MATVIEW\" along with \"renameType = OBJECT_COLUMN\"\nand handle that in the function. The \"AlterObjectTypeCommandTag\" function\nhas many\ncalls. Do you think just for the command tag, we should do all this change?\n\n\n>\n>>\n>>\n>>> Regards,\n>>>\n>>> --\n>>> Fujii Masao\n>>>\n>>\n>>\n>> --\n>> Ibrar Ahmed\n>>\n>\n>\n> --\n> Ibrar Ahmed\n>\n\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 31 Oct 2019 17:34:22 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "Fujii Masao <masao.fujii@gmail.com> writes:\n> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Fujii Masao <masao.fujii@gmail.com> writes:\n>>> Currently CREATE OR REPLACE VIEW command fails if the column names\n>>> are changed.\n\n>> That is, I believe, intentional. It's an effective aid to catching\n>> mistakes in view redefinitions, such as misaligning the new set of\n>> columns relative to the old. [example]\n\n> This example makes me wonder if the addtion of column by\n> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n> That is, it may increase the oppotunity for users' mistake.\n\nThe idea in CREATE OR REPLACE VIEW is to allow addition of new\ncolumns at the end, the same as you can do with tables. Checking\nthe column name matchups is a way to ensure that you actually do\nadd at the end, rather than insert, which wouldn't act as you\nexpect. Admittedly it's only heuristic.\n\nWe could, perhaps, have insisted that adding a column also requires\nspecial syntax, but we didn't. Consider for example a case where\nthe new column needs to come from an additionally joined table;\nthen you have to be able to edit the underlying view definition along\nwith adding the column. So that seems like kind of a pain in the\nneck to insist on.\n\n> But what's the worse is that, currently there is no way to\n> drop the column from the view, except recreation of the view.\n\nI think this has been discussed, as well. It's not necessarily\nsimple to drop a view output column. 
For example, if the view\nuses SELECT DISTINCT, removing an output column would have\nsemantic effects on the set of rows that can be returned, since\ndistinct-ness would now mean something else than it did before.\n\nIt's conceivable that we could enumerate all such effects and\nthen allow DROP COLUMN (probably replacing the output column\nwith a null constant) if none of them apply, but I can't get\nterribly excited about it. The field demand for such a feature\nhas not been high. I'd be a bit worried about bugs arising\nfrom failures to check attisdropped for views, too; so that\nthe cost of getting this working might be greater than it seems\non the surface.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 09:54:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 10:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Fujii Masao <masao.fujii@gmail.com> writes:\n> > On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Fujii Masao <masao.fujii@gmail.com> writes:\n> >>> Currently CREATE OR REPLACE VIEW command fails if the column names\n> >>> are changed.\n>\n> >> That is, I believe, intentional. It's an effective aid to catching\n> >> mistakes in view redefinitions, such as misaligning the new set of\n> >> columns relative to the old. [example]\n>\n> > This example makes me wonder if the addtion of column by\n> > CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n> > That is, it may increase the oppotunity for users' mistake.\n>\n> The idea in CREATE OR REPLACE VIEW is to allow addition of new\n> columns at the end, the same as you can do with tables. Checking\n> the column name matchups is a way to ensure that you actually do\n> add at the end, rather than insert, which wouldn't act as you\n> expect. 
Admittedly it's only heuristic.\n>\n> We could, perhaps, have insisted that adding a column also requires\n> special syntax, but we didn't. Consider for example a case where\n> the new column needs to come from an additionally joined table;\n> then you have to be able to edit the underlying view definition along\n> with adding the column. So that seems like kind of a pain in the\n> neck to insist on.\n>\n> > But what's the worse is that, currently there is no way to\n> > drop the column from the view, except recreation of the view.\n>\n> I think this has been discussed, as well. It's not necessarily\n> simple to drop a view output column. For example, if the view\n> uses SELECT DISTINCT, removing an output column would have\n> semantic effects on the set of rows that can be returned, since\n> distinct-ness would now mean something else than it did before.\n>\n> It's conceivable that we could enumerate all such effects and\n> then allow DROP COLUMN (probably replacing the output column\n> with a null constant) if none of them apply, but I can't get\n> terribly excited about it. The field demand for such a feature\n> has not been high. I'd be a bit worried about bugs arising\n> from failures to check attisdropped for views, too; so that\n> the cost of getting this working might be greater than it seems\n> on the surface.\n\nThanks for the explanation! 
Understood.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Fri, 1 Nov 2019 12:42:43 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On 2019-10-31 21:01, Fujii Masao wrote:\n> On Thu, Oct 31, 2019 at 7:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> \n> wrote:\n>> \n>> \n>> \n>> On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com> \n>> wrote:\n>>> \n>>> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> >\n>>> > Fujii Masao <masao.fujii@gmail.com> writes:\n>>> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n>>> > > are changed.\n>>> >\n>>> > That is, I believe, intentional. It's an effective aid to catching\n>>> > mistakes in view redefinitions, such as misaligning the new set of\n>>> > columns relative to the old. That's particularly important given\n>>> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n>>> > Consider the oversimplified case where you start with\n>>> >\n>>> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n>>> >\n>>> > and you want to add a column z, and you get sloppy and write\n>>> >\n>>> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n>>> >\n>>> > If we did not throw an error on this, references that formerly\n>>> > pointed to column y would now point to z (as that's still attnum 2),\n>>> > which is highly unlikely to be what you wanted.\n>>> \n>>> This example makes me wonder if the addtion of column by\n>>> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n>>> That is, it may increase the oppotunity for users' mistake.\n>>> I'm thinking the case where users mistakenly added new column\n>>> into the view when replacing the view definition. This mistake can\n>>> happen because CREATE OR REPLACE VIEW allows new column to\n>>> be added. 
But what's the worse is that, currently there is no way to\n>>> drop the column from the view, except recreation of the view.\n>>> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n>>> the drop of the column from the view. So, to fix the mistake,\n>>> users would need to drop the view itself and recreate it. If there \n>>> are\n>>> some objects depending the view, they also might need to be \n>>> recreated.\n>>> This looks not good. Since the feature has been supported,\n>>> it's too late to say that, though...\n>>> \n>>> At least, the support for ALTER VIEW DROP COLUMN might be\n>>> necessary to alleviate that situation.\n>>> \n>> \n>> - Is this intentional not implemented the \"RENAME COLUMN\" statement \n>> for\n>> VIEW because it is implemented for Materialized View?\n> \n> Not sure that, but Tom's suggestion to support ALTER VIEW RENAME COLUMN\n> sounds reasonable whether we support the rename of columns when \n> replacing\n> the view definition, or not. Attached is the patch that adds support \n> for\n> ALTER VIEW RENAME COLUMN command.\n> \n>> I have made just a similar\n>> change to view and it works.\n> \n> Yeah, ISTM that we made the same patch at the same time. 
You changed \n> gram.y,\n> but I added the following changes additionally.\n> \n> - Update the doc\n> - Add HINT message emit when CRAETE OR REPLACE VIEW fails to rename the \n> columns\n> - Update tab-complete.c\n\n\nI review your patch, and then I found that tab complete of \"alter \nmaterialized view\" is also not enough.\nSo, I made a small patch referencing your patch.\n\nRegards,", "msg_date": "Wed, 06 Nov 2019 16:14:09 +0900", "msg_from": "btfujiitkp <btfujiitkp@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Thu, Oct 31, 2019 at 9:34 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>\n>\n> On Thu, Oct 31, 2019 at 5:28 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>\n>>\n>>\n>> On Thu, Oct 31, 2019 at 5:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>>\n>>>\n>>>\n>>> On Thu, Oct 31, 2019 at 5:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>>>\n>>>> On Thu, Oct 31, 2019 at 7:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>>> >\n>>>> >\n>>>> >\n>>>> > On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>>> >>\n>>>> >> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> >> >\n>>>> >> > Fujii Masao <masao.fujii@gmail.com> writes:\n>>>> >> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n>>>> >> > > are changed.\n>>>> >> >\n>>>> >> > That is, I believe, intentional. It's an effective aid to catching\n>>>> >> > mistakes in view redefinitions, such as misaligning the new set of\n>>>> >> > columns relative to the old. 
That's particularly important given\n>>>> >> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n>>>> >> > Consider the oversimplified case where you start with\n>>>> >> >\n>>>> >> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n>>>> >> >\n>>>> >> > and you want to add a column z, and you get sloppy and write\n>>>> >> >\n>>>> >> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n>>>> >> >\n>>>> >> > If we did not throw an error on this, references that formerly\n>>>> >> > pointed to column y would now point to z (as that's still attnum 2),\n>>>> >> > which is highly unlikely to be what you wanted.\n>>>> >>\n>>>> >> This example makes me wonder if the addtion of column by\n>>>> >> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n>>>> >> That is, it may increase the oppotunity for users' mistake.\n>>>> >> I'm thinking the case where users mistakenly added new column\n>>>> >> into the view when replacing the view definition. This mistake can\n>>>> >> happen because CREATE OR REPLACE VIEW allows new column to\n>>>> >> be added. But what's the worse is that, currently there is no way to\n>>>> >> drop the column from the view, except recreation of the view.\n>>>> >> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n>>>> >> the drop of the column from the view. So, to fix the mistake,\n>>>> >> users would need to drop the view itself and recreate it. If there are\n>>>> >> some objects depending the view, they also might need to be recreated.\n>>>> >> This looks not good. 
Since the feature has been supported,\n>>>> >> it's too late to say that, though...\n>>>> >>\n>>>> >> At least, the support for ALTER VIEW DROP COLUMN might be\n>>>> >> necessary to alleviate that situation.\n>>>> >>\n>>>> >\n>>>> > - Is this intentional not implemented the \"RENAME COLUMN\" statement for\n>>>> > VIEW because it is implemented for Materialized View?\n>>>>\n>>>> Not sure that, but Tom's suggestion to support ALTER VIEW RENAME COLUMN\n>>>> sounds reasonable whether we support the rename of columns when replacing\n>>>> the view definition, or not. Attached is the patch that adds support for\n>>>> ALTER VIEW RENAME COLUMN command.\n>>>>\n>>>> > I have made just a similar\n>>>> > change to view and it works.\n>>>>\n>>>> Yeah, ISTM that we made the same patch at the same time. You changed gram.y,\n>>>> but I added the following changes additionally.\n>>>>\n>>>> - Update the doc\n>>>> - Add HINT message emit when CRAETE OR REPLACE VIEW fails to rename the columns\n>>>> - Update tab-complete.c\n>>>> - Add regression test\n>>>>\n>>>\n>>> Oh, I just sent the patch to ask, good you do that in all the places.\n>>>\n>>>> One issue I've not addressed yet is about the command tag of\n>>>> \"ALTER VIEW RENAME COLUMN\". Currently \"ALTER TABLE\" is returned as the tag\n>>>> like ALTER MATERIALIZED VIEW RENAME COLUMN, but ISTM that \"ALTER VIEW\"\n>>>> is better. I started the discussion about the command tag of\n>>>> \"ALTER MATERIALIZED VIEW\" at another thread. 
I will update the patch according\n>>>> to the result of that discussion.\n>>>> https://postgr.es/m/CAHGQGwGUaC03FFdTFoHsCuDrrNvFvNVQ6xyd40==P25WvuBJjg@mail.gmail.com\n>>>>\n>>> Attached patch contain small change for ALTER MATERIALIZED VIEW.\n>>>\n>>\n>> Hmm, my small change of \"ALTER MATERIALIZED VIEW\" does not work in some cases need more work on that.\n>>\n>\n>\n> The AlterObjectTypeCommandTag function just take one parameter, but to\n> show \"ALTER MATERIALIZED VIEW\" instead of ALTER TABLE we need to\n> pass \"relationType = OBJECT_MATVIEW\" along with \"renameType = OBJECT_COLUMN\"\n> and handle that in the function. The \"AlterObjectTypeCommandTag\" function has many\n> calls. Do you think just for the command tag, we should do all this change?\n\nThanks for trying to address the issue!\nAs probably you've already noticed, the commit 979766c0af fixed the issue\nthanks to your review. So we can review the patch that I posted.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Wed, 6 Nov 2019 19:08:07 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Wed, Nov 6, 2019 at 4:14 PM btfujiitkp <btfujiitkp@oss.nttdata.com> wrote:\n>\n> 2019-10-31 21:01 に Fujii Masao さんは書きました:\n> > On Thu, Oct 31, 2019 at 7:59 PM Ibrar Ahmed <ibrar.ahmad@gmail.com>\n> > wrote:\n> >>\n> >>\n> >>\n> >> On Thu, Oct 31, 2019 at 12:32 PM Fujii Masao <masao.fujii@gmail.com>\n> >> wrote:\n> >>>\n> >>> On Thu, Oct 31, 2019 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> >\n> >>> > Fujii Masao <masao.fujii@gmail.com> writes:\n> >>> > > Currently CREATE OR REPLACE VIEW command fails if the column names\n> >>> > > are changed.\n> >>> >\n> >>> > That is, I believe, intentional. It's an effective aid to catching\n> >>> > mistakes in view redefinitions, such as misaligning the new set of\n> >>> > columns relative to the old. 
That's particularly important given\n> >>> > that we allow you to add columns during CREATE OR REPLACE VIEW.\n> >>> > Consider the oversimplified case where you start with\n> >>> >\n> >>> > CREATE VIEW v AS SELECT 1 AS x, 2 AS y;\n> >>> >\n> >>> > and you want to add a column z, and you get sloppy and write\n> >>> >\n> >>> > CREATE OR REPLACE VIEW v AS SELECT 1 AS x, 3 AS z, 2 AS y;\n> >>> >\n> >>> > If we did not throw an error on this, references that formerly\n> >>> > pointed to column y would now point to z (as that's still attnum 2),\n> >>> > which is highly unlikely to be what you wanted.\n> >>>\n> >>> This example makes me wonder if the addtion of column by\n> >>> CREATE OR REPLACE VIEW also has the same (or even worse) issue.\n> >>> That is, it may increase the oppotunity for users' mistake.\n> >>> I'm thinking the case where users mistakenly added new column\n> >>> into the view when replacing the view definition. This mistake can\n> >>> happen because CREATE OR REPLACE VIEW allows new column to\n> >>> be added. But what's the worse is that, currently there is no way to\n> >>> drop the column from the view, except recreation of the view.\n> >>> Neither CREATE OR REPLACE VIEW nor ALTER TABLE support\n> >>> the drop of the column from the view. So, to fix the mistake,\n> >>> users would need to drop the view itself and recreate it. If there\n> >>> are\n> >>> some objects depending the view, they also might need to be\n> >>> recreated.\n> >>> This looks not good. 
Since the feature has been supported,\n> >>> it's too late to say that, though...\n> >>>\n> >>> At least, the support for ALTER VIEW DROP COLUMN might be\n> >>> necessary to alleviate that situation.\n> >>>\n> >>\n> >> - Is this intentional not implemented the \"RENAME COLUMN\" statement\n> >> for\n> >> VIEW because it is implemented for Materialized View?\n> >\n> > Not sure that, but Tom's suggestion to support ALTER VIEW RENAME COLUMN\n> > sounds reasonable whether we support the rename of columns when\n> > replacing\n> > the view definition, or not. Attached is the patch that adds support\n> > for\n> > ALTER VIEW RENAME COLUMN command.\n> >\n> >> I have made just a similar\n> >> change to view and it works.\n> >\n> > Yeah, ISTM that we made the same patch at the same time. You changed\n> > gram.y,\n> > but I added the following changes additionally.\n> >\n> > - Update the doc\n> > - Add HINT message emit when CRAETE OR REPLACE VIEW fails to rename the\n> > columns\n> > - Update tab-complete.c\n>\n>\n> I review your patch, and then I found that tab complete of \"alter\n> materialized view\" is also not enough.\n> So, I made a small patch referencing your patch.\n\nGood catch! The patch basically looks good to me.\nBut I think that \"ALTER MATERIALIZED VIEW xxx <TAB>\" should output also\nDEPENDS ON EXTENSION, SET TABLESPACE, CLUSTER ON and RESET.\nSo I added such tab-completes to your patch. 
Patch attached.\n\nBarring any objection, I'm planning to commit this patch.\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Thu, 14 Nov 2019 23:14:22 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "> Barring any objection, I'm planning to commit this patch.\n> \n> Regards,\n\nThe build and all tests have passed.\nLooks good to me.\n\nRegards,\n\n\n", "msg_date": "Wed, 20 Nov 2019 13:11:16 +0900", "msg_from": "btkimurayuzk <btkimurayuzk@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" }, { "msg_contents": "On Wed, Nov 20, 2019 at 1:11 PM btkimurayuzk\n<btkimurayuzk@oss.nttdata.com> wrote:\n>\n> > Barring any objection, I'm planning to commit this patch.\n> >\n> > Regards,\n>\n> The build and all tests have passed.\n> Looks good to me.\n\nThanks for reviewing the patch!\nI committed the following two patches.\n\n- Allow ALTER VIEW command to rename the column in the view.\n- Improve tab-completion for ALTER MATERIALIZED VIEW.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Thu, 21 Nov 2019 19:58:32 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow CREATE OR REPLACE VIEW to rename the columns" } ]
[ { "msg_contents": "Hi all,\n\nI wondered can we have a shortcut somewhat similar to following POC\nin recomputeNamespacePath () when the recomputed path is the same as the\nprevious baseSearchPath/activeSearchPath :\n\n== POC patch ==\ndiff --git a/src/backend/catalog/namespace.c\nb/src/backend/catalog/namespace.c\nindex e251f5a9fdc..b25ef489e47 100644\n--- a/src/backend/catalog/namespace.c\n+++ b/src/backend/catalog/namespace.c\n@@ -3813,6 +3813,9 @@ recomputeNamespacePath(void)\n !list_member_oid(oidlist, myTempNamespace))\n oidlist = lcons_oid(myTempNamespace, oidlist);\n\n+ /* TODO: POC */\n+ if (equal(oidlist, baseSearchPath))\n+ return;\n /*\n * Now that we've successfully built the new list of namespace OIDs,\nsave\n * it in permanent storage.\n== POC patch end ==\n\nIt can have two advantages as:\n\n1. Avoid unnecessary list_copy() in TopMemoryContext context &\n2. Global pointers like activeSearchPath/baseSearchPath will not change if\nsome\n implementation end up with cascaded call to recomputeNamespacePath().\n\nThoughts/Comments?\n\nRegards,\nAmul", "msg_date": "Thu, 31 Oct 2019 14:11:03 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Can avoid list_copy in recomputeNamespacePath() conditionally?" }, { "msg_contents": "amul sul <sulamul@gmail.com> writes:\n> I wondered can we have a shortcut somewhat similar to following POC\n> in recomputeNamespacePath () when the recomputed path is the same as the\n> previous baseSearchPath/activeSearchPath :\n> + /* TODO: POC */\n> + if (equal(oidlist, baseSearchPath))\n> + return;\n\nThere's an awful lot missing from that sketch; all of the remaining\nsteps still need to be done:\n\n\tbaseCreationNamespace = firstNS;\n\tbaseTempCreationPending = temp_missing;\n\n\t/* Mark the path valid. */\n\tbaseSearchPathValid = true;\n\tnamespaceUser = roleid;\n\n\t/* And make it active. */\n\tactiveSearchPath = baseSearchPath;\n\tactiveCreationNamespace = baseCreationNamespace;\n\tactiveTempCreationPending = baseTempCreationPending;\n\n\t/* Clean up. */\n\tpfree(rawname);\n\tlist_free(namelist);\n\tlist_free(oidlist);\n\nMore to the point, I think the onus would be on the patch submitter\nto prove that the extra complexity had some measurable benefit.\nI really doubt that it would, since the list_copy is surely trivial\ncompared to the catalog lookup work we had to do to compute the OID\nlist above here.\n\nIt'd likely be more useful to see if you could reduce the number of\nplaces where we have to invalidate the path in the first place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Nov 2019 10:31:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can avoid list_copy in recomputeNamespacePath() conditionally?" 
}, { "msg_contents": "On Sat, Nov 2, 2019 at 8:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> amul sul <sulamul@gmail.com> writes:\n> > I wondered can we have a shortcut somewhat similar to following POC\n> > in recomputeNamespacePath () when the recomputed path is the same as the\n> > previous baseSearchPath/activeSearchPath :\n> > + /* TODO: POC */\n> > + if (equal(oidlist, baseSearchPath))\n> > + return;\n>\n> There's an awful lot missing from that sketch; all of the remaining\n> steps still need to be done:\n>\n>\nYou are correct, but that was intentionally skipped to avoid longer post\ndescriptions for the initial discussion. Sorry for being little lazy.\n\n\n> baseCreationNamespace = firstNS;\n> baseTempCreationPending = temp_missing;\n>\n> /* Mark the path valid. */\n> baseSearchPathValid = true;\n> namespaceUser = roleid;\n>\n> /* And make it active. */\n> activeSearchPath = baseSearchPath;\n> activeCreationNamespace = baseCreationNamespace;\n> activeTempCreationPending = baseTempCreationPending;\n>\n> /* Clean up. 
*/\n> pfree(rawname);\n> list_free(namelist);\n> list_free(oidlist);\n>\n> More to the point, I think the onus would be on the patch submitter\n> to prove that the extra complexity had some measurable benefit.\n> I really doubt that it would, since the list_copy is surely trivial\n> compared to the catalog lookup work we had to do to compute the OID\n> list above here.\n>\n\nAgree.\n\n\n> It'd likely be more useful to see if you could reduce the number of\n> places where we have to invalidate the path in the first place.\n>\n\nUnderstood, let me check.\n\nRegards,\nAmul", "msg_date": "Mon, 4 Nov 2019 10:33:25 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can avoid list_copy in recomputeNamespacePath() conditionally?" } ]
[ { "msg_contents": "AFAICT, these build options were only useful to maintain compatibility \nfor version-0 functions, but those are no longer supported, so these \noptions can be removed. There is a fair amount of code all over the \nplace to support these options, so the cleanup is quite significant.\n\nThe current behavior became the default in PG9.3. Note that this does \nnot affect on-disk storage. The only upgrade issue that I can see is \nthat pg_upgrade refuses to upgrade incompatible clusters if you have \ncontrib/isn installed. But hopefully everyone who is affected by that \nwill have upgraded at least once since PG9.2 already.\n\nfloat4 is now always pass-by-value; the pass-by-reference code path is \ncompletely removed.\n\nfloat8 and related types are now hardcoded to pass-by-value or \npass-by-reference depending on whether the build is 64- or 32-bit, as \nwas previously also the default.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 31 Oct 2019 09:50:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove configure --disable-float4-byval and --disable-float8-byval" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> float4 is now always pass-by-value; the pass-by-reference code path is \n> completely removed.\n\nI think this is OK.\n\n> float8 and related types are now hardcoded to pass-by-value or \n> pass-by-reference depending on whether the build is 64- or 32-bit, as \n> was previously also the default.\n\nI'm less happy with doing this. It makes it impossible to test the\npass-by-reference code paths without actually firing up a 32-bit\nenvironment. It'd be fine to document --disable-float8-byval as\na developer-only option (it might be so already), but I don't want\nto lose it completely. 
I fail to see any advantage in getting rid\nof it, anyway, since we do still have to maintain both code paths.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 09:36:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Thu, Oct 31, 2019 at 9:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > float8 and related types are now hardcoded to pass-by-value or\n> > pass-by-reference depending on whether the build is 64- or 32-bit, as\n> > was previously also the default.\n>\n> I'm less happy with doing this. It makes it impossible to test the\n> pass-by-reference code paths without actually firing up a 32-bit\n> environment. It'd be fine to document --disable-float8-byval as\n> a developer-only option (it might be so already), but I don't want\n> to lose it completely. I fail to see any advantage in getting rid\n> of it, anyway, since we do still have to maintain both code paths.\n\nCould we get around this by making Datum 8 bytes everywhere?\n\nYears ago, we got rid of INT64_IS_BUSTED, so we're relying on 64-bit\ntypes working on all platforms. However, Datum on a system where\npointers are only 32 bits wide is also only 32 bits wide, so we can't\npass 64-bit quantities the same way we do elsewhere. But, why is the\nwidth of a Datum equal to the width of a pointer, anyway? 
It would\nsurely cost something to widen 1, 2, and 4 byte quantities to 8 bytes\nwhen packing them into datums on 32-bit platforms, but (1) we've long\nsince accepted that cost on 64-bit platforms, (2) it would save\npalloc/pfree cycles when packing 8 byte quantities into 4-byte values,\n(3) 32-bit platforms are now marginal and performance on those\nplatforms is not critical, and (4) it would simplify a lot of code and\nreduce future bugs.\n\nOn a related note, why do we store typbyval in the catalog anyway\ninstead of inferring it from typlen and maybe typalign? It seems like\na bad idea to record on disk the way we pass around values in memory,\nbecause it means that a change to how values are passed around in\nmemory has ramifications for on-disk compatibility.\n\nrhaas=# select typname, typlen, typbyval, typalign from pg_type where\ntyplen in (1,2,4,8) != typbyval;\n typname | typlen | typbyval | typalign\n----------+--------+----------+----------\n macaddr8 | 8 | f | i\n(1 row)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 10:41:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 1, 2019 at 7:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Could we get around this by making Datum 8 bytes everywhere?\n\nI really like that idea.\n\nEven Raspberry Pi devices (which can cost as little as $35) use 64-bit\nARM processors. 
It's abundantly clear that 32-bit platforms do not\nmatter enough to justify keeping all the SIZEOF_DATUM crud around.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 Nov 2019 08:22:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Nov 1, 2019 at 7:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Could we get around this by making Datum 8 bytes everywhere?\n\n> I really like that idea.\n\n> Even Raspberry Pi devices (which can cost as little as $35) use 64-bit\n> ARM processors. It's abundantly clear that 32-bit platforms do not\n> matter enough to justify keeping all the SIZEOF_DATUM crud around.\n\nThis line of argument seems to me to be the moral equivalent of\n\"let's drop 32-bit support altogether\". I'm not entirely on board\nwith that. Certainly, a lot of the world is 64-bit these days,\nbut people are still building small systems and they might want\na database; preferably one that hasn't been detuned to the extent\nthat it barely manages to run at all on such a platform. 
Making\na whole lot of internal APIs 64-bit would be a pretty big hit for\na 32-bit platform --- more instructions, more memory consumed for\nthings like Datum arrays, all in a memory space that's not that big.\n\nIt seems especially insane to conclude that we should pull the plug\non such use-cases just to get rid of one obscure configure option.\nIf we were expending any significant devel effort on supporting\n32-bit platforms, I might be ready to drop support, but we're not.\n(Robert's proposal looks to me like it's actually creating new work\nto do, not saving work.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 01 Nov 2019 14:00:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 1, 2019 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This line of argument seems to me to be the moral equivalent of\n> \"let's drop 32-bit support altogether\". I'm not entirely on board\n> with that.\n\nI don't think that those two things are equivalent at all. There may\neven be workloads that will benefit when run on 32-bit hardware.\nHaving to palloc() and pfree() with 8 byte integers is probably very\nslow.\n\n> Certainly, a lot of the world is 64-bit these days,\n> but people are still building small systems and they might want\n> a database; preferably one that hasn't been detuned to the extent\n> that it barely manages to run at all on such a platform.\n\nEven the very low end is 64-bit these days. $100 smartphones have\n64-bit CPUs and 4GB of ram. AFAICT, anything still being produced that\nis recognizable as a general purpose CPU (e.g. by having virtual\nmemory) is 64-bit. 
We're talking about a change that can't affect\nusers until late 2020 at the earliest.\n\nI accept that there are some number of users that still have 32-bit\nPostgres installations, probably because they happened to have a\n32-bit machine close at hand. The economics of running a database\nserver on a 32-bit machine are already awful, though.\n\n> It seems especially insane to conclude that we should pull the plug\n> on such use-cases just to get rid of one obscure configure option.\n> If we were expending any significant devel effort on supporting\n> 32-bit platforms, I might be ready to drop support, but we're not.\n> (Robert's proposal looks to me like it's actually creating new work\n> to do, not saving work.)\n\nThe mental burden of considering \"SIZEOF_DATUM == 4\" and\n\"USE_FLOAT8_BYVAL\" is a real cost for us. We maintain non-trivial\n32-bit only code for numeric abbreviated keys. We also have to worry\nabout pfree()'ing memory when USE_FLOAT8_BYVAL within\nheapam_index_validate_scan(). How confident are we that there isn't\nsome place that leaks memory on !USE_FLOAT8_BYVAL builds because\nsomebody forgot to add a pfree() in an #ifdef block?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 Nov 2019 12:15:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 1, 2019 at 3:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I don't think that those two things are equivalent at all. There may\n> even be workloads that will benefit when run on 32-bit hardware.\n> Having to palloc() and pfree() with 8 byte integers is probably very\n> slow.\n\nYeah! I mean, users who are using only 4-byte or smaller pass-by-value\nquantities will be harmed, especially in cases where they are storing\na lot of them at the same time (e.g. 
sorting) and especially if they\ndouble their space consumption and run out of their very limited\nsupply of memory. Those are all worthwhile considerations and perhaps\nstrong arguments against my proposal. But, people using 8-byte\noughta-be-pass-by-value quantities are certainly being harmed by the\npresent system. It's not a black-and-white thing, like, oh, 8-byte\ndatums on 32-bit system is awful all the time. Or at least, I don't\nthink it is.\n\n> The mental burden of considering \"SIZEOF_DATUM == 4\" and\n> \"USE_FLOAT8_BYVAL\" is a real cost for us. We maintain non-trivial\n> 32-bit only code for numeric abbreviated keys. We also have to worry\n> about pfree()'ing memory when USE_FLOAT8_BYVAL within\n> heapam_index_validate_scan(). How confident are we that there isn't\n> some place that leaks memory on !USE_FLOAT8_BYVAL builds because\n> somebody forgot to add a pfree() in an #ifdef block?\n\nYeah, I've had to fight with this multiple times, and so have other\npeople. It's a nuisance, and it causes bugs, certainly in draft\npatches, sometimes in committed ones. It's not \"free.\" If it's a cost\nworth paying, ok, but is it?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 16:19:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 1, 2019 at 1:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah! I mean, users who are using only 4-byte or smaller pass-by-value\n> quantities will be harmed, especially in cases where they are storing\n> a lot of them at the same time (e.g. sorting) and especially if they\n> double their space consumption and run out of their very limited\n> supply of memory.\n\nI think that you meant treble, not double. You're forgetting about the\npalloc() header overhead. 
;-)\n\nDoing even slightly serious work on a 32-bit machine is penny wise and\npound foolish. I also believe that we should support minority\nplatforms reasonably well, including 32-bit platforms, because it's\nalways a good idea to try to meet people where they are. Your proposal\ndoesn't seem like it really gives up on that goal.\n\n> Those are all worthwhile considerations and perhaps\n> strong arguments against my proposal. But, people using 8-byte\n> oughta-be-pass-by-value quantities are certainly being harmed by the\n> present system. It's not a black-and-white thing, like, oh, 8-byte\n> datums on 32-bit system is awful all the time. Or at least, I don't\n> think it is.\n\nRight.\n\n> Yeah, I've had to fight with this multiple times, and so have other\n> people. It's a nuisance, and it causes bugs, certainly in draft\n> patches, sometimes in committed ones. It's not \"free.\" If it's a cost\n> worth paying, ok, but is it?\n\n#ifdef crud is something that we should go out of our way to eliminate\non general principle. All good portable C codebases go to great\nlengths to encapsulate platform differences, if necessary by adding a\ncompatibility layer. One of the worst things about the OpenSSL\ncodebase is that it makes writing portable code everybody's problem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 Nov 2019 13:49:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 01, 2019 at 02:00:10PM -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n>> Even Raspberry Pi devices (which can cost as little as $35) use 64-bit\n>> ARM processors. It's abundantly clear that 32-bit platforms do not\n>> matter enough to justify keeping all the SIZEOF_DATUM crud around.\n> \n> This line of argument seems to me to be the moral equivalent of\n> \"let's drop 32-bit support altogether\". 
I'm not entirely on board\n> with that. Certainly, a lot of the world is 64-bit these days,\n> but people are still building small systems and they might want\n> a database; preferably one that hasn't been detuned to the extent\n> that it barely manages to run at all on such a platform. Making\n> a whole lot of internal APIs 64-bit would be a pretty big hit for\n> a 32-bit platform --- more instructions, more memory consumed for\n> things like Datum arrays, all in a memory space that's not that big.\n\nI don't agree as well with the line of arguments to just remove 32b\nsupport. The newest models of PI indeed use 64b ARM processors, but\nthe first model, as well as the PI zero are on 32b if I recall\ncorrectly, and I would like to believe that these are still widely\nused.\n--\nMichael", "msg_date": "Sat, 2 Nov 2019 11:47:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On 2019-11-02 11:47:26 +0900, Michael Paquier wrote:\n> On Fri, Nov 01, 2019 at 02:00:10PM -0400, Tom Lane wrote:\n> > Peter Geoghegan <pg@bowt.ie> writes:\n> >> Even Raspberry Pi devices (which can cost as little as $35) use 64-bit\n> >> ARM processors. It's abundantly clear that 32-bit platforms do not\n> >> matter enough to justify keeping all the SIZEOF_DATUM crud around.\n> > \n> > This line of argument seems to me to be the moral equivalent of\n> > \"let's drop 32-bit support altogether\". I'm not entirely on board\n> > with that. Certainly, a lot of the world is 64-bit these days,\n> > but people are still building small systems and they might want\n> > a database; preferably one that hasn't been detuned to the extent\n> > that it barely manages to run at all on such a platform. 
Making\n> > a whole lot of internal APIs 64-bit would be a pretty big hit for\n> > a 32-bit platform --- more instructions, more memory consumed for\n> > things like Datum arrays, all in a memory space that's not that big.\n> \n> I don't agree as well with the line of arguments to just remove 32b\n> support.\n\nNobody is actually proposing that, as far as I can tell? I mean, by all\nmeans argue that the overhead is too high, but just claiming that\nslowing down 32bit systems by a measurable fraction is morally\nequivalent to removing 32bit support seems pointlessly facetious.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Nov 2019 20:14:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 1, 2019 at 7:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > This line of argument seems to me to be the moral equivalent of\n> > \"let's drop 32-bit support altogether\". I'm not entirely on board\n> > with that. Certainly, a lot of the world is 64-bit these days,\n> > but people are still building small systems and they might want\n> > a database; preferably one that hasn't been detuned to the extent\n> > that it barely manages to run at all on such a platform. 
Making\n> > a whole lot of internal APIs 64-bit would be a pretty big hit for\n> > a 32-bit platform --- more instructions, more memory consumed for\n> > things like Datum arrays, all in a memory space that's not that big.\n>\n> I don't agree as well with the line of arguments to just remove 32b\n> support.\n\nClearly you didn't read what I actually wrote, Michael.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 Nov 2019 21:41:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On 2019-10-31 14:36, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> float4 is now always pass-by-value; the pass-by-reference code path is\n>> completely removed.\n> \n> I think this is OK.\n\nOK, here is a patch for just this part, and we can continue the \ndiscussion on the rest in the meantime.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 2 Nov 2019 08:39:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On 2019-11-01 15:41, Robert Haas wrote:\n> On a related note, why do we store typbyval in the catalog anyway\n> instead of inferring it from typlen and maybe typalign? It seems like\n> a bad idea to record on disk the way we pass around values in memory,\n> because it means that a change to how values are passed around in\n> memory has ramifications for on-disk compatibility.\n\nThis sounds interesting. It would remove a pg_upgrade hazard (in the \nlong run).\n\nThere is some backward compatibility to be concerned about. 
This change \nwould require extension authors to change their code to insert #ifdef \nUSE_FLOAT8_BYVAL or similar, where currently their code might only \nsupport one method or the other.\n\n> rhaas=# select typname, typlen, typbyval, typalign from pg_type where\n> typlen in (1,2,4,8) != typbyval;\n\nThere are also typlen=6 types. Who knew. ;-)\n\n> typname | typlen | typbyval | typalign\n> ----------+--------+----------+----------\n> macaddr8 | 8 | f | i\n> (1 row)\n\nThis might be a case of the above issue: It's easier to just make it \npass by reference always than deal with a bunch of #ifdefs.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 2 Nov 2019 08:46:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-11-01 15:41, Robert Haas wrote:\n>> On a related note, why do we store typbyval in the catalog anyway\n>> instead of inferring it from typlen and maybe typalign? It seems like\n>> a bad idea to record on disk the way we pass around values in memory,\n>> because it means that a change to how values are passed around in\n>> memory has ramifications for on-disk compatibility.\n\n> This sounds interesting. It would remove a pg_upgrade hazard (in the \n> long run).\n\n> There is some backward compatibility to be concerned about.\n\nYeah. The point here is that typbyval specifies what the C functions\nconcerned with the datatype are expecting. 
We can't just up and say\n\"we're going to decide that for you\".\n\nI do get the point that supporting two different typbyval options\nfor float8 and related types is a nontrivial cost.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Nov 2019 11:26:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "Hi,\n\nOn 2019-11-02 08:46:12 +0100, Peter Eisentraut wrote:\n> On 2019-11-01 15:41, Robert Haas wrote:\n> > On a related note, why do we store typbyval in the catalog anyway\n> > instead of inferring it from typlen and maybe typalign? It seems like\n> > a bad idea to record on disk the way we pass around values in memory,\n> > because it means that a change to how values are passed around in\n> > memory has ramifications for on-disk compatibility.\n>\n> This sounds interesting. It would remove a pg_upgrade hazard (in the long\n> run).\n>\n> There is some backward compatibility to be concerned about. This change\n> would require extension authors to change their code to insert #ifdef\n> USE_FLOAT8_BYVAL or similar, where currently their code might only support\n> one method or the other.\n\nI think we really ought to remove the difference behind macros. That is,\nfor types that could be either, provide macros that fetch function\narguments into local memory, independent of whether the argument is a\nbyval or byref type. I.e. instead of having separate #ifdef\nUSE_FLOAT8_BYVALs for DatumGetFloat8(), DatumGetInt64(), ... 
we should\nprovide that logic in one centralized set of macros.\n\nThe fact that USE_FLOAT8_BYVAL has to creep into various functions imo\nis the reasons why people are unhappy about it.\n\nOne way to do this would be something roughly like this sketch:\n\n/* allow to force types to be byref, for testing purposes */\n#if USE_FLOAT8_BYVAL\n#define DatumForTypeIsByval(type) (sizeof(type) <= SIZEOF_DATUM)\n#else\n#define DatumForTypeIsByval(type) (sizeof(type) <= sizeof(int))\n#endif\n\n/* yes, I dream of being C++ once grown up */\n#define DefineSmallFixedWidthDatumTypeConversions(type, TypeToDatumName, DatumToTypeName) \\\n static inline type \\\n TypeToDatumName (type value) \\\n { \\\n if (DatumForTypeIsByval(type)) \\\n { \\\n Datum tmp; \\\n \\\n /* ensure correct conversion, consider e.g. float */ \\\n memcpy(&tmp, &value, sizeof(type)); \\\n \\\n return tmp; \\\n } \\\n else \\\n { \\\n type *tmp = (type *) palloc(sizeof(datum)); \\\n\n *tmp = value;\n\n return PointerGetDatum(tmp); \\\n } \\\n } \\\n \\\n static inline type \\\n DatumToTypeName (Datum datum) \\\n { \\\n if (DatumForTypeIsByval(type)) \\\n { \\\n type tmp; \\\n \\\n /* ensure correct conversion */ \\\n memcpy(&tmp, &datum, sizeof(type)); \\\n \\\n return tmp; \\\n } \\\n else \\\n return *(type *) DatumGetPointer(type); \\\n } \\\n\nAnd then have\n\nDefineSmallFixedWidthDatumTypeConversions(\n float8,\n Float8GetDatum,\n DatumGetFloat8);\n\nDefineSmallFixedWidthDatumTypeConversions(\n int64,\n Int64GetDatum,\n DatumGetInt64);\n\nAnd now also\n\nDefineSmallFixedWidthDatumTypeConversions(\n macaddr,\n Macaddr8GetDatum,\n DatumGetMacaddr8);\n\n(there's probably plenty of bugs in the above, it's just a sketch)\n\n\nWe don't have to break types / extensions with inferring byval for fixed\nwidth types. 
Instead we can change CREATE TYPE's PASSEDBYVALUE to accept\nan optional parameter 'auto', allowing to opt in.\n\n\n> > rhaas=# select typname, typlen, typbyval, typalign from pg_type where\n> > typlen in (1,2,4,8) != typbyval;\n>\n> There are also typlen=6 types. Who knew. ;-)\n\nThere's a recent thread about them :)\n\n\n> > typname | typlen | typbyval | typalign\n> > ----------+--------+----------+----------\n> > macaddr8 | 8 | f | i\n> > (1 row)\n>\n> This might be a case of the above issue: It's easier to just make it pass by\n> reference always than deal with a bunch of #ifdefs.\n\nIndeed. And that's a bad sign imo.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 2 Nov 2019 17:00:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 01, 2019 at 09:41:58PM -0700, Peter Geoghegan wrote:\n> On Fri, Nov 1, 2019 at 7:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I don't agree as well with the line of arguments to just remove 32b\n>> support.\n> \n> Clearly you didn't read what I actually wrote, Michael.\n\nSorry, that was an misunderstanding from my side.\n--\nMichael", "msg_date": "Thu, 7 Nov 2019 15:47:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Sat, Nov 2, 2019 at 8:00 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we really ought to remove the difference behind macros. That is,\n> for types that could be either, provide macros that fetch function\n> arguments into local memory, independent of whether the argument is a\n> byval or byref type. I.e. instead of having separate #ifdef\n> USE_FLOAT8_BYVALs for DatumGetFloat8(), DatumGetInt64(), ... 
we should\n> provide that logic in one centralized set of macros.\n>\n> The fact that USE_FLOAT8_BYVAL has to creep into various functions imo\n> is the reasons why people are unhappy about it.\n\nI think I'm *more* unhappy about the fact that it affects a bunch of\nthings that are not about whether float8 is passed byval. I mean, you\nmention DatumGetInt64() above, but why in the world does a setting\nwith \"float8\" in the name affect how we pass int64? The thing is like\nkudzu, getting into all sorts of things that it has no business\naffecting - at least if you judge by the name - and for no really\nclear reason.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 13 Nov 2019 07:36:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On 2019-11-02 08:39, Peter Eisentraut wrote:\n> On 2019-10-31 14:36, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> float4 is now always pass-by-value; the pass-by-reference code path is\n>>> completely removed.\n>>\n>> I think this is OK.\n> \n> OK, here is a patch for just this part, and we can continue the\n> discussion on the rest in the meantime.\n\nI have committed this part.\n\nI will rebase and continue developing the rest of the patches based on \nthe discussion so far.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 Nov 2019 19:20:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Thu, Nov 21, 2019 at 07:20:28PM +0100, Peter Eisentraut wrote:\n> I have committed this part.\n> \n> I will rebase and continue 
developing the rest of the patches based on the\n> discussion so far.\n\nBased on that I am marking the patch as committed in the CF. The rest\nof the patch set could have a new entry.\n--\nMichael", "msg_date": "Mon, 25 Nov 2019 11:31:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "My revised proposal is to remove --disable-float8-byval as a configure \noption but keep it as an option in pg_config_manual.h. It is no longer \nuseful as a user-facing option, but as was pointed out, it is somewhat \nuseful for developers, so pg_config_manual.h seems like the right place.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 Nov 2019 21:27:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> My revised proposal is to remove --disable-float8-byval as a configure \n> option but keep it as an option in pg_config_manual.h. It is no longer \n> useful as a user-facing option, but as was pointed out, it is somewhat \n> useful for developers, so pg_config_manual.h seems like the right place.\n\nOK, works for me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Nov 2019 15:33:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On 2019-11-26 21:33, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> My revised proposal is to remove --disable-float8-byval as a configure\n>> option but keep it as an option in pg_config_manual.h. 
It is no longer\n>> useful as a user-facing option, but as was pointed out, it is somewhat\n>> useful for developers, so pg_config_manual.h seems like the right place.\n> \n> OK, works for me.\n\ndone\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 27 Nov 2019 13:30:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Nov 1, 2019 at 1:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Nov 1, 2019 at 3:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I don't think that those two things are equivalent at all. There may\n> > even be workloads that will benefit when run on 32-bit hardware.\n> > Having to palloc() and pfree() with 8 byte integers is probably very\n> > slow.\n>\n> Yeah! I mean, users who are using only 4-byte or smaller pass-by-value\n> quantities will be harmed, especially in cases where they are storing\n> a lot of them at the same time (e.g. sorting) and especially if they\n> double their space consumption and run out of their very limited\n> supply of memory.\n\nApparently Linux has almost no upstream resources for testing 32-bit\nx86, and it shows:\n\nhttps://lwn.net/ml/oss-security/CALCETrW1z0gCLFJz-1Jwj_wcT3+axXkP_wOCxY8JkbSLzV80GA@mail.gmail.com/\n\nI think that this kind of thing argues for minimizing the amount of\ncode that can only be tested on a small minority of the computers that\nare in general use today. If no Postgres hacker regularly runs the\ncode, then its chances of having bugs are far higher. Having coverage\nin the buildfarm certainly helps, but it's no substitute.\n\nSticking with the !USE_FLOAT8_BYVAL example, it's easy to imagine\nsomebody forgetting to add a !USE_FLOAT8_BYVAL block, that contains\nthe required pfree(). 
Now you have a memory leak that only affects a\nsmall minority of platforms. How likely is it that buildfarm coverage\nwill help somebody detect that problem?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 12 Dec 2019 14:06:03 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On 2019-12-12 23:06, Peter Geoghegan wrote:\n> Apparently Linux has almost no upstream resources for testing 32-bit\n> x86, and it shows:\n\nBut isn't 32-bit Windows still a thing? Or does that work differently?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Dec 2019 13:33:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "On Fri, Dec 13, 2019 at 6:33 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-12-12 23:06, Peter Geoghegan wrote:\n> > Apparently Linux has almost no upstream resources for testing 32-bit\n> > x86, and it shows:\n>\n> But isn't 32-bit Windows still a thing? Or does that work differently?\n\nWell, again, I think the proposal here is not get rid of 32-bit\nsupport, but to have less code that only gets regularly tested on\n32-bit machines. If we made datums 8 bytes everywhere, we would have\nless such code, and very likely fewer bugs. And as pointed out\nupthread, although some things might perform worse for the remaining\nsupply of 32-bit users, other things might perform better. 
I'm not\n100% sure that it would work out to a win overall, but I think there's\na good chance, especially when you factor in the reduced bug surface.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 13 Dec 2019 09:43:39 -0600", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, again, I think the proposal here is not get rid of 32-bit\n> support, but to have less code that only gets regularly tested on\n> 32-bit machines.\n\nThat seems like generally a good plan. But as to the specific idea...\n\n> If we made datums 8 bytes everywhere, we would have\n> less such code, and very likely fewer bugs.\n\n... it's not entirely clear to me that it'd be possible to do this\nwithout causing a storm of \"cast from pointer to integer of different\nsize\" (and vice versa) warnings on 32-bit machines. That would be a\ndeal-breaker independently of any performance considerations, IMO.\nSo if anyone wants to pursue this, finding a way around that might be\nthe first thing to look at.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Dec 2019 11:25:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove configure --disable-float4-byval and\n --disable-float8-byval" } ]
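The byval-versus-byref trade-off debated in the thread above can be made concrete with a short standalone sketch. This is not PostgreSQL's actual Datum machinery — the type and function names below are hypothetical stand-ins, and a 64-bit build is assumed — but it shows why the pass-by-reference path costs an allocation (and a later free) that the pass-by-value path avoids:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for PostgreSQL's Datum; assumes a 64-bit build. */
typedef uint64_t Datum;

/* Pass-by-value path (USE_FLOAT8_BYVAL): the bits travel in the Datum. */
static Datum float8_get_datum_byval(double value)
{
    Datum d;

    memcpy(&d, &value, sizeof(value));  /* bit-copy, no allocation */
    return d;
}

static double datum_get_float8_byval(Datum d)
{
    double value;

    memcpy(&value, &d, sizeof(value));
    return value;
}

/*
 * Pass-by-reference path (!USE_FLOAT8_BYVAL): the Datum carries a pointer,
 * so every conversion costs an allocation -- and a later free.
 */
static Datum float8_get_datum_byref(double value)
{
    double *p = malloc(sizeof(double));

    *p = value;
    return (Datum) (uintptr_t) p;
}

static double datum_get_float8_byref(Datum d)
{
    return *(double *) (uintptr_t) d;
}
```

With helpers like these, forgetting the free that only the byref path needs is exactly the kind of leak that only shows up on builds where the pass-by-reference code is compiled in — the maintenance hazard the thread describes.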
[ { "msg_contents": "Hi,\n\nI want to know how postgres stores catalog relations in cache in-depth. Is\nthere any documentation for that?\n\nHi,I want to know how postgres stores catalog relations in cache in-depth. Is there any documentation for that?", "msg_date": "Thu, 31 Oct 2019 15:19:23 +0530", "msg_from": "Natarajan R <nataraj3098@gmail.com>", "msg_from_op": true, "msg_subject": "Postgres cache" }, { "msg_contents": "On Thu, Oct 31, 2019 at 03:19:23PM +0530, Natarajan R wrote:\n>Hi,\n>\n>I want to know how postgres stores catalog relations in cache in-depth. Is\n>there any documentation for that?\n\nNot sure what exactly you mean by \"cache\" - whether shared buffers (as a\nshared general database cache) or syscache/catcache, i.e. the special\ncache of catalog records each backend maintains privately.\n\nI'm not aware of exhaustive developer docs explaining these parts, but\nfor shared buffers you might want to look at\n\n src/backend/storage/buffer/README\n\nwhile for catcache/syscache you probably need to look at the code and\ncomments in the related files, particularly in \n\n src/backend/utils/cache/syscache.c\n src/backend/utils/cache/relcache.c\n\nNot sure if there's a better source of information :-(\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 31 Oct 2019 22:24:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Postgres cache" } ]
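The backend-private catcache pointed at in the reply above boils down to a memoized lookup: probe a local table by key, and only fall back to the expensive catalog read on a miss. A toy standalone illustration of that idea — hypothetical names, a fixed-size direct-mapped table, and a stub standing in for the real catalog scan:

```c
#include <assert.h>
#include <stdint.h>

#define CACHE_SLOTS 64

typedef struct CacheEntry
{
    uint32_t key;               /* e.g. a type OID */
    int      value;             /* stand-in for a cached catalog tuple */
    int      valid;
} CacheEntry;

static CacheEntry cache[CACHE_SLOTS];
static int catalog_reads;       /* counts simulated "real" catalog accesses */

/* Stub for the expensive path: scanning the catalog itself. */
static int read_from_catalog(uint32_t key)
{
    catalog_reads++;
    return (int) (key * 2);     /* fabricated "tuple" derived from the key */
}

/* Memoized lookup: the essence of a backend-local catalog cache. */
static int cache_lookup(uint32_t key)
{
    CacheEntry *e = &cache[key % CACHE_SLOTS];

    if (!e->valid || e->key != key)
    {
        e->key = key;
        e->value = read_from_catalog(key);
        e->valid = 1;
    }
    return e->value;
}
```

Repeated lookups for the same key are served from the local table; the real catcache layers hashing, reference counting, and invalidation messages on top so entries are dropped when the catalogs change.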
[ { "msg_contents": "Hi,\n\nThe command tag of ALTER MATERIALIZED VIEW is basically\n\"ALTER MATERIALIZED VIEW\". For example,\n\n =# ALTER MATERIALIZED VIEW test ALTER COLUMN j SET STATISTICS 100;\n ALTER MATERIALIZED VIEW\n =# ALTER MATERIALIZED VIEW test OWNER TO CURRENT_USER;\n ALTER MATERIALIZED VIEW\n =# ALTER MATERIALIZED VIEW test RENAME TO hoge;\n ALTER MATERIALIZED VIEW\n\nThis is ok and looks intuitive to users. But I found that the command tag of\nALTER MATERIALIZED VIEW RENAME COLUMN is \"ALTER TABLE\", not \"ALTER VIEW\".\n\n =# ALTER MATERIALIZED VIEW hoge RENAME COLUMN j TO x;\n ALTER TABLE\n\nIs this intentional? Or bug?\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Thu, 31 Oct 2019 19:38:35 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" }, { "msg_contents": "Fujii Masao <masao.fujii@gmail.com> writes:\n> ... I found that the command tag of\n> ALTER MATERIALIZED VIEW RENAME COLUMN is \"ALTER TABLE\", not \"ALTER VIEW\".\n\n> =# ALTER MATERIALIZED VIEW hoge RENAME COLUMN j TO x;\n> ALTER TABLE\n\n> Is this intentional? Or bug?\n\nSeems like an oversight.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 09:56:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" }, { "msg_contents": "On Thu, Oct 31, 2019 at 6:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Fujii Masao <masao.fujii@gmail.com> writes:\n> > ... I found that the command tag of\n> > ALTER MATERIALIZED VIEW RENAME COLUMN is \"ALTER TABLE\", not \"ALTER VIEW\".\n>\n> > =# ALTER MATERIALIZED VIEW hoge RENAME COLUMN j TO x;\n> > ALTER TABLE\n>\n> > Is this intentional? 
Or bug?\n>\n> Seems like an oversight.\n>\n> regards, tom lane\n>\n>\n>\nThe same issue is with ALTER FOREIGN TABLE\n\n# ALTER FOREIGN TABLE ft RENAME COLUMN a to t;\n\nALTER TABLE\n\n\n# ALTER MATERIALIZED VIEW mv RENAME COLUMN a to r;\n\nALTER TABLE\n\n\n\nAttached patch fixes that for ALTER VIEW , ALTER MATERIALIZED VIEW and\nALTER FOREIGN TABLE\n\n\n# ALTER MATERIALIZED VIEW mv RENAME COLUMN a to r;\n\nALTER MATERIALIZED VIEW\n\n\n# ALTER FOREIGN TABLE ft RENAME COLUMN a to t;\n\nALTER FOREIGN TABLE\n\n\n-- \nIbrar Ahmed", "msg_date": "Fri, 1 Nov 2019 02:33:53 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" }, { "msg_contents": "On Fri, Nov 1, 2019 at 6:34 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n>\n>\n> On Thu, Oct 31, 2019 at 6:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Fujii Masao <masao.fujii@gmail.com> writes:\n>> > ... I found that the command tag of\n>> > ALTER MATERIALIZED VIEW RENAME COLUMN is \"ALTER TABLE\", not \"ALTER VIEW\".\n>>\n>> > =# ALTER MATERIALIZED VIEW hoge RENAME COLUMN j TO x;\n>> > ALTER TABLE\n>>\n>> > Is this intentional? Or bug?\n>>\n>> Seems like an oversight.\n\nThanks for the check!\n\n> The same issue is with ALTER FOREIGN TABLE\n\nYes.\n\n> Attached patch fixes that for ALTER VIEW , ALTER MATERIALIZED VIEW and ALTER FOREIGN TABLE\n\nYou introduced subtype in your patch, but I think it's better and simpler\nto just give relationType to AlterObjectTypeCommandTag()\nif renaming the columns (i.e., renameType = OBJECT_COLUMN).\n\nTo avoid this kind of oversight about command tag, I'd like to add regression\ntests to make sure that SQL returns valid and correct command tag.\nBut currently there seems no mechanism for such test, in regression\ntest. 
Right??\nMaybe we will need that mechanism.\n\nRegards,\n\n-- \nFujii Masao", "msg_date": "Fri, 1 Nov 2019 12:00:31 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" }, { "msg_contents": "On Fri, Nov 1, 2019 at 8:00 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n\n> On Fri, Nov 1, 2019 at 6:34 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> >\n> >\n> >\n> > On Thu, Oct 31, 2019 at 6:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> Fujii Masao <masao.fujii@gmail.com> writes:\n> >> > ... I found that the command tag of\n> >> > ALTER MATERIALIZED VIEW RENAME COLUMN is \"ALTER TABLE\", not \"ALTER\n> VIEW\".\n> >>\n> >> > =# ALTER MATERIALIZED VIEW hoge RENAME COLUMN j TO x;\n> >> > ALTER TABLE\n> >>\n> >> > Is this intentional? Or bug?\n> >>\n> >> Seems like an oversight.\n>\n> Thanks for the check!\n>\n> > The same issue is with ALTER FOREIGN TABLE\n>\n> Yes.\n>\n> > Attached patch fixes that for ALTER VIEW , ALTER MATERIALIZED VIEW and\n> ALTER FOREIGN TABLE\n>\n> You introduced subtype in your patch, but I think it's better and simpler\n> to just give relationType to AlterObjectTypeCommandTag()\n> if renaming the columns (i.e., renameType = OBJECT_COLUMN).\n>\n> That's works perfectly along with future oversight about the command tag.\n\n\n> To avoid this kind of oversight about command tag, I'd like to add\n> regression\n> tests to make sure that SQL returns valid and correct command tag.\n> But currently there seems no mechanism for such test, in regression\n> test. 
Right??\n>\n\nDo we really need a regression test cases for such small oversights?\n\n\n> Maybe we will need that mechanism.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n>\n\n\n-- \nIbrar Ahmed
", "msg_date": "Fri, 1 Nov 2019 14:17:03 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" }, { "msg_contents": "On Fri, Nov 01, 2019 at 02:17:03PM +0500, Ibrar Ahmed wrote:\n> Do we really need a regression test cases for such small oversights?\n\nIt is possible to get the command tags using an event trigger... But\nthat sounds hack-ish.\n--\nMichael", "msg_date": "Sat, 2 Nov 2019 16:40:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" }, { "msg_contents": "On Sat, Nov 2, 2019 at 4:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Nov 01, 2019 at 02:17:03PM +0500, Ibrar Ahmed wrote:\n> > Do we really need a regression test cases for such small oversights?\n>\n> It is possible to get the command tags using an event trigger... But\n> that sounds hack-ish.\n\nYes, so if simple test mechanism to check command tag exists,\nit would be helpful.\n\nI'm thinking to commit the patch. But I have one question; is it ok to\nback-patch? 
Since the patch changes the command tags for some commands,\n> for example, which might break the existing event trigger functions\n> using TG_TAG if we back-patch it. Or we should guarantee the compatibility of\n> command tag within the same major version?\n\nI would not back-patch this. I don't think it's enough of a bug\nto justify taking any compatibility risks for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 Nov 2019 09:19:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" }, { "msg_contents": "On Tue, Nov 5, 2019 at 11:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Fujii Masao <masao.fujii@gmail.com> writes:\n> > I'm thinking to commit the patch. But I have one question; is it ok to\n> > back-patch? Since the patch changes the command tags for some commands,\n> > for example, which might break the existing event trigger functions\n> > using TG_TAG if we back-patch it. Or we should guarantee the compatibility of\n> > command tag within the same major version?\n>\n> I would not back-patch this. I don't think it's enough of a bug\n> to justify taking any compatibility risks for.\n\n+1\nI committed the patch only to the master. Thanks!\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Wed, 6 Nov 2019 12:56:44 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: The command tag of \"ALTER MATERIALIZED VIEW RENAME COLUMN\"" } ]
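The fix settled on in the thread above — for a column rename, derive the tag from the relation's own type instead of hard-coding "ALTER TABLE" — can be sketched as a small pure function. The enum and names below are simplified stand-ins for the backend's ObjectType and AlterObjectTypeCommandTag(), not the actual code:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for the backend's ObjectType enum. */
typedef enum ObjectKind
{
    KIND_TABLE,
    KIND_VIEW,
    KIND_MATVIEW,
    KIND_FOREIGN_TABLE,
    KIND_COLUMN
} ObjectKind;

static const char *tag_for_kind(ObjectKind kind)
{
    switch (kind)
    {
        case KIND_VIEW:
            return "ALTER VIEW";
        case KIND_MATVIEW:
            return "ALTER MATERIALIZED VIEW";
        case KIND_FOREIGN_TABLE:
            return "ALTER FOREIGN TABLE";
        default:
            return "ALTER TABLE";
    }
}

/*
 * Command tag for a RENAME.  When a column is being renamed, report the
 * tag of the relation that owns the column rather than a generic
 * "ALTER TABLE" -- the behavior the committed patch established.
 */
static const char *rename_command_tag(ObjectKind renameKind,
                                      ObjectKind relationKind)
{
    return tag_for_kind(renameKind == KIND_COLUMN ? relationKind : renameKind);
}
```

So `ALTER MATERIALIZED VIEW ... RENAME COLUMN` reports "ALTER MATERIALIZED VIEW", while a plain table's column rename still reports "ALTER TABLE".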
[ { "msg_contents": "Dear hackers,\n\nAs declared last month, I propose again the new ECPG grammar, DECLARE STATEMENT.\nThis had been committed once, but it removed from PG12 because of\nsome problems. \nIn this mail, I want to report some problems that previous implementation has,\nproduce a new solution, and attach a WIP patch.\n\n[Basic function, Grammar, and Use case]\nThis statement will be used for the purpose of designating a connection easily.\nPlease see below:\nhttps://www.postgresql.org/message-id/flat/4E72940DA2BF16479384A86D54D0988A4D80D3C9@G01JPEXMBKW04\nThe Oracle's manual will also help your understanding:\nhttps://docs.oracle.com/en/database/oracle/oracle-database/19/lnpcc/embedded-SQL-statements-and-directives.html#GUID-0A30B7B4-BD91-42EA-AACE-2E9CBF7E9C1A\n\n[Issues]\nThat's why this feature has been reverted.\n1. The namespace of the identifier was not clear. If you use a same identifier for other SQL statements,\n these interfered each other and statements might be executed at the unexpected connection.\n2. Declaring at the outside of functions was not allowed. This specification is quite different from the other \n declarative statements, so some users might be confused.\n For instance, the following example was rejected.\n```\nEXEC SQL DECLARE stmt STATEMENT;\n\nint\nmain()\n{\n...\n\tEXEC SQL DECLARE cur CURSOR FOR stmt;\n...\n}\n```\n3. These specifications were not compatible with other DBMSs.\n\n[Solutions]\nThe namespace is set to be a file unit. This follows other DBMSs.\nWhen the DECLARE SATATEMENT statement is read, the name, identifier\nand the related connection are recorded.\nAnd if you use the declared identifier in order to prepare or declare cursor,\nthe fourth argument for ECPGdo(it represents the connection) will be overwritten.\nThis declaration is enabled only the precompile phase.\n\n [Limitations]\nThe declaration must be appeared before using it.\nThis also follows Pro*C precompiler.\n\nA WIP patch is attached. 
I confirmed that all ECPG tests pass;\nhowever, some documents are not included.\nThey will be added later.\nI applied pgindent as a test, but it might have failed because this is the\nfirst time for me.\n\nBest regards\n\nHayato Kuroda\nFUJITSU LIMITED\nE-Mail:kuroda.hayato@fujitsu.com", "msg_date": "Thu, 31 Oct 2019 12:29:30 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "Hi,\n\nOn Thu, Oct 31, 2019 at 12:29:30PM +0000, kuroda.hayato@fujitsu.com wrote:\n>Dear hackers,\n>\n>As declared last month, I propose again the new ECPG grammar, DECLARE STATEMENT.\n>This had been committed once, but it removed from PG12 because of\n>some problems.\n>In this mail, I want to report some problems that previous implementation has,\n>produce a new solution, and attach a WIP patch.\n>\n>[Basic function, Grammar, and Use case]\n>This statement will be used for the purpose of designating a connection easily.\n>Please see below:\n>https://www.postgresql.org/message-id/flat/4E72940DA2BF16479384A86D54D0988A4D80D3C9@G01JPEXMBKW04\n>The Oracle's manual will also help your understanding:\n>https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpcc/embedded-SQL-statements-and-directives.html#GUID-0A30B7B4-BD91-42EA-AACE-2E9CBF7E9C1A\n>\n>[Issues]\n>That's why this feature has been reverted.\n>1. The namespace of the identifier was not clear. If you use a same identifier for other SQL statements,\n> these interfered each other and statements might be executed at the unexpected connection.\n>2. Declaring at the outside of functions was not allowed. This specification is quite different from the other\n> declarative statements, so some users might be confused.\n> For instance, the following example was rejected.\n>```\n>EXEC SQL DECLARE stmt STATEMENT;\n>\n>int\n>main()\n>{\n>...\n>\tEXEC SQL DECLARE cur CURSOR FOR stmt;\n>...\n>}\n>```\n>3. 
These specifications were not compatible with other DBMSs.\n>\n>[Solutions]\n>The namespace is set to be a file unit. This follows other DBMSs.\n>When the DECLARE SATATEMENT statement is read, the name, identifier\n>and the related connection are recorded.\n>And if you use the declared identifier in order to prepare or declare cursor,\n>the fourth argument for ECPGdo(it represents the connection) will be overwritten.\n>This declaration is enabled only the precompile phase.\n>\n> [Limitations]\n>The declaration must be appeared before using it.\n>This also follows Pro*C precompiler.\n>\n>A WIP patch is attached. Confirm that all ECPG tests have passed,\n>however, some documents are not included.\n>They will be added later.\n>I applied the pgindent as a test, but it might be failed because this is the\n>first time for me.\n>\n\nI see there were no reviews of this new patch, with the feature\nreimplemented after it was reverted from PG12 in September :-(\n\nI'm not an ecpg expert (in fact I've never even used it), so my review\nis pretty superficial, but I only found a couple of minor whitespace\nissues (adding/removing a line/tab) - see the attached file.\n\nKuroda-san, you mentioned the patch is WIP. What other bits you think\nare missing / need improvement? 
I see you mentioned some documentation\nis missing - I suppose that's one of the missing pieces?\n\n\nFor the record, there were two threads discussing the implementation [1]\nand then the revert [2].\n\n[1] https://www.postgresql.org/message-id/flat/1F66B161998C704BABF8989B8A2AC0A313AA41%40G01JPEXMBYT05\n[2] https://www.postgresql.org/message-id/flat/TY2PR01MB2443EC8286995378AEB7D9F8F5B10%40TY2PR01MB2443.jpnprd01.prod.outlook.com\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jan 2020 03:52:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "On Sun, Jan 12, 2020 at 03:52:48AM +0100, Tomas Vondra wrote:\n> ...\n>\n>I'm not an ecpg expert (in fact I've never even used it), so my review\n>is pretty superficial, but I only found a couple of minor whitespace\n>issues (adding/removing a line/tab) - see the attached file.\n>\n\nMeh, forgot to attach the file ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 12 Jan 2020 04:41:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "On 1/11/20 10:41 PM, Tomas Vondra wrote:\n> On Sun, Jan 12, 2020 at 03:52:48AM +0100, Tomas Vondra wrote:\n>> ...\n>>\n>> I'm not an ecpg expert (in fact I've never even used it), so my review\n>> is pretty superficial, but I only found a couple of minor whitespace\n>> issues (adding/removing a line/tab) - see the attached file.\n>>\n> \n> Meh, forgot to attach the file ...\n\nAny thoughts on Tomas' comments?\n\nA big part of moving a patch forward is keeping the thread active and \nanswering comments/review.\n\nRegards,\n-- 
\n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 30 Mar 2020 12:53:29 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "> On 30 Mar 2020, at 18:53, David Steele <david@pgmasters.net> wrote:\n> \n> On 1/11/20 10:41 PM, Tomas Vondra wrote:\n>> On Sun, Jan 12, 2020 at 03:52:48AM +0100, Tomas Vondra wrote:\n>>> ...\n>>> \n>>> I'm not an ecpg expert (in fact I've never even used it), so my review\n>>> is pretty superficial, but I only found a couple of minor whitespace\n>>> issues (adding/removing a line/tab) - see the attached file.\n>>> \n>> Meh, forgot to attach the file ...\n> \n> Any thoughts on Tomas' comments?\n> \n> A big part of moving a patch forward is keeping the thread active and answering comments/review.\n\nThis patch has now been silent for quite a while, unless someone is interested\nenough to bring it forward it seems about time to close it.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 15 Sep 2020 12:07:29 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "> This patch has now been silent for quite a while, unless someone is\n> interested\n> enough to bring it forward it seems about time to close it.\n\nI am interested but still short on time. I will definitely look into it\nas soon as I find some spare minutes.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De\nMichael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\n\n\n\n", "msg_date": "Tue, 15 Sep 2020 12:31:57 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "Dear Tomas, Daniel, Michael, \r\n\r\nI missed your e-mails, and I apologize for the very late reply. \r\nI want to thank you for keeping the thread.\r\n\r\n> I'm not an ecpg expert (in fact I've never even used it), so my review\r\n> is pretty superficial, but I only found a couple of minor whitespace\r\n> issues (adding/removing a line/tab) - see the attached file.\r\n\r\nThanks, I fixed it.\r\n\r\n> Kuroda-san, you mentioned the patch is WIP. What other bits you think\r\n> are missing / need improvement? I see you mentioned some documentation\r\n> is missing - I suppose that's one of the missing pieces?\r\n\r\nAll the functionality I expect has already been implemented in the previous patch, \r\nand I thought that only docs and reviews were needed.\r\n\r\nFinally, I attach a new patch. This patch contains source changes, test code,\r\nand documentation changes. This one is not WIP.\r\n\r\nI will try to review other topics in the next Commitfest.\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n-----Original Message-----\r\nFrom: Michael Meskes <meskes@postgresql.org> \r\nSent: Tuesday, September 15, 2020 7:32 PM\r\nTo: pgsql-hackers@lists.postgresql.org\r\nSubject: Re: ECPG: proposal for new DECLARE STATEMENT\r\n\r\n> This patch has now been silent for quite a while, unless someone is\r\n> interested\r\n> enough to bring it forward it seems about time to close it.\r\n\r\nI am interested but still short on time. 
I will definitely look into it\r\nas soon as I find some spare minutes.\r\n\r\nMichael\r\n-- \r\nMichael Meskes\r\nMichael at Fam-Meskes dot De\r\nMichael at Meskes dot (De|Com|Net|Org)\r\nMeskes at (Debian|Postgresql) dot Org", "msg_date": "Fri, 23 Oct 2020 06:25:25 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nRecently I have been doing some work on ecpg, so I reviewed this patch. No problems were found.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 16 Nov 2020 12:52:40 +0000", "msg_from": "Shawn Wang <shawn.wang.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ECPG: proposal for new DECLARE STATEMENT" }, { "msg_contents": "Dear Hackers,\r\n\r\nI know I'm asking a big favor, but could you review (and commit) the patch?\r\nThe status became RFC last Nov., but no one has checked this since then.\r\nMaybe Meskes is quite busy and has no time to review it.\r\n\r\nThe main part of the patch is about 200 lines (that is, not so long), and maybe\r\nI have reviewed other patches longer than it.\r\n\r\nI will keep reviewing, so I would be happy if this were committed by the end of the next CF.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n", "msg_date": "Wed, 27 Jan 2021 09:18:28 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: ECPG: proposal for new DECLARE STATEMENT" } ]
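To make the connection-designation use case from the thread above concrete, the proposed grammar would be used roughly as follows in an embedded-SQL (.pgc) source file. This is an illustrative sketch only: the connection name `con1`, the statement name `stmt`, and the query are made up, and the exact placement of the `AT` clause should be checked against the patch itself — the shape below follows the Pro*C semantics the proposal says it tracks.

```
EXEC SQL BEGIN DECLARE SECTION;
char *query = "SELECT c FROM t";
EXEC SQL END DECLARE SECTION;

EXEC SQL CONNECT TO testdb AS con1;

/* Bind the statement name to con1 once, up front. */
EXEC SQL AT con1 DECLARE stmt STATEMENT;

/* The precompiler now routes later uses of "stmt" to con1,
 * without repeating AT con1 on every statement. */
EXEC SQL PREPARE stmt FROM :query;
EXEC SQL DECLARE cur CURSOR FOR stmt;
```

Per the limitations stated in the proposal, the declaration is file-scoped and must precede any PREPARE or DECLARE CURSOR that references the statement name.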
[ { "msg_contents": "Hi,\n\nThe attached patch adds an -a / --appname command line switch to \npg_basebackup, pg_receivewal and pg_recvlogical.\n\nThis is useful when f.ex. pg_receivewal needs to connect as a \nsynchronous client (synchronous_standby_names),\n\n pg_receivewal -h myhost -p 5432 -S replica1 -a replica1 --synchronous \n-D /wal\n\nI'll add the patch to the CommitFest for discussion, as there is overlap \nwith the -d switch.\n\nBest regards,\n Jesper", "msg_date": "Thu, 31 Oct 2019 08:52:58 -0400", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Application name for pg_basebackup and friends" }, { "msg_contents": "On 31/10/2019 14:52, Jesper Pedersen wrote:\n> Hi,\n> \n> The attached patch adds an -a / --appname command line switch to\n> pg_basebackup, pg_receivewal and pg_recvlogical.\n> \n> This is useful when f.ex. pg_receivewal needs to connect as a\n> synchronous client (synchronous_standby_names),\n> \n> pg_receivewal -h myhost -p 5432 -S replica1 -a replica1 --synchronous\n> -D /wal\n> \n> I'll add the patch to the CommitFest for discussion, as there is overlap\n> with the -d switch.\n\nYou can already set application name with the environment variable or on \nthe database connections string:\n\npg_receivewal -D /wal -d \"host=myhost application_name=myreceiver\"\n\nI don't think we need a new comand line switch for it.\n\n- Heikki\n\n\n", "msg_date": "Thu, 31 Oct 2019 15:10:35 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Application name for pg_basebackup and friends" }, { "msg_contents": "On Thu, Oct 31, 2019 at 03:10:35PM +0200, Heikki Linnakangas wrote:\n> You can already set application name with the environment variable or on the\n> database connections string:\n> \n> pg_receivewal -D /wal -d \"host=myhost application_name=myreceiver\"\n> \n> I don't think we need a new comand line switch for it.\n\n+1.\n--\nMichael", "msg_date": "Sat, 
2 Nov 2019 16:45:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Application name for pg_basebackup and friends" }, { "msg_contents": "On Sat, Nov 02, 2019 at 04:45:40PM +0900, Michael Paquier wrote:\n> On Thu, Oct 31, 2019 at 03:10:35PM +0200, Heikki Linnakangas wrote:\n>> You can already set application name with the environment variable or on the\n>> database connections string:\n>> \n>> pg_receivewal -D /wal -d \"host=myhost application_name=myreceiver\"\n>> \n>> I don't think we need a new comand line switch for it.\n> \n> +1.\n\nPlease note that I have marked this patch as rejected in the CF app,\nper the arguments upthread.\n--\nMichael", "msg_date": "Thu, 7 Nov 2019 15:51:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Application name for pg_basebackup and friends" }, { "msg_contents": "On 11/7/19 1:51 AM, Michael Paquier wrote:\n>>> I don't think we need a new comand line switch for it.\n>>\n>> +1.\n> \n> Please note that I have marked this patch as rejected in the CF app,\n> per the arguments upthread.\n\nOk, agreed.\n\nThanks for the feedback !\n\nBest regards,\n Jesper\n\n\n\n", "msg_date": "Thu, 7 Nov 2019 12:36:42 -0500", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": true, "msg_subject": "Re: Application name for pg_basebackup and friends" } ]
[ { "msg_contents": "Hi\n\nI almost finished patch optimizing non volatile function calls.\n\nselect f(t.n) from t where f(t.n) > 10 and f(t.n) < 100; needs 3 calls of\nf() for each tuple,\nafter applying patch only 1.\n\nAny pros and cons ?", "msg_date": "Thu, 31 Oct 2019 15:06:13 +0100", "msg_from": "Andrzej Barszcz <abusinf@gmail.com>", "msg_from_op": true, "msg_subject": "function calls optimization" }, { "msg_contents": "Hi, \n\nOn October 31, 2019 7:06:13 AM PDT, Andrzej Barszcz <abusinf@gmail.com> wrote:\n>Hi\n>\n>I almost finished patch optimizing non volatile function calls.\n>\n>select f(t.n) from t where f(t.n) > 10 and f(t.n) < 100; needs 3 calls\n>of\n>f() for each tuple,\n>after applying patch only 1.\n>\n>Any pros and cons ?\n\nDepends on the actual way of implementing this proposal. 
Think we need more details than what you idea here.\n\nWe've typically supposed that the cost of searching for duplicate\nsubexpressions would outweigh the benefits of sometimes finding them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 10:45:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Hi, \n\nOn October 31, 2019 7:45:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On October 31, 2019 7:06:13 AM PDT, Andrzej Barszcz\n><abusinf@gmail.com> wrote:\n>>> I almost finished patch optimizing non volatile function calls.\n>>> \n>>> select f(t.n) from t where f(t.n) > 10 and f(t.n) < 100; needs 3\n>calls\n>>> of\n>>> f() for each tuple,\n>>> after applying patch only 1.\n>>> \n>>> Any pros and cons ?\n>\n>> Depends on the actual way of implementing this proposal. Think we\n>need more details than what you idea here.\n>\n>We've typically supposed that the cost of searching for duplicate\n>subexpressions would outweigh the benefits of sometimes finding them.\n\nBased on profiles I've seen I'm not sure that's the right choice. Both for when the calls are expensive (say postgis stuff), and for when a lot of rows are processed.\n\nI think one part of doing this in a realistic manner is an efficient search for redundant expressions. The other, also non trivial, is how to even represent references to the results of expressions in other parts of the expression tree / other expressions.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Thu, 31 Oct 2019 07:53:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Hi\n\nOn October 31, 2019 7:53:20 AM PDT, Andres Freund <andres@anarazel.de> wrote:\n>On October 31, 2019 7:45:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>We've typically supposed that the cost of searching for duplicate\n>>subexpressions would outweigh the benefits of sometimes finding them.\n>\n>Based on profiles I've seen I'm not sure that's the right choice. Both\n>for when the calls are expensive (say postgis stuff), and for when a\n>lot of rows are processed.\n>\n>I think one part of doing this in a realistic manner is an efficient\n>search for redundant expressions. \n\nOne way to improve the situation - which is applicable in a lot of situations, e.g. setrefs.c etc - would be to compute hashes for (sub-) expression trees. Which makes it a lot easier to bail out early when trees are not the same, and also easier to build an efficient way to find redundant expressions. There's some complexity in invalidating such hashes once computed, and when to first compute them, obviously.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 31 Oct 2019 08:02:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "On 10/31/19 3:45 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On October 31, 2019 7:06:13 AM PDT, Andrzej Barszcz <abusinf@gmail.com> wrote:\n>>> Any pros and cons ?\n> \n>> Depends on the actual way of implementing this proposal. 
Think we need more details than what you idea here.\n> \n> We've typically supposed that the cost of searching for duplicate\n> subexpressions would outweigh the benefits of sometimes finding them.\n\nThat is an important concern, but given how SQL does not make it \nconvenient to re-use partial results of calculations I think such \nqueries are quite common in real world workloads.\n\nSo if we can make it cheap enough I think that it is a worthwhile \noptimization.\n\nAndreas\n\n\n", "msg_date": "Thu, 31 Oct 2019 16:05:28 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On October 31, 2019 7:45:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We've typically supposed that the cost of searching for duplicate\n>> subexpressions would outweigh the benefits of sometimes finding them.\n\n> Based on profiles I've seen I'm not sure that's the right choice. Both for when the calls are expensive (say postgis stuff), and for when a lot of rows are processed.\n\nYeah, if your mental model of a function call is some remarkably expensive\nPostGIS geometry manipulation, it's easy to justify doing a lot of work\nto try to detect duplicates. But most functions in most queries are\nmore like int4pl or btint8cmp, and it's going to be extremely remarkable\nif you can make back the planner costs of checking for duplicate usages\nof those.\n\nPossibly this could be finessed by only trying to find duplicates of\nfunctions that have high cost estimates. Not sure how high is high\nenough.\n\n> I think one part of doing this in a realistic manner is an efficient\n> search for redundant expressions. 
The other, also non trivial, is how to\n> even represent re eferences to the results of expressions in other parts of the expression tree / other expressions.\n\nYup, both of those would be critical to do right.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 11:06:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Hi, \n\nOn October 31, 2019 8:06:50 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On October 31, 2019 7:45:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us>\n>wrote:\n>>> We've typically supposed that the cost of searching for duplicate\n>>> subexpressions would outweigh the benefits of sometimes finding\n>them.\n>\n>> Based on profiles I've seen I'm not sure that's the right choice.\n>Both for when the calls are expensive (say postgis stuff), and for when\n>a lot of rows are processed.\n>\n>Yeah, if your mental model of a function call is some remarkably\n>expensive\n>PostGIS geometry manipulation, it's easy to justify doing a lot of work\n>to try to detect duplicates. But most functions in most queries are\n>more like int4pl or btint8cmp, and it's going to be extremely\n>remarkable\n>if you can make back the planner costs of checking for duplicate usages\n>of those.\n\nWell, if it's an expression containing those individuals cheap calls on a seqscan on a large table below an aggregate, it'd likely still be a win. But we don't, to my knowledge, really have a good way to model optimizations like this that should only be done if either expensive or have a high loop count.\n\nI guess one ugly way to deal with this would be to eliminate redundancies very late, e.g. during setrefs (where a better data structure for matching expressions would be good anyway), as we already know all the costs. \n\nBut ideally we would want to do be able to take such savings into account earlier, when considering different paths. 
I suspect that it might be a good enough vehicle to tackle the rest of the work however, at least initially.\n\nWe could also \"just\" do such elimination during expression \"compilation\", but it'd be better to not have to do something as complicated as this for every execution of a prepared statement.\n\n\n>> I think one part of doing this in a realistic manner is an efficient\n>> search for redundant expressions. The other, also non trivial, is how\n>to\n>> even represent re eferences to the results of expressions in other\n>parts of the expression tree / other expressions.\n>\n>Yup, both of those would be critical to do right.\n\nPotentially related note: for nodes like seqscan, combining the qual and projection processing into one expression seems to be a noticable win (at least when taking care do emit two different sets of deform expression steps). Wonder if something like that would take care of avoiding the need for cross expression value passing in enough places.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Thu, 31 Oct 2019 08:20:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Potentially related note: for nodes like seqscan, combining the qual and projection processing into one expression seems to be a noticable win (at least when taking care do emit two different sets of deform expression steps).\n\nThere's just one problem: that violates SQL semantics, and not in\na small way.\n\n\tselect 1/x from tab where x <> 0\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 31 Oct 2019 11:45:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Hi, \n\nOn October 31, 2019 8:45:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> Potentially related note: for nodes like seqscan, combining the qual\n>and projection processing into one expression seems to be a noticable\n>win (at least when taking care do emit two different sets of deform\n>expression steps).\n>\n>There's just one problem: that violates SQL semantics, and not in\n>a small way.\n>\n>\tselect 1/x from tab where x <> 0\n\nThe expression would obviously have to return early, before projecting, when not matching the qual? I'm basically just thinking of first executing the steps for the qual, and in the success case execute the projection steps before returning the quals positive result. \n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Thu, 31 Oct 2019 08:49:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "x <> 0 is evaluated first, 1/x only when x <> 0, not ?\n\nczw., 31 paź 2019 o 16:45 Tom Lane <tgl@sss.pgh.pa.us> napisał(a):\n\n> Andres Freund <andres@anarazel.de> writes:\n> > Potentially related note: for nodes like seqscan, combining the qual and\n> projection processing into one expression seems to be a noticable win (at\n> least when taking care do emit two different sets of deform expression\n> steps).\n>\n> There's just one problem: that violates SQL semantics, and not in\n> a small way.\n>\n> select 1/x from tab where x <> 0\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 31 Oct 2019 16:51:11 +0100", "msg_from": "Andrzej Barszcz <abusinf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Hi, \n\nOn October 31, 2019 8:51:11 AM PDT, Andrzej Barszcz <abusinf@gmail.com> wrote:\n>x <> 0 is evaluated first, 1/x only when x <> 0, not ?\n>\n>czw., 31 paź 2019 o 16:45 Tom Lane <tgl@sss.pgh.pa.us> napisał(a):\n>\n>> Andres Freund <andres@anarazel.de> writes:\n>> > Potentially related note: for nodes like seqscan, combining the\n>qual and\n>> projection processing into one expression seems to be a noticable win\n>(at\n>> least when taking 
care do emit two different sets of deform\n>expression\n>> steps).\n>>\n>> There's just one problem: that violates SQL semantics, and not in\n>> a small way.\n>>\n>> select 1/x from tab where x <> 0\n\nOn postgres lists the policy is to reply below the quoted bit, and to trim the quote appropriately.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 31 Oct 2019 08:52:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "I recently implemented something closely related to this. Combining and\nmigrating expensive STABLE user-defined functions to the FROM clause, where\nthe function is evaluated as a lateral join (or \"cross apply\"). I'm\ndefining expensive as 50x times more expensive than the default function\ncost.\n\nFor functions that return multiple outputs and where the query uses (...).*\nnotation, this will, for example, consolidate all of the calls in the *\nexpansion into a single call. It also looks in the WHERE clause and HAVING\nclause, and combines those references, too. Currently it requires the\nfunction to be in a top-level AND condition, if it appears in a predicate.\n\nI think I can get permission for contributing it back. 
If there's an\ninterest in it, let me know.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Thu, 31 Oct 2019 08:56:22 -0700 (MST)", "msg_from": "Jim Finnerty <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "Hi\n\nI need advice.\nResetExprContext(econtext) is defined as\nMemoryContextReset((econtext)->ecxt_per_tuple_memory).\nI can register callback in MemoryContext but it is always cleaned on every\ncall to MemoryContextReset().\nHow to reset some fields of ExprContext ( living in per_query_memory ) when\nResetExprContext is called ?\n\nczw., 31 paź 2019 o 16:52 Andres Freund <andres@anarazel.de> napisał(a):\n\n> Hi,\n>\n> On October 31, 2019 8:51:11 AM PDT, Andrzej Barszcz <abusinf@gmail.com>\n> wrote:\n> >x <> 0 is evaluated first, 1/x only when x <> 0, not ?\n> >\n> >czw., 31 paź 2019 o 16:45 Tom Lane <tgl@sss.pgh.pa.us> napisał(a):\n> >\n> >> Andres Freund <andres@anarazel.de> writes:\n> >> > Potentially related note: for nodes like seqscan, combining the\n> >qual and\n> >> projection processing into one expression seems to be a noticable win\n> >(at\n> >> least when taking care do emit two different sets of deform\n> >expression\n> >> steps).\n> >>\n> >> There's just one problem: that violates SQL semantics, and not in\n> >> a small way.\n> >>\n> >> select 1/x from tab where x <> 0\n>\n> On postgres lists the policy is to reply below the quoted bit, and to trim\n> the quote appropriately.\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>\n", "msg_date": "Mon, 18 Nov 2019 15:20:48 +0100", "msg_from": "Andrzej Barszcz <abusinf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "On Thu, Oct 31, 2019 at 11:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n>\n> Possibly this could be finessed by only trying to find duplicates of\n> functions that have high cost estimates. 
Not sure how high is high\n> enough.\n\n\ncan we just add a flag on pg_proc to show if the cost is high or not, if\nuser are not happy with that, they can change it by updating the value?\nbased on that most of the function call cost are low, this way may be\nhelpful for the searching of duplicate expressions.\n\nOn Thu, Oct 31, 2019 at 11:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nPossibly this could be finessed by only trying to find duplicates of\nfunctions that have high cost estimates.  Not sure how high is high\nenough. can we just add a flag on pg_proc to show if the cost is high or not,  if user are not happy with that,  they can change it by updating the value?  based on that most of the function call cost are low,   this way may be helpful for the searching of duplicate expressions.", "msg_date": "Thu, 21 Nov 2019 09:05:14 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: function calls optimization" }, { "msg_contents": "I think your first thought was good.\nHow high ? I think it's a matter of convention, certainly more than default\n100.\n\n\n\nczw., 21 lis 2019 o 02:05 Andy Fan <zhihui.fan1213@gmail.com> napisał(a):\n\n>\n>\n> On Thu, Oct 31, 2019 at 11:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>>\n>>\n>> Possibly this could be finessed by only trying to find duplicates of\n>> functions that have high cost estimates. Not sure how high is high\n>> enough.\n>\n>\n> can we just add a flag on pg_proc to show if the cost is high or not, if\n> user are not happy with that, they can change it by updating the value?\n> based on that most of the function call cost are low, this way may be\n> helpful for the searching of duplicate expressions.\n>\n\nI think your first thought was good.  How high ? I think it's a matter of convention, certainly more than default 100.  
czw., 21 lis 2019 o 02:05 Andy Fan <zhihui.fan1213@gmail.com> napisał(a):On Thu, Oct 31, 2019 at 11:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nPossibly this could be finessed by only trying to find duplicates of\nfunctions that have high cost estimates.  Not sure how high is high\nenough. can we just add a flag on pg_proc to show if the cost is high or not,  if user are not happy with that,  they can change it by updating the value?  based on that most of the function call cost are low,   this way may be helpful for the searching of duplicate expressions.", "msg_date": "Thu, 21 Nov 2019 09:37:46 +0100", "msg_from": "Andrzej Barszcz <abusinf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: function calls optimization" } ]
[ { "msg_contents": "This is the first of a number of patches to enhance SSL functionality,\nparticularly w.r.t. passphrases.\n\n\nThis patch provides a hook for a function that can supply an SSL\npassphrase. The hook can be filled in by a shared preloadable module. In\norder for that to be effective, the startup order is modified slightly.\nThere is a test attached that builds and uses one trivial\nimplementation, which just takes a configuration setting and rot13's it\nbefore supplying the result as the passphrase.\n\n\ncheers\n\n\nandrew\n\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 31 Oct 2019 11:37:04 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "ssl passphrase callback" }, { "msg_contents": "On Thu, Oct 31, 2019 at 11:37 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> This patch provides a hook for a function that can supply an SSL\n> passphrase. The hook can be filled in by a shared preloadable module. 
In\n> order for that to be effective, the startup order is modified slightly.\n> There is a test attached that builds and uses one trivial\n> implementation, which just takes a configuration setting and rot13's it\n> before supplying the result as the passphrase.\n\nIt seems to me that it would be a lot better to have an example in\ncontrib that does something which might be of actual use to users,\nsuch as running a shell command and reading the passphrase from\nstdout.\n\nFeatures that are only accessible by writing C code are, in general,\nnot as desirable as features which can be accessed via SQL or\nconfiguration.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 11:01:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 11/1/19 11:01 AM, Robert Haas wrote:\n> On Thu, Oct 31, 2019 at 11:37 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> This patch provides a hook for a function that can supply an SSL\n>> passphrase. The hook can be filled in by a shared preloadable module. In\n>> order for that to be effective, the startup order is modified slightly.\n>> There is a test attached that builds and uses one trivial\n>> implementation, which just takes a configuration setting and rot13's it\n>> before supplying the result as the passphrase.\n> It seems to me that it would be a lot better to have an example in\n> contrib that does something which might be of actual use to users,\n> such as running a shell command and reading the passphrase from\n> stdout.\n>\n> Features that are only accessible by writing C code are, in general,\n> not as desirable as features which can be accessed via SQL or\n> configuration.\n>\n\n\nWell, I tried to provide the most trivial and simple test I could come\nup with. 
Running a shell command can already be accomplished via the\nssl_passphrase_command setting.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 1 Nov 2019 13:57:29 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Sat, Nov 2, 2019 at 6:57 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> On 11/1/19 11:01 AM, Robert Haas wrote:\n> > On Thu, Oct 31, 2019 at 11:37 AM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> >> This patch provides a hook for a function that can supply an SSL\n> >> passphrase. The hook can be filled in by a shared preloadable module. In\n> >> order for that to be effective, the startup order is modified slightly.\n> >> There is a test attached that builds and uses one trivial\n> >> implementation, which just takes a configuration setting and rot13's it\n> >> before supplying the result as the passphrase.\n> > It seems to me that it would be a lot better to have an example in\n> > contrib that does something which might be of actual use to users,\n> > such as running a shell command and reading the passphrase from\n> > stdout.\n> >\n> > Features that are only accessible by writing C code are, in general,\n> > not as desirable as features which can be accessed via SQL or\n> > configuration.\n>\n> Well, I tried to provide the most trivial and simple test I could come\n> up with. 
Running a shell command can already be accomplished via the\n> ssl_passphrase_command setting.\n\nIt looks like the new declarations in libpq-be.h are ifdef'd out in a\nnon-USE_SSL build, but then we still try to build the new test module\nand it fails:\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.64071\n\n\n", "msg_date": "Tue, 5 Nov 2019 10:43:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Fri, Nov 1, 2019 at 01:57:29PM -0400, Andrew Dunstan wrote:\n> \n> On 11/1/19 11:01 AM, Robert Haas wrote:\n> > On Thu, Oct 31, 2019 at 11:37 AM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> >> This patch provides a hook for a function that can supply an SSL\n> >> passphrase. The hook can be filled in by a shared preloadable module. In\n> >> order for that to be effective, the startup order is modified slightly.\n> >> There is a test attached that builds and uses one trivial\n> >> implementation, which just takes a configuration setting and rot13's it\n> >> before supplying the result as the passphrase.\n> > It seems to me that it would be a lot better to have an example in\n> > contrib that does something which might be of actual use to users,\n> > such as running a shell command and reading the passphrase from\n> > stdout.\n> >\n> > Features that are only accessible by writing C code are, in general,\n> > not as desirable as features which can be accessed via SQL or\n> > configuration.\n> >\n> \n> \n> Well, I tried to provide the most trivial and simple test I could come\n> up with. Running a shell command can already be accomplished via the\n> ssl_passphrase_command setting.\n\nWhat is the value of a shared library over a shell command? 
We had this\ndiscussion in relation to archive_command years ago, and decided on a\nshell command as the best API.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 6 Nov 2019 20:23:56 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On 11/4/19 4:43 PM, Thomas Munro wrote:\n>\n> It looks like the new declarations in libpq-be.h are ifdef'd out in a\n> non-USE_SSL build, but then we still try to build the new test module\n> and it fails:\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.64071\n\n\n\nI think this updated patch should fix things.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 7 Nov 2019 12:30:50 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Thu, 7 Nov 2019 at 10:24, Bruce Momjian <bruce@momjian.us> wrote:\n\n\n> What is the value of a shared library over a shell command? We had this\n> discussion in relation to archive_command years ago, and decided on a\n> shell command as the best API.\n>\n\nI don't recall such a discussion, but I can give perspective:\n\n* shell command offered the widest and simplest API for integration, which\nwas the most important consideration for a backup API. 
That choice caused\ndifficulty with the need to pass information to the external command, e.g.\n%f %p\n\n* shared library is more appropriate for a security-related module, so\nusers can't see how it is configured, as well as being more\ntightly integrated so it can be better tailored to various uses\n\nSummary is that the choice is not random, nor mere preference\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 8 Nov 2019 23:12:08 +0900", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Fri, Nov 08, 2019 at 11:12:08PM +0900, Simon Riggs wrote:\n>On Thu, 7 Nov 2019 at 10:24, Bruce Momjian <bruce@momjian.us> wrote:\n>\n>\n>> What is the value of a shared library over a shell command? 
We had\n>> this discussion in relation to archive_command years ago, and decided\n>> on a shell command as the best API.\n>>\n>\n>I don't recall such a discussion, but I can give perspective:\n>\n>* shell command offered the widest and simplest API for integration,\n>which was the most important consideration for a backup API. That\n>choice caused difficulty with the need to pass information to the\n>external command, e.g. %f %p\n>\n\nIt's not clear to me why simple API for integration would be less\nvaluable for this feature. Also, I'm sure passing data to/from shell\ncommand may be tricky, but presumably we have figured how to do that.\n\n>* shared library is more appropriate for a security-related module, so\n>users can't see how it is configured, as well as being more\n>tightly integrated so it can be better tailored to various uses\n>\n\nI don't follow. Why would there be a significant difference between a\nshell command/script and shared library in this respect? If you don't\nwant the users to see the config, just store it in a separate file and\nit's about the same as storing it in the .so library.\n\nIs there something that can be done with a .so library but can't be done\nwith a shell command (which may just call a binary, with all the config\nincluded, making it equal to the .so solution)?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 9 Nov 2019 12:52:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, Nov 6, 2019 at 7:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Nov 1, 2019 at 01:57:29PM -0400, Andrew Dunstan wrote:\n> >\n> > On 11/1/19 11:01 AM, Robert Haas wrote:\n> > > On Thu, Oct 31, 2019 at 11:37 AM Andrew Dunstan\n> > > <andrew.dunstan@2ndquadrant.com> wrote:\n> > >> This patch provides a hook for a function 
that can supply an SSL\n> > >> passphrase. The hook can be filled in by a shared preloadable module.\n> In\n> > >> order for that to be effective, the startup order is modified\n> slightly.\n> > >> There is a test attached that builds and uses one trivial\n> > >> implementation, which just takes a configuration setting and rot13's\n> it\n> > >> before supplying the result as the passphrase.\n> > > It seems to me that it would be a lot better to have an example in\n> > > contrib that does something which might be of actual use to users,\n> > > such as running a shell command and reading the passphrase from\n> > > stdout.\n> > >\n> > > Features that are only accessible by writing C code are, in general,\n> > > not as desirable as features which can be accessed via SQL or\n> > > configuration.\n> > >\n> >\n> >\n> > Well, I tried to provide the most trivial and simple test I could come\n> > up with. Running a shell command can already be accomplished via the\n> > ssl_passphrase_command setting.\n>\n> What is the value of a shared library over a shell command?\n\n\nFor one, platforms where shell commands are a lot less convenient, such as\nWindows.\n\n\n\n> We had this\n> discussion in relation to archive_command years ago, and decided on a\n> shell command as the best API.\n>\n>\nI don't recall that from back then, but that was a long time ago.\n\nBut it's interesting that you mention it, given the number of people I have\nbeen discussing that with recently, under the topic of changing it from a\nshell command into a shared library API (with there being a shell command\nas one possible implementation of course).\n\nOne of the main reasons there being to be easily able to transfer more\nstate and give results other than just an exit code, no need to deal with\nparameter escaping etc. 
Which probably wouldn't matter as much to an SSL\npassphrase command, but still.\n\n//Magnus", "msg_date": "Sun, 10 Nov 2019 13:01:17 -0600", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On 11/7/19 12:30 PM, Andrew Dunstan wrote:\n> On 11/4/19 4:43 PM, Thomas Munro wrote:\n>> It looks like the new declarations in libpq-be.h are ifdef'd out in a\n>> non-USE_SSL build, but then we still try to build the new test module\n>> and it fails:\n>>\n>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.64071\n>\n>\n> I think this updated patch should fix things.\n>\n>\n\n\n\nThis time with a typo fixed to keep the cfbot happy.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 11 Nov 2019 14:19:44 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Sun, Nov 10, 2019 at 01:01:17PM -0600, Magnus Hagander wrote:\n> On Wed, Nov 6, 2019 at 7:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n> We had this\n> discussion in relation to archive_command years ago, and decided on a\n> shell command as the best API.\n>\n> I don't recall that from back then, but that was a long time ago.\n> \n> But it's interesting that you mention it, given the number of people I have\n> been discussing that with recently, under the topic of changing it from a shell\n> command into a shared library API (with there being a shell command as one\n> possible implementation of course).\n> \n> One of the main reasons there being to be easily able to transfer more state\n> and give results other than just an exit 
code, no need to deal with parameter\n> escaping etc. Which probably wouldn't matter as much to an SSL passphrase\n> command, but still.\n\nI get the callback-is-easier issue with shared objects, but are we\nexpecting to pass in more information here than we do for\narchive_command? I would think not. What I am saying is that if we\ndon't think passing things in works, we should fix all these external\ncommands, or something. I don't see why ssl_passphrase_command is\ndifferent, except that it is new. Or is it related to _securely_\npassing something?\n\nAlso, why was this patch posted without any discussion of these issues?\nShouldn't we ideally discuss the API first?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 12 Nov 2019 21:51:33 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Tue, Nov 12, 2019 at 09:51:33PM -0500, Bruce Momjian wrote:\n> On Sun, Nov 10, 2019 at 01:01:17PM -0600, Magnus Hagander wrote:\n> > On Wed, Nov 6, 2019 at 7:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > We had this\n> > discussion in relation to archive_command years ago, and decided on a\n> > shell command as the best API.\n> >\n> > I don't recall that from back then, but that was a long time ago.\n> > \n> > But it's interesting that you mention it, given the number of people I have\n> > been discussing that with recently, under the topic of changing it from a shell\n> > command into a shared library API (with there being a shell command as one\n> > possible implementation of course).\n> > \n> > One of the main reasons there being to be easily able to transfer more state\n> > and give results other than just an exit code, no need to deal with parameter\n> > escaping etc. 
Which probably wouldn't matter as much to an SSL passphrase\n> > command, but still.\n> \n> I get the callback-is-easier issue with shared objects, but are we\n> expecting to pass in more information here than we do for\n> archive_command? I would think not. What I am saying is that if we\n> don't think passing things in works, we should fix all these external\n> commands, or something. I don't see why ssl_passphrase_command is\n> different, except that it is new. Or is it related to _securely_\n> passing something?\n> \n> Also, why was this patch posted without any discussion of these issues?\n> Shouldn't we ideally discuss the API first?\n\nI wonder if every GUC that takes an OS command should allow a shared\nobject to be specified --- maybe control that if the command string\nstarts with a # or something.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 13 Nov 2019 08:08:23 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, 13 Nov 2019 at 13:08, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Nov 12, 2019 at 09:51:33PM -0500, Bruce Momjian wrote:\n> > On Sun, Nov 10, 2019 at 01:01:17PM -0600, Magnus Hagander wrote:\n> > > On Wed, Nov 6, 2019 at 7:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> > > One of the main reasons there being to be easily able to transfer more\n> state\n> > > and give results other than just an exit code, no need to deal with\n> parameter\n> > > escaping etc. Which probably wouldn't matter as much to an SSL\n> passphrase\n> > > command, but still.\n> >\n> > I get the callback-is-easier issue with shared objects, but are we\n> > expecting to pass in more information here than we do for\n> > archive_command? I would think not. 
What I am saying is that if we\n> > don't think passing things in works, we should fix all these external\n> > commands, or something. I don't see why ssl_passphrase_command is\n> > different, except that it is new.\n\n\n\n> Or is it related to _securely_passing something?\n>\n\nYes\n\n\n> > Also, why was this patch posted without any discussion of these issues?\n> > Shouldn't we ideally discuss the API first?\n>\n> I wonder if every GUC that takes an OS command should allow a shared\n> object to be specified --- maybe control that if the command string\n> starts with a # or something.\n>\n\nVery good idea\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 13 Nov 2019 13:20:43 +0000", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 11/13/19 8:08 AM, Bruce Momjian wrote:\n>\n>>\n>> Also, why was this patch posted without any discussion of these issues?\n>> Shouldn't we ideally discuss the API first?\n\n\nThis is a very tiny patch. I don't think it's unusual to post such\nthings without prior discussion. I would agree with your point if it\nwere thousands of lines instead of 20 or so lines of core code.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 13 Nov 2019 14:48:01 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, Nov 13, 2019 at 01:20:43PM +0000, Simon Riggs wrote:\n>On Wed, 13 Nov 2019 at 13:08, Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Tue, Nov 12, 2019 at 09:51:33PM -0500, Bruce Momjian wrote:\n>> > On Sun, Nov 10, 2019 at 01:01:17PM -0600, Magnus Hagander wrote:\n>> > > On Wed, Nov 6, 2019 at 7:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>> > > One of the main reasons there being to be easily able to transfer more\n>> state\n>> > > and give results other than just an exit code, no need to deal with\n>> parameter\n>> > > escaping etc. 
Which probably wouldn't matter as much to an SSL\n>> passphrase\n>> > > command, but still.\n>> >\n>> > I get the callback-is-easier issue with shared objects, but are we\n>> > expecting to pass in more information here than we do for\n>> > archive_command? I would think not. What I am saying is that if we\n>> > don't think passing things in works, we should fix all these external\n>> > commands, or something. I don't see why ssl_passphrase_command is\n>> > different, except that it is new.\n>\n>\n>\n>> Or is it related to _securely_passing something?\n>>\n>\n>Yes\n>\n\nI think it would be beneficial to explain why shared object is more\nsecure than an OS command. Perhaps it's common knowledge, but it's not\nquite obvious to me.\n\n>\n>> > Also, why was this patch posted without any discussion of these issues?\n>> > Shouldn't we ideally discuss the API first?\n>>\n>> I wonder if every GUC that takes an OS command should allow a shared\n>> object to be specified --- maybe control that if the command string\n>> starts with a # or something.\n>>\n>\n>Very good idea\n>\n\nIf it's about securely passing sensitive information (i.e. passphrase)\nas was suggested, then I think that only applies to fairly small number\nof GUCs.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 13 Nov 2019 21:23:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, Nov 13, 2019 at 02:48:01PM -0500, Andrew Dunstan wrote:\n>\n>On 11/13/19 8:08 AM, Bruce Momjian wrote:\n>>\n>>>\n>>> Also, why was this patch posted without any discussion of these issues?\n>>> Shouldn't we ideally discuss the API first?\n>\n>\n>This is a very tiny patch. I don't think it's unusual to post such\n>things without prior discussion. 
I would agree with your point if it\n>were thousands of lines instead of 20 or so lines of core code.\n>\n\nNot sure that's really true. I think patches should provide some basic\nexplanation why the feature is desirable, no matter the number of lines.\n\nAlso, there were vague references to issues with passing parameters to\narchive_command. A link to details, past discussion, or brief\nexplanation would be nice.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 13 Nov 2019 21:34:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, Nov 13, 2019 at 9:23 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, Nov 13, 2019 at 01:20:43PM +0000, Simon Riggs wrote:\n> >On Wed, 13 Nov 2019 at 13:08, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> >> On Tue, Nov 12, 2019 at 09:51:33PM -0500, Bruce Momjian wrote:\n> >> > On Sun, Nov 10, 2019 at 01:01:17PM -0600, Magnus Hagander wrote:\n> >> > > On Wed, Nov 6, 2019 at 7:24 PM Bruce Momjian <bruce@momjian.us>\n> wrote:\n> >>\n> >> > > One of the main reasons there being to be easily able to transfer\n> more\n> >> state\n> >> > > and give results other than just an exit code, no need to deal with\n> >> parameter\n> >> > > escaping etc. Which probably wouldn't matter as much to an SSL\n> >> passphrase\n> >> > > command, but still.\n> >> >\n> >> > I get the callback-is-easier issue with shared objects, but are we\n> >> > expecting to pass in more information here than we do for\n> >> > archive_command? I would think not. What I am saying is that if we\n> >> > don't think passing things in works, we should fix all these external\n> >> > commands, or something. 
I don't see why ssl_passphrase_command is\n> >> > different, except that it is new.\n> >\n> >\n> >\n> >> Or is it related to _securely_passing something?\n> >>\n> >\n> >Yes\n> >\n>\n> I think it would be beneficial to explain why shared object is more\n> secure than an OS command. Perhaps it's common knowledge, but it's not\n> quite obvious to me.\n>\n\nYeah, that probably wouldn't hurt. It's also securely passing from more\nthan one perspective -- both from the \"cannot be eavesdropped\" (like\nputting the password on the commandline for example) and the requirement\nfor escaping.\n\n\n>\n> >\n> >> > Also, why was this patch posted without any discussion of these\n> issues?\n> >> > Shouldn't we ideally discuss the API first?\n> >>\n> >> I wonder if every GUC that takes an OS command should allow a shared\n> >> object to be specified --- maybe control that if the command string\n> >> starts with a # or something.\n> >>\n> >\n> >Very good idea\n> >\n>\n> If it's about securely passing sensitive information (i.e. 
passphrase)\n> as was suggested, then I think that only applies to fairly small number\n> of GUCs.\n>\n\nThere aren't exactly a large number of GUCs that take OS commands in total.\nConsistency itself certainly has some value.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Nov 13, 2019 at 9:23 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:On Wed, Nov 13, 2019 at 01:20:43PM +0000, Simon Riggs wrote:\n>On Wed, 13 Nov 2019 at 13:08, Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Tue, Nov 12, 2019 at 09:51:33PM -0500, Bruce Momjian wrote:\n>> > On Sun, Nov 10, 2019 at 01:01:17PM -0600, Magnus Hagander wrote:\n>> > > On Wed, Nov 6, 2019 at 7:24 PM Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>> > > One of the main reasons there being to be easily able to transfer more\n>> state\n>> > > and give results other than just an exit code, no need to deal with\n>> parameter\n>> > > escaping etc. Which probably wouldn't matter as much to an SSL\n>> passphrase\n>> > > command, but still.\n>> >\n>> > I get the callback-is-easier issue with shared objects, but are we\n>> > expecting to pass in more information here than we do for\n>> > archive_command?  I would think not.  What I am saying is that if we\n>> > don't think passing things in works, we should fix all these external\n>> > commands, or something.   I don't see why ssl_passphrase_command is\n>> > different, except that it is new.\n>\n>\n>\n>> Or is it related to _securely_passing something?\n>>\n>\n>Yes\n>\n\nI think it would be beneficial to explain why shared object is more\nsecure than an OS command. Perhaps it's common knowledge, but it's not\nquite obvious to me.Yeah, that probably wouldn't hurt. It's also securely passing from more than one perspective -- both from the \"cannot be eavesdropped\" (like putting the password on the commandline for example) and the requirement for escaping. 
", "msg_date": "Thu, 14 Nov 2019 11:42:05 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, Nov 13, 2019 at 3:23 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I think it would be beneficial to explain why shared object is more\n> secure than an OS command. Perhaps it's common knowledge, but it's not\n> quite obvious to me.\n\nExternal command args can be viewed by other OS users (not just the\npostgres user). For non-sensitive arguments (ex: WAL filename?) that's\nnot an issue but if you plan on passing in something potentially\nsecret value from the backend to the external OS command, that value\nwould be exposed:\n\nTerminal 1 (run a command as some other user):\n$ sudo -u nobody sleep 5\n\nTerminal 2 (view command args as a different non-super user):\n$ ps -u nobody -o args\nCOMMAND\nsleep 5\n\nA shared library would not have this problem as the backend directly\nexecutes the library in the same process.\n\nHas the idea of using environment variables (rather than command line\nargs) for external commands been brought up before? 
I couldn't find\nanything in the mailing list archives.\n\nEnvironment variables have the advantage of only being readable by the\nprocess owner and super user. They also naturally have a \"name\" and do\nnot have escaping or quoting issues.\n\nFor example, archive_command %p could be exposed as\nPG_ARG_ARCHIVE_PATH and %f could be exposed as\nPG_ARG_ARCHIVE_FILENAME. Then you could have a script use them via:\n\n#!/usr/bin/env bash\nset -euo pipefail\nmain () {\n local archive_dir=\"/path/to/archive_dir/\"\n local archive_file=\"${archive_dir}${PG_ARG_ARCHIVE_FILENAME}\"\n test ! -f \"${archive_file}\" && cp -- \"${PG_ARG_ARCHIVE_PATH}\" \"${archive_file}\"\n}\nmain \"$@\"\n\nIt's not particularly useful for that basic archive case but if\nthere's something like PG_ARG_SUPER_SECRET then the executed command\ncould receive that value without it being exposed. That'd be useful\nfor something like a callout to an external KMS (key management\nsystem).\n\nNothing stops something like this from being used in tandem with\nstring substitution to create the full commands. That'd give backwards\ncompatibility too. The main limitation compared to a shared library is\nthat you'd still have to explicitly pick and name the exposed argument\nenvironment variables (i.e. like picking the set of %? substitutions).\nIf there's a generic shared-library-for-external-commands approach\nthen there could be a built-in library that provides this type of\n\"expose as env vars\" functionality.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. 
| https://www.jackdb.com/\n\n\n", "msg_date": "Thu, 14 Nov 2019 08:54:27 -0500", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Thu, Nov 14, 2019 at 11:42:05AM +0100, Magnus Hagander wrote:\n> On Wed, Nov 13, 2019 at 9:23 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> I think it would be beneficial to explain why shared object is more\n> secure than an OS command. Perhaps it's common knowledge, but it's not\n> quite obvious to me.\n> \n> \n> Yeah, that probably wouldn't hurt. It's also securely passing from more than\n> one perspective -- both from the \"cannot be eavesdropped\" (like putting the\n> password on the commandline for example) and the requirement for escaping.\n\nI think a bigger issue is that if you want to give people the option of\nusing a shell command or a shared object, and if you use two commands to\ncontrol it, it isn't clear what happens if both are defined. By using\nsome character prefix to control if a shared object is used, you can use\na single variable and there is no confusion over having two variables\nand their conflicting behavior.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 14 Nov 2019 11:07:52 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 11/14/19 11:07 AM, Bruce Momjian wrote:\n> On Thu, Nov 14, 2019 at 11:42:05AM +0100, Magnus Hagander wrote:\n>> On Wed, Nov 13, 2019 at 9:23 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>> I think it would be beneficial to explain why shared object is more\n>> secure than an OS command. Perhaps it's common knowledge, but it's not\n>> quite obvious to me.\n>>\n>>\n>> Yeah, that probably wouldn't hurt. 
It's also securely passing from more than\n>> one perspective -- both from the \"cannot be eavesdropped\" (like putting the\n>> password on the commandline for example) and the requirement for escaping.\n> I think a bigger issue is that if you want to give people the option of\n> using a shell command or a shared object, and if you use two commands to\n> control it, it isn't clear what happens if both are defined. By using\n> some character prefix to control if a shared object is used, you can use\n> a single variable and there is no confusion over having two variables\n> and their conflicting behavior.\n>\n\n\nI'm  not sure how that would work in the present instance. The shared\npreloaded module installs a function and defines the params it wants. If\nwe somehow unify the params with ssl_passphrase_command that could look\nicky, and the module would have to parse the settings string. That's not\na problem for the sample module which only needs one param, but it will\nbe for other more complex implementations.\n\nI'm quite open to suggestions, but I want things to be tolerably clean.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 14 Nov 2019 11:34:24 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Thu, Nov 14, 2019 at 11:34:24AM -0500, Andrew Dunstan wrote:\n>\n>On 11/14/19 11:07 AM, Bruce Momjian wrote:\n>> On Thu, Nov 14, 2019 at 11:42:05AM +0100, Magnus Hagander wrote:\n>>> On Wed, Nov 13, 2019 at 9:23 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>>> I think it would be beneficial to explain why shared object is more\n>>> secure than an OS command. Perhaps it's common knowledge, but it's not\n>>> quite obvious to me.\n>>>\n>>>\n>>> Yeah, that probably wouldn't hurt. 
It's also securely passing from more than\n>>> one perspective -- both from the \"cannot be eavesdropped\" (like putting the\n>>> password on the commandline for example) and the requirement for escaping.\n>> I think a bigger issue is that if you want to give people the option of\n>> using a shell command or a shared object, and if you use two commands to\n>> control it, it isn't clear what happens if both are defined. By using\n>> some character prefix to control if a shared object is used, you can use\n>> a single variable and there is no confusion over having two variables\n>> and their conflicting behavior.\n>>\n>\n>\n>I'm not sure how that would work in the present instance. The shared\n>preloaded module installs a function and defines the params it wants. If\n>we somehow unify the params with ssl_passphrase_command that could look\n>icky, and the module would have to parse the settings string. That's not\n>a problem for the sample module which only needs one param, but it will\n>be for other more complex implementations.\n>\n>I'm quite open to suggestions, but I want things to be tolerably clean.\n>\n\nI agree it's better to have two separate GUCs - one for command, one for\nshared object, and documented order of precedence. 
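For readers following along, the two-separate-GUC scheme discussed here might look something like this in postgresql.conf -- purely a sketch, since ssl_passphrase_function is hypothetical and only ssl_passphrase_command actually exists:

```
# Hypothetical sketch of the two-GUC scheme; ssl_passphrase_function is
# not a real PostgreSQL setting, ssl_passphrase_command is.
shared_preload_libraries = 'my_passphrase_mod'
ssl_passphrase_function = 'my_rot13_passphrase'       # would take precedence
ssl_passphrase_command = '/usr/local/bin/get-key %p'  # shell-command fallback
```

With a documented order of precedence, only one of the two would ever be consulted at key-load time.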
I suppose we may log\na warning when both are specified, or perhaps \"reset\" the value with\nlower order of precedence.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 Nov 2019 17:52:50 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Thu, Nov 14, 2019 at 11:34:24AM -0500, Andrew Dunstan wrote:\n> \n> On 11/14/19 11:07 AM, Bruce Momjian wrote:\n> > On Thu, Nov 14, 2019 at 11:42:05AM +0100, Magnus Hagander wrote:\n> >> On Wed, Nov 13, 2019 at 9:23 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> >> I think it would be beneficial to explain why shared object is more\n> >> secure than an OS command. Perhaps it's common knowledge, but it's not\n> >> quite obvious to me.\n> >>\n> >>\n> >> Yeah, that probably wouldn't hurt. It's also securely passing from more than\n> >> one perspective -- both from the \"cannot be eavesdropped\" (like putting the\n> >> password on the commandline for example) and the requirement for escaping.\n> > I think a bigger issue is that if you want to give people the option of\n> > using a shell command or a shared object, and if you use two commands to\n> > control it, it isn't clear what happens if both are defined. By using\n> > some character prefix to control if a shared object is used, you can use\n> > a single variable and there is no confusion over having two variables\n> > and their conflicting behavior.\n> >\n> \n> \n> I'm not sure how that would work in the present instance. The shared\n> preloaded module installs a function and defines the params it wants. If\n> we somehow unify the params with ssl_passphrase_command that could look\n> icky, and the module would have to parse the settings string. 
That's not\n> a problem for the sample module which only needs one param, but it will\n> be for other more complex implementations.\n> \n> I'm quite open to suggestions, but I want things to be tolerably clean.\n\nI was assuming if the variable starts with a #, it is a shared object,\nif not, it is a shell command:\n\n\tssl_passphrase_command='#/lib/x.so'\n\tssl_passphrase_command='my_command a b c'\n\nCan you show what you are talking about?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 14 Nov 2019 11:53:09 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On 2019-Nov-14, Bruce Momjian wrote:\n\n> I was assuming if the variable starts with a #, it is a shared object,\n> if not, it is a shell command:\n> \n> \tssl_passphrase_command='#/lib/x.so'\n> \tssl_passphrase_command='my_command a b c'\n\nNote that the proposed patch doesn't use a separate GUC -- it just uses\nshared_preload_libraries, and then it is the library that's in charge of\nsetting up the function. We probably wouldn't like to have multiple\nsettings that all do the same thing, such as recovery target (which\nseems to be a plentiful source of confusion).\n\nChanging the interface so that the user has to specify the function name\n(not the library name) in ssl_passphrase_command closes that ambiguity\nhole.\n\nNote that if you specify only the library name, it becomes redundant\nw.r.t. 
shared_preload_libraries; you could have more than one library\nsetting the function callback and it's hard to see which one wins.\n\nI think something like this would do it:\n ssl_passphrase_command='#superlib.so,my_rot13_passphrase'\n\nThis way, the library can still create any custom GUCs it pleases/needs,\nbut there's no possible confusion as to the function that's going to be\ncalled.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 Nov 2019 14:07:58 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Thu, Nov 14, 2019 at 02:07:58PM -0300, Alvaro Herrera wrote:\n> On 2019-Nov-14, Bruce Momjian wrote:\n> \n> > I was assuming if the variable starts with a #, it is a shared object,\n> > if not, it is a shell command:\n> > \n> > \tssl_passphrase_command='#/lib/x.so'\n> > \tssl_passphrase_command='my_command a b c'\n> \n> Note that the proposed patch doesn't use a separate GUC -- it just uses\n> shared_preload_libraries, and then it is the library that's in charge of\n> setting up the function. We probably wouldn't like to have multiple\n> settings that all do the same thing, such as recovery target (which\n> seems to be a plentiful source of confusion).\n> \n> Changing the interface so that the user has to specify the function name\n> (not the library name) in ssl_passphrase_command closes that ambiguity\n> hole.\n> \n> Note that if you specify only the library name, it becomes redundant\n> w.r.t. 
shared_preload_libraries; you could have more than one library\n> setting the function callback and it's hard to see which one wins.\n> \n> I think something like this would do it:\n> ssl_passphrase_command='#superlib.so,my_rot13_passphrase'\n> \n> This way, the library can still create any custom GUCs it pleases/needs,\n> but there's no possible confusion as to the function that's going to be\n> called.\n\nYeah, I was unclear how the function name would be specified. I thought\nit would just be hard-coded, but I like the above better. I am still\nunclear how parameters are passed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 14 Nov 2019 12:15:44 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 11/14/19 12:07 PM, Alvaro Herrera wrote:\n> On 2019-Nov-14, Bruce Momjian wrote:\n>\n>> I was assuming if the variable starts with a #, it is a shared object,\n>> if not, it is a shell command:\n>>\n>> \tssl_passphrase_command='#/lib/x.so'\n>> \tssl_passphrase_command='my_command a b c'\n> Note that the proposed patch doesn't use a separate GUC -- it just uses\n> shared_preload_libraries, and then it is the library that's in charge of\n> setting up the function. We probably wouldn't like to have multiple\n> settings that all do the same thing, such as recovery target (which\n> seems to be a plentiful source of confusion).\n>\n> Changing the interface so that the user has to specify the function name\n> (not the library name) in ssl_passphrase_command closes that ambiguity\n> hole.\n>\n> Note that if you specify only the library name, it becomes redundant\n> w.r.t. 
shared_preload_libraries; you could have more than one library\n> setting the function callback and it's hard to see which one wins.\n>\n> I think something like this would do it:\n> ssl_passphrase_command='#superlib.so,my_rot13_passphrase'\n>\n> This way, the library can still create any custom GUCs it pleases/needs,\n> but there's no possible confusion as to the function that's going to be\n> called.\n\n\nI guess this would work. There would have to be a deal of code to load\nthe library and lookup the symbol. Do we really think it's worth it?\nLeveraging shared_preload_libraries makes this comparatively simple.\n\n\nAlso, calling this 'ssl_passphrase_command' seems a little odd.\n\n\nA simpler way to handle it might be simply to error out and refuse to\nstart if both ssl_passphrase_function is set and ssl_passphrase_command\nis set.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 14 Nov 2019 14:29:23 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On 2019-Nov-14, Andrew Dunstan wrote:\n\n> I guess this would work. There would have to be a deal of code to load\n> the library and lookup the symbol. Do we really think it's worth it?\n> Leveraging shared_preload_libraries makes this comparatively simple.\n\nUsing the generic interface has the drawback that the user can make more\nmistakes. 
I think that's part of Bruce's issue with it (although I may\nmisinterpret.)\n\nI think if you add most of it as a new entry point in dfmgr.c (where you\ncan leverage internal_library_load) and returns a function pointer to\nthe user specified function, it's not all that much additional code.\n\n(I don't think you can use load_external_function as is, because it\nassumes fmgr V1 calling convention, which I'm not sure serves your case.\nBut then maybe it does. And if not, then those 10 lines should be very\nsimilar to the code you'd need to add.)\n\n> A simpler way to handle it might be simply to error out and refuse to\n> start if both ssl_passphrase_function is set and ssl_passphrase_command\n> is set.\n\nYeah, you can do that too I guess, but I'm not sure I see that as simpler.\n\n> Also, calling this 'ssl_passphrase_command' seems a little odd.\n\nWe could just rename ssl_passphrase_command to something more\ngeneric, and add the existing name to map_old_guc_names to preserve\ncompatibility with pg12. Maybe the new name could be simply\nssl_passphrase or perhaps ssl_passphrase_{reader,getter,pinentry}.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 Nov 2019 17:21:51 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 11/14/19 3:21 PM, Alvaro Herrera wrote:\n> On 2019-Nov-14, Andrew Dunstan wrote:\n>\n>> I guess this would work. There would have to be a deal of code to load\n>> the library and lookup the symbol. Do we really think it's worth it?\n>> Leveraging shared_preload_libraries makes this comparatively simple.\n> Using the generic interface has the drawback that the user can make more\n> mistakes. 
I think that's part of Bruce's issue with it (although I may\n> misinterpret.)\n>\n> I think if you add most of it as a new entry point in dfmgr.c (where you\n> can leverage internal_library_load) and returns a function pointer to\n> the user specified function, it's all that much additional code.\n>\n> (I don't think you can use load_external_function as is, because it\n> assumes fmgr V1 calling convention, which I'm not sure serves your case.\n> But then maybe it does. And if not, then those 10 lines should be very\n> similar to the code you'd need to add.)\n\n\n\nIn the absence of further comment I will try to code up something along\nthese lines.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 15 Nov 2019 08:59:45 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Fri, 15 Nov 2019 at 00:34, Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nwrote:\n\n>\n> On 11/14/19 11:07 AM, Bruce Momjian wrote:\n> > On Thu, Nov 14, 2019 at 11:42:05AM +0100, Magnus Hagander wrote:\n> >> On Wed, Nov 13, 2019 at 9:23 PM Tomas Vondra <\n> tomas.vondra@2ndquadrant.com>\n> >> I think it would be beneficial to explain why shared object is more\n> >> secure than an OS command. Perhaps it's common knowledge, but it's\n> not\n> >> quite obvious to me.\n> >>\n> >>\n> >> Yeah, that probably wouldn't hurt. It's also securely passing from more\n> than\n> >> one perspective -- both from the \"cannot be eavesdropped\" (like putting\n> the\n> >> password on the commandline for example) and the requirement for\n> escaping.\n> > I think a bigger issue is that if you want to give people the option of\n> > using a shell command or a shared object, and if you use two commands to\n> > control it, it isn't clear what happens if both are defined. 
By using\n> > some character prefix to control if a shared object is used, you can use\n> > a single variable and there is no confusion over having two variables\n> > and their conflicting behavior.\n> >\n>\n>\n> I'm  not sure how that would work in the present instance. The shared\n> preloaded module installs a function and defines the params it wants. If\n> we somehow unify the params with ssl_passphrase_command that could look\n> icky, and the module would have to parse the settings string. That's not\n> a problem for the sample module which only needs one param, but it will\n> be for other more complex implementations.\n>\n> I'm quite open to suggestions, but I want things to be tolerably clean.\n\n\nIf someone wants a shell command wrapper, they can load a contrib that\ndelegates the hook to a shell command. So we can just ship a contrib, which\nacts both as test coverage for the feature, and a shell-command-support\nwrapper for anyone who desires that.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n
", "msg_date": "Fri, 22 Nov 2019 13:19:24 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 11/15/19 8:59 AM, Andrew Dunstan wrote:\n> On 11/14/19 3:21 PM, Alvaro Herrera wrote:\n>> On 2019-Nov-14, Andrew Dunstan wrote:\n>>\n>>> I guess this would work. There would have to be a deal of code to load\n>>> the library and lookup the symbol. 
Do we really think it's worth it?\n>>> Leveraging shared_preload_libraries makes this comparatively simple.\n>> Using the generic interface has the drawback that the user can make more\n>> mistakes. I think that's part of Bruce's issue with it (although I may\n>> misinterpret.)\n>>\n>> I think if you add most of it as a new entry point in dfmgr.c (where you\n>> can leverage internal_library_load) and returns a function pointer to\n>> the user specified function, it's all that much additional code.\n>>\n>> (I don't think you can use load_external_function as is, because it\n>> assumes fmgr V1 calling convention, which I'm not sure serves your case.\n>> But then maybe it does. And if not, then those 10 lines should be very\n>> similar to the code you'd need to add.)\n\n\nI've just been looking at that. load_external_function() doesn't\nactually do anything V1-ish with the value, it just looks up the symbol\nusing dlsym and returns it cast to a PGFunction. Is there any reason I\ncan't just use that and cast it again to the callback function type?\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 6 Dec 2019 17:21:58 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> I've just been looking at that. load_external_function() doesn't\n> actually do anything V1-ish with the value, it just looks up the symbol\n> using dlsym and returns it cast to a PGFunction. Is there any reason I\n> can't just use that and cast it again to the callback function type?\n\nTBH, I think this entire discussion has gone seriously off into the\nweeds. 
The original design where we just let a shared_preload_library\nfunction get into a hook is far superior to any of the overcomplicated\nkluges that are being discussed now. Something like this, for instance:\n\n>>> ssl_passphrase_command='#superlib.so,my_rot13_passphrase'\n\nmakes me positively ill. It introduces problems that we don't need,\nlike how to parse out the sub-parts of the string, and the\nquoting/escaping issues that will come along with that; while from\nthe user's perspective it replaces a simple and intellectually-coherent\nvariable definition with an unintelligible mess.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Dec 2019 18:20:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 12/6/19 6:20 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> I've just been looking at that. load_external_function() doesn't\n>> actually do anything V1-ish with the value, it just looks up the symbol\n>> using dlsym and returns it cast to a PGFunction. Is there any reason I\n>> can't just use that and cast it again to the callback function type?\n> TBH, I think this entire discussion has gone seriously off into the\n> weeds. The original design where we just let a shared_preload_library\n> function get into a hook is far superior to any of the overcomplicated\n> kluges that are being discussed now. Something like this, for instance:\n>\n>>>> ssl_passphrase_command='#superlib.so,my_rot13_passphrase'\n> makes me positively ill. 
It introduces problems that we don't need,\n> like how to parse out the sub-parts of the string, and the\n> quoting/escaping issues that will come along with that; while from\n> the user's perspective it replaces a simple and intellectually-coherent\n> variable definition with an unintelligible mess.\n>\n> \t\t\t\n\n\n\nYeah, you have a point.\n\n\nBruce was worried about what would happen if we defined both\nssl_passphrase_command and ssl_passphrase_callback. The submitted patch\nlet's the callback have precedence, but it might be cleaner to error out\nwith such a config. OTOH, that wouldn't be so nice on a reload, so it\nmight be better just to document the behaviour.\n\n\nHe was also worried that multiple shared libraries might try to provide\nthe hook. I think that's fairly fanciful, TBH. It comes into the\ncategory of \"Don't do that.\"\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 6 Dec 2019 19:32:32 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Bruce was worried about what would happen if we defined both\n> ssl_passphrase_command and ssl_passphrase_callback. The submitted patch\n> let's the callback have precedence, but it might be cleaner to error out\n> with such a config. OTOH, that wouldn't be so nice on a reload, so it\n> might be better just to document the behaviour.\n\nI think it would be up to the extension that's using the hook to\ndecide what to do if ssl_passphrase_command is set. It would not\nbe our choice, and it would certainly not fall to us to document it.\n\n> He was also worried that multiple shared libraries might try to provide\n> the hook. I think that's fairly fanciful, TBH. 
It comes into the\n> category of \"Don't do that.\"\n\nAgain, it's somebody else's problem. We have plenty of hooks that\nare of dubious use for multiple extensions, so why should this one be\nheld to a higher standard?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Dec 2019 12:16:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 12/7/19 12:16 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Bruce was worried about what would happen if we defined both\n>> ssl_passphrase_command and ssl_passphrase_callback. The submitted patch\n>> let's the callback have precedence, but it might be cleaner to error out\n>> with such a config. OTOH, that wouldn't be so nice on a reload, so it\n>> might be better just to document the behaviour.\n> I think it would be up to the extension that's using the hook to\n> decide what to do if ssl_passphrase_command is set. It would not\n> be our choice, and it would certainly not fall to us to document it.\n>\n>> He was also worried that multiple shared libraries might try to provide\n>> the hook. I think that's fairly fanciful, TBH. It comes into the\n>> category of \"Don't do that.\"\n> Again, it's somebody else's problem. 
We have plenty of hooks that\n> are of dubious use for multiple extensions, so why should this one be\n> held to a higher standard?\n>\n> \t\t\t\n\n\nWell that pretty much brings us back to the patch as submitted :-)\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 7 Dec 2019 16:03:27 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Well that pretty much brings us back to the patch as submitted :-)\n\nYeah, pretty nearly. Taking a quick look over the v3 patch, my\nonly quibble is that it doesn't provide any convenient way for the\nexternal module to make decisions about how to interact with\nssl_passphrase_command --- in particular, if it would like to allow\nthat to take precedence, it can't because there's no way for it to\ninvoke the static function ssl_external_passwd_cb.\n\nBut rather than expose that globally, maybe the theory ought to be\n\"set up the state as we'd normally do, then let loadable modules\nchoose to override it\". So I'm tempted to propose a hook function\nwith the signature\n\nvoid openssl_tls_init_hook(SSL_CTX *context, bool isServerStart);\n\nand invoke that somewhere in be_tls_init --- maybe fairly late,\nso that it can override other settings if it wants, not only the\nSSL_CTX_set_default_passwd_cb setting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Dec 2019 17:32:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Sat, 7 Dec 2019 at 07:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > I've just been looking at that. 
load_external_function() doesn't\n> > actually do anything V1-ish with the value, it just looks up the symbol\n> > using dlsym and returns it cast to a PGFunction. Is there any reason I\n> > can't just use that and cast it again to the callback function type?\n>\n> TBH, I think this entire discussion has gone seriously off into the\n> weeds. The original design where we just let a shared_preload_library\n> function get into a hook is far superior to any of the overcomplicated\n> kluges that are being discussed now. Something like this, for instance:\n>\n> >>> ssl_passphrase_command='#superlib.so,my_rot13_passphrase'\n>\n> makes me positively ill. It introduces problems that we don't need,\n> like how to parse out the sub-parts of the string, and the\n> quoting/escaping issues that will come along with that; while from\n> the user's perspective it replaces a simple and intellectually-coherent\n> variable definition with an unintelligible mess.\n>\n\n+1000 from me on that.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Mon, 9 Dec 2019 10:22:12 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" },
{ "msg_contents": "On Sun, Dec 8, 2019 at 9:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > Well that pretty much brings us back to the patch as submitted :-)\n>\n> Yeah, pretty nearly. Taking a quick look over the v3 patch, my\n> only quibble is that it doesn't provide any convenient way for the\n> external module to make decisions about how to interact with\n> ssl_passphrase_command --- in particular, if it would like to allow\n> that to take precedence, it can't because there's no way for it to\n> invoke the static function ssl_external_passwd_cb.\n>\n> But rather than expose that globally, maybe the theory ought to be\n> \"set up the state as we'd normally do, then let loadable modules\n> choose to override it\". 
So I'm tempted to propose a hook function\n> with the signature\n>\n> void openssl_tls_init_hook(SSL_CTX *context, bool isServerStart);\n>\n> and invoke that somewhere in be_tls_init --- maybe fairly late,\n> so that it can override other settings if it wants, not only the\n> SSL_CTX_set_default_passwd_cb setting.\n>\n\n\nNot sure if the placement is what you want, but maybe something like this?\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 22 Jan 2020 17:32:01 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Thu, Nov 14, 2019 at 8:54 AM Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n> Has the idea of using environment variables (rather than command line\n> args) for external commands been brought up before? I couldn't find\n> anything in the mailing list archives.\n\nPassing data through environment variables isn't secure. Try 'ps -E'\non MacOS, or something like 'ps axe' on Linux.\n\nIf we want to pass data securely to child processes, the way to do it\nis via stdin. Data sent back and forth via file descriptors can't\neasily be snooped by other users on the system.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Jan 2020 12:30:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, Jan 22, 2020 at 8:02 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> Not sure if the placement is what you want, but maybe something like this?\n\nHi Andrew, FYI this failed here:\n\nt/001_testfunc.pl .. Bailout called. 
Further testing stopped: pg_ctl\nstart failed\nFAILED--Further testing stopped: pg_ctl start failed\nMakefile:23: recipe for target 'prove-check' failed\n\nUnfortunately my robot is poorly trained and does not dump any of the\ninteresting logs for this case, but it looks like it's failing that\nway every time.\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/651756191\n\n\n", "msg_date": "Tue, 18 Feb 2020 16:30:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Tue, Feb 18, 2020 at 2:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jan 22, 2020 at 8:02 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n> > Not sure if the placement is what you want, but maybe something like this?\n>\n> Hi Andrew, FYI this failed here:\n>\n> t/001_testfunc.pl .. Bailout called. Further testing stopped: pg_ctl\n> start failed\n> FAILED--Further testing stopped: pg_ctl start failed\n> Makefile:23: recipe for target 'prove-check' failed\n>\n> Unfortunately my robot is poorly trained and does not dump any of the\n> interesting logs for this case, but it looks like it's failing that\n> way every time.\n>\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/651756191\n\n\nThanks for letting me know, I will investigate.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Feb 2020 07:10:20 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On Wed, Feb 19, 2020 at 7:10 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n> On Tue, Feb 18, 2020 at 2:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Wed, Jan 22, 2020 at 8:02 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> > > Not 
sure if the placement is what you want, but maybe something like this?\n> >\n> > Hi Andrew, FYI this failed here:\n> >\n> > t/001_testfunc.pl .. Bailout called. Further testing stopped: pg_ctl\n> > start failed\n> > FAILED--Further testing stopped: pg_ctl start failed\n> > Makefile:23: recipe for target 'prove-check' failed\n> >\n> > Unfortunately my robot is poorly trained and does not dump any of the\n> > interesting logs for this case, but it looks like it's failing that\n> > way every time.\n> >\n> > https://travis-ci.org/postgresql-cfbot/postgresql/builds/651756191\n>\n>\n> Thanks for letting me know, I will investigate.\n>\n\n\nThis should fix the issue, it happened when I switched to using a\npre-generated cert/key.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 19 Feb 2020 09:09:20 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On 2/18/20 11:39 PM, Andrew Dunstan wrote:\n> This should fix the issue, it happened when I switched to using a\n> pre-generated cert/key.\n\n# Review\n\nThe patch still applies and passes the test suite, both with openssl \nenabled and with it disabled.\n\nAs for the feature I agree that it is nice to expose this callback to \nextension writers and I agree with the approach taken. The other \nproposals up-thread seem over engineered to me. Maybe if it was a \ngeneral feature used in many places those proposals would be worth it, \nbut I am still skeptical even then. 
This approach is so much simpler.\n\nThe only real risk I see is that if people install multiple libraries \nfor this they will overwrite the hook for each other but we have other \ncases like that already so I think that is fine.\n\nThe patch moves secure_initialize() to after \nprocess_shared_preload_libraries() which in theory could break some \nextension but it does not seem like a likely thing for extensions to \nrely on. Or is it?\n\nAn idea would be to have the code in ssl_passphrase_func.c to warn if \nthe ssl_passphrase_command GUC is set to make it more useful as example \ncode for people implementing this hook.\n\n# Nitpicking\n\nThe certificate expires in 2030 while all other certificates used in \ntests expires in 2046. Should we be consistent?\n\nThere is text in server.crt and server.key, while other certificates and \nkeys used in the tests do not have this. Again, should we be consistent?\n\nEmpty first line in \nsrc/test/modules/ssl_passphrase_callback/t/001_testfunc.pl which should \nprobably just be removed or replaced with a shebang.\n\nThere is an extra space between the parentheses in the line below. Does \nthat follow our code style for Perl?\n\n+unless ( ($ENV{with_openssl} || 'no') eq 'yes')\n\nMissing space after comma in:\n\n+ok(-e \"$ddir/postmaster.pid\",\"postgres started\");\n\nMissing space after comma in:\n\n+ok(! -e \"$ddir/postmaster.pid\",\"postgres not started with bad passphrase\");\n\nAndreas\n\n\n", "msg_date": "Mon, 16 Mar 2020 03:14:57 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "Hello Andrew,\r\n\r\nFrom: Andreas Karlsson <andreas@proxel.se>\r\n> # Nitpicking\r\n> \r\n> The certificate expires in 2030 while all other certificates used in\r\n> tests expires in 2046. Should we be consistent?\r\n> \r\n> There is text in server.crt and server.key, while other certificates and\r\n> keys used in the tests do not have this. 
Again, should we be consistent?\r\n> \r\n> Empty first line in\r\n> src/test/modules/ssl_passphrase_callback/t/001_testfunc.pl which should\r\n> probably just be removed or replaced with a shebang.\r\n> \r\n> There is an extra space between the parentheses in the line below. Does\r\n> that follow our code style for Perl?\r\n> \r\n> +unless ( ($ENV{with_openssl} || 'no') eq 'yes')\r\n> \r\n> Missing space after comma in:\r\n> \r\n> +ok(-e \"$ddir/postmaster.pid\",\"postgres started\");\r\n> \r\n> Missing space after comma in:\r\n> \r\n> +ok(! -e \"$ddir/postmaster.pid\",\"postgres not started with bad passphrase\");\r\n> \r\n> Andreas\r\n> \r\n\r\nTrailing space:\r\n\r\n220 + X509v3 Subject Key Identifier:\r\n222 + X509v3 Authority Key Identifier:\r\n\r\nMissing \"d\"(password?):\r\n\r\n121 +/* init hook for SSL, the default sets the passwor callback if appropriate */\r\n\r\nRegards,\r\n\r\n--\r\nTakanori Asaba\r\n\r\n\r\n", "msg_date": "Thu, 19 Mar 2020 08:10:58 +0000", "msg_from": "\"asaba.takanori@fujitsu.com\" <asaba.takanori@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: ssl passphrase callback" }, { "msg_contents": "\nOn 3/15/20 10:14 PM, Andreas Karlsson wrote:\n> On 2/18/20 11:39 PM, Andrew Dunstan wrote:\n>> This should fix the issue, it happened when I switched to using a\n>> pre-generated cert/key.\n>\n> # Review\n>\n> The patch still applies and passes the test suite, both with openssl\n> enabled and with it disabled.\n>\n> As for the feature I agree that it is nice to expose this callback to\n> extension writers and I agree with the approach taken. The other\n> proposals up-thread seem over engineered to me. Maybe if it was a\n> general feature used in many places those proposals would be worth it,\n> but I am still skeptical even then. 
This approach is so much simpler.\n>\n> The only real risk I see is that if people install multiple libraries\n> for this they will overwrite the hook for each other but we have other\n> cases like that already so I think that is fine.\n\n\nRight, me too.\n\n\n>\n> The patch moves secure_initialize() to after\n> process_shared_preload_libraries() which in theory could break some\n> extension but it does not seem like a likely thing for extensions to\n> rely on. Or is it?\n\n\nI don't think so.\n\n\n>\n> An idea would be to have the code in ssl_passphrase_func.c to warn if\n> the ssl_passphrase_command GUC is set to make it more useful as\n> example code for people implementing this hook.\n\n\nI'll look at that. Should be possible.\n\n\n>\n> # Nitpicking\n>\n> The certificate expires in 2030 while all other certificates used in\n> tests expires in 2046. Should we be consistent?\n\n\nSure. will fix.\n\n\n>\n> There is text in server.crt and server.key, while other certificates\n> and keys used in the tests do not have this. Again, should we be\n> consistent?\n\n\nNot in server.key, but I will suppress it for the crt file.\n\n\n\n>\n> Empty first line in\n> src/test/modules/ssl_passphrase_callback/t/001_testfunc.pl which\n> should probably just be removed or replaced with a shebang.\n\n\nOK\n\n\n>\n> There is an extra space between the parentheses in the line below.\n> Does that follow our code style for Perl?\n>\n> +unless ( ($ENV{with_openssl} || 'no') eq 'yes')\n>\n> Missing space after comma in:\n>\n> +ok(-e \"$ddir/postmaster.pid\",\"postgres started\");\n>\n> Missing space after comma in:\n>\n> +ok(! 
-e \"$ddir/postmaster.pid\",\"postgres not started with bad\n> passphrase\");\n\n\nI'll make sure to run it through our perl indenter.\n\n\nThanks for the review.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 21 Mar 2020 09:15:53 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 3/19/20 4:10 AM, asaba.takanori@fujitsu.com wrote:\n\n\n\n> Trailing space:\n>\n> 220 + X509v3 Subject Key Identifier:\n> 222 + X509v3 Authority Key Identifier:\n\n\nWe're going to remove all the text, so this becomes moot.\n\n\n>\n> Missing \"d\"(password?):\n>\n> 121 +/* init hook for SSL, the default sets the passwor callback if appropriate */\n>\n\nWill fix, thanks.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 21 Mar 2020 09:18:26 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On 3/21/20 9:18 AM, Andrew Dunstan wrote:\n> On 3/19/20 4:10 AM, asaba.takanori@fujitsu.com wrote:\n>\n>\n>\n>> Trailing space:\n>>\n>> 220 + X509v3 Subject Key Identifier:\n>> 222 + X509v3 Authority Key Identifier:\n>\n> We're going to remove all the text, so this becomes moot.\n>\n>\n>> Missing \"d\"(password?):\n>>\n>> 121 +/* init hook for SSL, the default sets the passwor callback if appropriate */\n>>\n> Will fix, thanks.\n>\n>\n\n\nLatest patch attached, I think all comments have been addressed. 
I\npropose to push this later this coming week if there are no more comments.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 21 Mar 2020 20:08:19 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "On 3/22/20 1:08 AM, Andrew Dunstan wrote:\n> Latest patch attached, I think all comments have been addressed. I\n> propose to push this later this coming week if there are no more comments.\n\nI do not have any objections.\n\nAndreas\n\n\n\n", "msg_date": "Mon, 23 Mar 2020 01:58:14 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 3/22/20 1:08 AM, Andrew Dunstan wrote:\n>> Latest patch attached, I think all comments have been addressed. I\n>> propose to push this later this coming week if there are no more comments.\n\n> I do not have any objections.\n\nThis CF entry is still open, should it not be closed as committed?\n\nhttps://commitfest.postgresql.org/27/2338/\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Mar 2020 13:45:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ssl passphrase callback" }, { "msg_contents": "\nOn 3/28/20 1:45 PM, Tom Lane wrote:\n> Andreas Karlsson <andreas@proxel.se> writes:\n>> On 3/22/20 1:08 AM, Andrew Dunstan wrote:\n>>> Latest patch attached, I think all comments have been addressed. 
I\n>>> propose to push this later this coming week if there are no more comments.\n>> I do not have any objections.\n> This CF entry is still open, should it not be closed as committed?\n>\n> https://commitfest.postgresql.org/27/2338/\n>\n> \t\t\t\n\n\nDone, thanks for the reminder.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 28 Mar 2020 15:35:07 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: ssl passphrase callback" } ]
[ { "msg_contents": "Hi everyone,\n\nI was taking a look at pg_stat_statements module and noticed that it does\nnot collect any percentile metrics. I believe that It would be really handy\nto have those available and I'd love to contribute with this feature.\n\nThe basic idea is to accumulate the the query execution times using an\napproximation structure like q-digest or t-digest and add those results to\nthe pg_stat_statements table as fixed columns. Something like this\n\np90_time:\np95_time:\np99_time:\np70_time:\n...\n\nAnother solution is to persist de digest structure in a binary column and\nuse a function to extract the desired quantile ilke this SELECT\napprox_quantile(digest_times, 0.99) FROM pg_stat_statements\n\nWhat do you guys think?\nCheers,\n", "msg_date": "Thu, 31 Oct 2019 12:51:17 -0300", "msg_from": "Igor Calabria <igor.calabria@gmail.com>", "msg_from_op": true, "msg_subject": "Adding percentile metrics to pg_stat_statements module" }, { "msg_contents": "čt 31. 10. 2019 v 16:51 odesílatel Igor Calabria <igor.calabria@gmail.com>\nnapsal:\n\n> Hi everyone,\n>\n> I was taking a look at pg_stat_statements module and noticed that it does\n> not collect any percentile metrics. 
I believe that It would be really handy\n> to have those available and I'd love to contribute with this feature.\n>\n> The basic idea is to accumulate the the query execution times using an\n> approximation structure like q-digest or t-digest and add those results to\n> the pg_stat_statements table as fixed columns. Something like this\n>\n> p90_time:\n> p95_time:\n> p99_time:\n> p70_time:\n> ...\n>\n> Another solution is to persist de digest structure in a binary column and\n> use a function to extract the desired quantile ilke this SELECT\n> approx_quantile(digest_times, 0.99) FROM pg_stat_statements\n>\n> What do you guys think?\n>\n\n+ the idea\n\nBut I am not sure about performance and memory overhead\n\nPavel\n\n> Cheers,\n>\n>\n", "msg_date": "Thu, 31 Oct 2019 17:36:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding percentile metrics to pg_stat_statements module" },
{ "msg_contents": "On Thu, Oct 31, 2019 at 12:51:17PM -0300, Igor Calabria wrote:\n>Hi everyone,\n>\n>I was taking a look at pg_stat_statements module and noticed that it does\n>not collect any percentile metrics. I believe that It would be really handy\n>to have those available and I'd love to contribute with this feature.\n>\n>The basic idea is to accumulate the the query execution times using an\n>approximation structure like q-digest or t-digest and add those results to\n>the pg_stat_statements table as fixed columns. Something like this\n>\n>p90_time:\n>p95_time:\n>p99_time:\n>p70_time:\n>...\n>\n>Another solution is to persist de digest structure in a binary column and\n>use a function to extract the desired quantile ilke this SELECT\n>approx_quantile(digest_times, 0.99) FROM pg_stat_statements\n>\n\nIMO having some sort of CDF approximation (being a q-digest or t-digest)\nwould be useful, although it'd probably need to be optional (mostly\nbecuase of memory consumption).\n\nI don't see why we would not store the digests themselves. Storing just\nsome selected percentiles would be pretty problematic due to losing a\nlot of information on restart. 
Also, pg_stat_statements is not a table\nbut a view on in-memory hash table.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 31 Oct 2019 20:32:47 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Adding percentile metrics to pg_stat_statements module" }, { "msg_contents": "Yeah, I agree that there's no reason to store the digests themselves and I\nreally liked the idea of it being optional.\nIf it turns out that memory consumption on real workloads is small enough,\nit could eventually be turned on by default.\n\nI'll start working on patch\n\nEm qui, 31 de out de 2019 às 16:32, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> escreveu:\n\n> On Thu, Oct 31, 2019 at 12:51:17PM -0300, Igor Calabria wrote:\n> >Hi everyone,\n> >\n> >I was taking a look at pg_stat_statements module and noticed that it does\n> >not collect any percentile metrics. I believe that It would be really\n> handy\n> >to have those available and I'd love to contribute with this feature.\n> >\n> >The basic idea is to accumulate the the query execution times using an\n> >approximation structure like q-digest or t-digest and add those results to\n> >the pg_stat_statements table as fixed columns. Something like this\n> >\n> >p90_time:\n> >p95_time:\n> >p99_time:\n> >p70_time:\n> >...\n> >\n> >Another solution is to persist de digest structure in a binary column and\n> >use a function to extract the desired quantile ilke this SELECT\n> >approx_quantile(digest_times, 0.99) FROM pg_stat_statements\n> >\n>\n> IMO having some sort of CDF approximation (being a q-digest or t-digest)\n> would be useful, although it'd probably need to be optional (mostly\n> becuase of memory consumption).\n>\n> I don't see why we would not store the digests themselves. 
Storing just\n> some selected percentiles would be pretty problematic due to losing a\n> lot of information on restart. Also, pg_stat_statements is not a table\n> but a view on in-memory hash table.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n", "msg_date": "Fri, 1 Nov 2019 11:11:13 -0300", "msg_from": "Igor Calabria <igor.calabria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding percentile metrics to pg_stat_statements module" },
{ "msg_contents": "On Fri, Nov 01, 2019 at 11:11:13AM -0300, Igor Calabria wrote:\n>Yeah, I agree that there's no reason to store the digests themselves and I\n>really liked the idea of it being optional.\n\nThat's not what I wrote. My point was that we *should* store the digests\nthemselves, otherwise we just introduce additional errors into the\nestimates, because it discards the weights/frequencies.\n\n>If it turns out that memory consumption on real workloads is small enough,\n>it could eventually be turned on by default.\n>\n\nMaybe, but it's not just about memory consumption. CPU matters too.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 1 Nov 2019 15:17:51 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Adding percentile metrics to pg_stat_statements module" },
{ "msg_contents": ">\n> That's not what I wrote. My point was that we *should* store the digests\n> themselves, otherwise we just introduce additional errors into the\n> estimates, because it discards the weights/frequencies.\n\n\nSorry. I meant to write \"no reason to *not* store the digests\"\n\n\nEm sex, 1 de nov de 2019 às 11:17, Tomas Vondra <\ntomas.vondra@2ndquadrant.com> escreveu:\n\n> On Fri, Nov 01, 2019 at 11:11:13AM -0300, Igor Calabria wrote:\n> >Yeah, I agree that there's no reason to store the digests themselves and I\n> >really liked the idea of it being optional.\n>\n> That's not what I wrote. 
My point was that we *should* store the digests\n> themselves, otherwise we just introduce additional errors into the\n> estimates, because it discards the weights/frequencies.\n>\n> >If it turns out that memory consumption on real workloads is small enough,\n> >it could eventually be turned on by default.\n> >\n>\n> Maybe, but it's not just about memory consumption. CPU matters too.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n", "msg_date": "Fri, 1 Nov 2019 13:05:02 -0300", "msg_from": "Igor Calabria <igor.calabria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding percentile metrics to pg_stat_statements module" },
{ "msg_contents": "On 10/31/19 8:32 PM, Tomas Vondra wrote:\n> IMO having some sort of CDF approximation (being a q-digest or t-digest)\n> would be useful, although it'd probably need to be optional (mostly\n> becuase of memory consumption).\n\n+1, I like this idea. 
If we are afraid of CPU cost we could imagine some kind of\nsampling or add the possibility to collect only for a specific queryid.\n\nI dreamed of this kind of feature for PoWA. Thus, it could make possible to\ncompare CDF between two days for example, before and after introducing a change.\n\nRegards,\n\n--\nAdrien NAYRAT", "msg_date": "Mon, 5 Jun 2023 08:05:10 +0000", "msg_from": "benoit <benoit@hopsandfork.com>", "msg_from_op": false, "msg_subject": "RE: Adding percentile metrics to pg_stat_statements module" } ]
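The bounded-memory percentile summaries discussed in the thread above (q-digest / t-digest approximations of the latency CDF) can be illustrated with a toy centroid digest. This is a deliberately simplified sketch, not the real t-digest algorithm (it has no scale function or bias control near the tails), and the `SimpleDigest` class is invented purely for illustration:

```python
import bisect


class SimpleDigest:
    """Toy bounded-memory quantile summary: keeps at most max_centroids
    (mean, count) pairs and, when full, merges the adjacent pair whose
    means are closest. Illustrates the general idea behind q-digest and
    t-digest; it is NOT either of those algorithms."""

    def __init__(self, max_centroids=100):
        self.max_centroids = max_centroids
        self.centroids = []  # sorted list of [mean, count]

    def add(self, x):
        bisect.insort(self.centroids, [float(x), 1])
        if len(self.centroids) > self.max_centroids:
            self._compress()

    def _compress(self):
        # merge the adjacent pair of centroids with the closest means
        i = min(range(len(self.centroids) - 1),
                key=lambda j: self.centroids[j + 1][0] - self.centroids[j][0])
        (m1, c1), (m2, c2) = self.centroids[i], self.centroids[i + 1]
        self.centroids[i:i + 2] = [[(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2]]

    def quantile(self, q):
        # walk the sorted centroids until the cumulative count crosses q
        if not self.centroids:
            raise ValueError("empty digest")
        target = q * sum(c for _, c in self.centroids)
        seen = 0.0
        for mean, count in self.centroids:
            seen += count
            if seen >= target:
                return mean
        return self.centroids[-1][0]
```

Memory stays bounded by `max_centroids` no matter how many timings are added, which is the property that makes this family of sketches attractive for pg_stat_statements; the trade-offs raised in the thread (extra CPU on every query, and additional estimation error if only derived percentiles rather than the digests themselves are kept and merged) apply to the real algorithms as well.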
[ { "msg_contents": "Hi\n\nlong time we are think how to allow add some custom commands in psql. I had\na following idea\n\n1. psql can has special buffer for custom queries. This buffer can be\nfilled by special command \\gdefq. This command will have two parameters -\nname and number of arguments.\n\nsome like\n\nselect * from pg_class where relname = :'_first' \\gdefcq m1 1\nselect * from pg_class where relnamespace = :_first::regnamespace and\nrename = :'_second' \\gdefcq m1 2\n\nthe custom queries can be executed via doubled backslash like\n\n\\\\m1 pg_proc\n\\\\m1 pg_catalog pg_proc\n\nthe runtime will count number of parameters and chose variant with selected\nname and same number of arguments. Next, it save parameters to variables\nlike _first, _second. Last step is query execution.\n\nWhat do you think about this?\n\nRegards\n\nPavel", "msg_date": "Thu, 31 Oct 2019 17:48:22 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "idea - proposal - defining own psql commands" } ]
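The dispatch rule Pavel proposes (one stored query per name and argument count, with invocation choosing the variant whose arity matches and binding the arguments to variables `_first`, `_second`, ...) can be modeled outside psql. The sketch below is hypothetical Python, not psql source; the `CustomQueries` class and its naive textual substitution are invented purely to illustrate the lookup-by-arity idea, and real psql variable interpolation handles quoting and escaping far more carefully:

```python
ORDINALS = ["_first", "_second", "_third", "_fourth"]


class CustomQueries:
    """Toy model of the proposed \\gdefcq machinery: query templates are
    stored per (name, arity); invocation picks the matching-arity variant."""

    def __init__(self):
        self.templates = {}  # (name, arity) -> SQL text

    def define(self, name, arity, sql):
        self.templates[(name, arity)] = sql

    def invoke(self, name, *args):
        sql = self.templates.get((name, len(args)))
        if sql is None:
            raise LookupError(
                f"no custom query {name!r} taking {len(args)} arguments")
        # Bind arguments to _first, _second, ... as the proposal describes,
        # substituting both the :'var' and :var spellings (naively).
        for var, value in zip(ORDINALS, args):
            sql = sql.replace(f":'{var}'", f"'{value}'")
            sql = sql.replace(f":{var}", str(value))
        return sql


cq = CustomQueries()
cq.define("m1", 1, "select * from pg_class where relname = :'_first'")
cq.define("m1", 2,
          "select * from pg_class where relnamespace = :_first::regnamespace"
          " and relname = :'_second'")
print(cq.invoke("m1", "pg_proc"))
# -> select * from pg_class where relname = 'pg_proc'
```

Invoking `cq.invoke("m1", "pg_catalog", "pg_proc")` selects the two-argument variant instead, matching the `\\m1 pg_catalog pg_proc` example in the proposal.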
[ { "msg_contents": "This small patch authored by my colleague Craig Ringer enhances\nTestlib's command_fails_like by allowing the passing of extra keyword\ntype arguments. The keyword initially recognized is 'extra_ipcrun_opts'.\nThe value for this keyword needs to be an array, and is passed through\nto the call to IPC::Run.\n\nSome patches I will be submitting shortly rely on this enhancement.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 31 Oct 2019 13:02:58 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "TestLib::command_fails_like enhancement" }, { "msg_contents": "On Thu, Oct 31, 2019 at 01:02:58PM -0400, Andrew Dunstan wrote:\n> This small patch authored by my colleague Craig Ringer enhances\n> Testlib's command_fails_like by allowing the passing of extra keyword\n> type arguments. The keyword initially recognized is 'extra_ipcrun_opts'.\n> The value for this keyword needs to be an array, and is passed through\n> to the call to IPC::Run.\n\nWhy not.\n\n> Some patches I will be submitting shortly rely on this enhancement.\n\nAnything submitted yet or any examples? I was just wondering in which\ncase it mattered.\n--\nMichael", "msg_date": "Sat, 2 Nov 2019 16:44:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\n\nOn 10/31/19 10:02 AM, Andrew Dunstan wrote:\n> \n> This small patch authored by my colleague Craig Ringer enhances\n> Testlib's command_fails_like by allowing the passing of extra keyword\n> type arguments. 
The keyword initially recognized is 'extra_ipcrun_opts'.\n> The value for this keyword needs to be an array, and is passed through\n> to the call to IPC::Run.\n\nHi Andrew, a few code review comments:\n\nThe POD documentation for this function should be updated to include a \ndescription of the %kwargs argument list.\n\nSince command_fails_like is patterned on command_like, perhaps you \nshould make this change to both of them, even if you only originally \nintend to use the new functionality in command_fails_like. I'm not sure \nof this, though, since I haven't seen any example usage yet.\n\nI'm vaguely bothered by having %kwargs gobble up the remaining function \narguments, not because it isn't a perl-ish thing to do, but because none \nof the other functions in this module do anything similar. The function \ncheck_mode_recursive takes an optional $ignore_list array reference as \nits last argument. Perhaps command_fails_like could follow that pattern \nby taking an optional $kwargs hash reference.\n\n-- \nMark Dilger\n\n\n", "msg_date": "Thu, 7 Nov 2019 14:28:24 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "On Fri, 8 Nov 2019 at 06:28, Mark Dilger <hornschnorter@gmail.com> wrote:\n\n>\n>\n> On 10/31/19 10:02 AM, Andrew Dunstan wrote:\n> >\n> > This small patch authored by my colleague Craig Ringer enhances\n> > Testlib's command_fails_like by allowing the passing of extra keyword\n> > type arguments. 
The keyword initially recognized is 'extra_ipcrun_opts'.\n> > The value for this keyword needs to be an array, and is passed through\n> > to the call to IPC::Run.\n>\n> Hi Andrew, a few code review comments:\n>\n> The POD documentation for this function should be updated to include a\n> description of the %kwargs argument list.\n>\n> Since command_fails_like is patterned on command_like, perhaps you\n> should make this change to both of them, even if you only originally\n> intend to use the new functionality in command_fails_like. I'm not sure\n> of this, though, since I haven't seen any example usage yet.\n>\n> I'm vaguely bothered by having %kwargs gobble up the remaining function\n> arguments, not because it isn't a perl-ish thing to do, but because none\n> of the other functions in this module do anything similar. The function\n> check_mode_recursive takes an optional $ignore_list array reference as\n> its last argument. Perhaps command_fails_like could follow that pattern\n> by taking an optional $kwargs hash reference.\n>\n\nYeah, that's probably sensible.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 8 Nov 2019 14:16:28 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\nOn 11/8/19 1:16 AM, Craig Ringer wrote:\n> On Fri, 8 Nov 2019 at 06:28, Mark Dilger <hornschnorter@gmail.com\n> <mailto:hornschnorter@gmail.com>> wrote:\n>\n>\n>\n> On 10/31/19 10:02 AM, Andrew Dunstan wrote:\n> >\n> > This small patch authored by my colleague Craig Ringer enhances\n> > Testlib's command_fails_like by allowing the passing of extra\n> keyword\n> > type arguments. The keyword initially recognized is\n> 'extra_ipcrun_opts'.\n> > The value for this keyword needs to be an array, and is passed\n> through\n> > to the call to IPC::Run.\n>\n> Hi Andrew, a few code review comments:\n>\n> The POD documentation for this function should be updated to\n> include a\n> description of the %kwargs argument list.\n>\n> Since command_fails_like is patterned on command_like, perhaps you\n> should make this change to both of them, even if you only originally\n> intend to use the new functionality in command_fails_like.  
I'm\n> not sure\n> of this, though, since I haven't seen any example usage yet.\n>\n> I'm vaguely bothered by having %kwargs gobble up the remaining\n> function\n> arguments, not because it isn't a perl-ish thing to do, but\n> because none\n> of the other functions in this module do anything similar.  The\n> function\n> check_mode_recursive takes an optional $ignore_list array\n> reference as\n> its last argument.  Perhaps command_fails_like could follow that\n> pattern\n> by taking an optional $kwargs hash reference.\n>\n>\n> Yeah, that's probably sensible. \n>\n>\n>\n\n\nOK, I will rework it taking these comments into account. Thanks for the\ncomments Mark.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 8 Nov 2019 09:33:20 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\n\nOn 11/8/19 6:33 AM, Andrew Dunstan wrote:\n> \n> On 11/8/19 1:16 AM, Craig Ringer wrote:\n>> On Fri, 8 Nov 2019 at 06:28, Mark Dilger <hornschnorter@gmail.com\n>> <mailto:hornschnorter@gmail.com>> wrote:\n>>\n>>\n>>\n>> On 10/31/19 10:02 AM, Andrew Dunstan wrote:\n>> >\n>> > This small patch authored by my colleague Craig Ringer enhances\n>> > Testlib's command_fails_like by allowing the passing of extra\n>> keyword\n>> > type arguments. 
The keyword initially recognized is\n>> 'extra_ipcrun_opts'.\n>> > The value for this keyword needs to be an array, and is passed\n>> through\n>> > to the call to IPC::Run.\n>>\n>> Hi Andrew, a few code review comments:\n>>\n>> The POD documentation for this function should be updated to\n>> include a\n>> description of the %kwargs argument list.\n>>\n>> Since command_fails_like is patterned on command_like, perhaps you\n>> should make this change to both of them, even if you only originally\n>> intend to use the new functionality in command_fails_like.  I'm\n>> not sure\n>> of this, though, since I haven't seen any example usage yet.\n>>\n>> I'm vaguely bothered by having %kwargs gobble up the remaining\n>> function\n>> arguments, not because it isn't a perl-ish thing to do, but\n>> because none\n>> of the other functions in this module do anything similar.  The\n>> function\n>> check_mode_recursive takes an optional $ignore_list array\n>> reference as\n>> its last argument.  Perhaps command_fails_like could follow that\n>> pattern\n>> by taking an optional $kwargs hash reference.\n>>\n>>\n>> Yeah, that's probably sensible.\n>>\n>>\n>>\n> \n> \n> OK, I will rework it taking these comments into account. Thanks for the\n> comments Mark.\n\nI'd be happy to see the regression tests you are writing sooner than \nthat, if you don't mind posting them. 
It's hard to do a proper review \nfor you without a better sense of where you are going with these changes.\n\n-- \nMark Dilger\n\n\n", "msg_date": "Fri, 8 Nov 2019 08:25:50 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\nOn 11/8/19 11:25 AM, Mark Dilger wrote:\n>\n>\n> On 11/8/19 6:33 AM, Andrew Dunstan wrote:\n>>\n>> On 11/8/19 1:16 AM, Craig Ringer wrote:\n>>> On Fri, 8 Nov 2019 at 06:28, Mark Dilger <hornschnorter@gmail.com\n>>> <mailto:hornschnorter@gmail.com>> wrote:\n>>>\n>>>\n>>>\n>>>      On 10/31/19 10:02 AM, Andrew Dunstan wrote:\n>>>      >\n>>>      > This small patch authored by my colleague Craig Ringer enhances\n>>>      > Testlib's command_fails_like by allowing the passing of extra\n>>>      keyword\n>>>      > type arguments. The keyword initially recognized is\n>>>      'extra_ipcrun_opts'.\n>>>      > The value for this keyword needs to be an array, and is passed\n>>>      through\n>>>      > to the call to IPC::Run.\n>>>\n>>>      Hi Andrew, a few code review comments:\n>>>\n>>>      The POD documentation for this function should be updated to\n>>>      include a\n>>>      description of the %kwargs argument list.\n>>>\n>>>      Since command_fails_like is patterned on command_like, perhaps you\n>>>      should make this change to both of them, even if you only\n>>> originally\n>>>      intend to use the new functionality in command_fails_like.  I'm\n>>>      not sure\n>>>      of this, though, since I haven't seen any example usage yet.\n>>>\n>>>      I'm vaguely bothered by having %kwargs gobble up the remaining\n>>>      function\n>>>      arguments, not because it isn't a perl-ish thing to do, but\n>>>      because none\n>>>      of the other functions in this module do anything similar.  
The\n>>>      function\n>>>      check_mode_recursive takes an optional $ignore_list array\n>>>      reference as\n>>>      its last argument.  Perhaps command_fails_like could follow that\n>>>      pattern\n>>>      by taking an optional $kwargs hash reference.\n>>>\n>>>\n>>> Yeah, that's probably sensible.\n>>>\n>>>\n>>>\n>>\n>>\n>> OK, I will rework it taking these comments into account. Thanks for the\n>> comments Mark.\n>\n> I'd be happy to see the regression tests you are writing sooner than\n> that, if you don't mind posting them.  It's hard to do a proper review\n> for you without a better sense of where you are going with these changes.\n\n\nThis will need to be rewritten in light of the above, but see\n<https://www.postgresql.org/message-id/87b1e36b-e36a-add5-1a9b-9fa34914a256@2ndQuadrant.com>\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 8 Nov 2019 12:22:05 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\n\nOn 11/8/19 9:22 AM, Andrew Dunstan wrote:\n...\n> This will need to be rewritten in light of the above, but see\n> <https://www.postgresql.org/message-id/87b1e36b-e36a-add5-1a9b-9fa34914a256@2ndQuadrant.com>\n\nThanks for the reference. 
Having read your motivating example, this new \nreview reverses some of what I suggested in the prior review.\n\n\nIn the existing TestLib.pm, there are eight occurrences of nearly \nidentical usages of IPC::Run scattered through similar functions:\n\nrun_command:\n my $result = IPC::Run::run $cmd, '>', \\$stdout, '2>', \\$stderr;\n\ncheck_pg_config:\n my $result = IPC::Run::run [ 'pg_config', '--includedir' ], '>',\n \\$stdout, '2>', \\$stderr\n or die \"could not execute pg_config\";\n\nprogram_help_ok:\n my $result = IPC::Run::run [ $cmd, '--help' ], '>', \\$stdout, '2>',\n \\$stderr;\n\nprogram_version_ok:\n my $result = IPC::Run::run [ $cmd, '--version' ], '>', \\$stdout, '2>',\n \\$stderr;\n\nprogram_options_handling_ok:\n my $result = IPC::Run::run [ $cmd, '--not-a-valid-option' ], '>',\n \\$stdout,\n '2>', \\$stderr;\n\ncommand_like:\n my $result = IPC::Run::run $cmd, '>', \\$stdout, '2>', \\$stderr;\n\ncommand_like_safe:\n my $result = IPC::Run::run $cmd, '>', $stdoutfile, '2>', $stderrfile;\n\ncommand_fails_like:\n my $result = IPC::Run::run $cmd, '>', \\$stdout, '2>', \\$stderr,\n @extra_ipcrun_opts;\n\nOne rational motive for designing TestLib with so much code duplication \nis to make the tests that use the library easier to read:\n\n command_like_safe(foo);\n command_like(bar);\n command_fails_like(baz);\n\nwhich is easier to understand than:\n\n command_like(foo, failure_mode => safe);\n command_like(bar);\n command_like(baz, failure => expected);\n\nand so forth.\n\nIn line with that thinking, perhaps you should just create:\n\n command_fails_without_tty_like(foo)\n\nand be done, or perhaps:\n\n command_fails_like(foo, tty => 'closed')\n\nand still preserve some of the test readability. 
Will anyone like the \nreadability of your tests if you have:\n\n command_fails_like(foo, extra_ipcrun_opts => ['<pty<', \\$eof_in]) ?\n\nAdmittedly, \"foo\", \"bar\", and \"baz\" above are shorthand notation for \nthings in practice that are already somewhat hard to read, as in:\n\n command_fails_like(\n [ 'pg_dump', 'qqq', 'abc' ],\n qr/\\Qpg_dump: error: too many command-line arguments (first is \n\"abc\")\\E/,\n 'pg_dump: too many command-line arguments');\n\nbut adding more to that cruft just makes it worse. Right?\n\n-- \nMark Dilger\n\n\n", "msg_date": "Fri, 8 Nov 2019 13:40:06 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\nOn 11/8/19 4:40 PM, Mark Dilger wrote:\n>\n>\n> On 11/8/19 9:22 AM, Andrew Dunstan wrote:\n> ...\n>> This will need to be rewritten in light of the above, but see\n>> <https://www.postgresql.org/message-id/87b1e36b-e36a-add5-1a9b-9fa34914a256@2ndQuadrant.com>\n>>\n>\n> Thanks for the reference.  Having read your motivating example, this\n> new review reverses some of what I suggested in the prior review.\n>\n>\n> In the existing TestLib.pm, there are eight occurrences of nearly\n> identical usages of IPC::Run scattered through similar functions:\n>\n>\n[snip]\n\n\n>\n> One rational motive for designing TestLib with so much code\n> duplication is to make the tests that use the library easier to read:\n>\n>   command_like_safe(foo);\n>   command_like(bar);\n>   command_fails_like(baz);\n>\n> which is easier to understand than:\n>\n>   command_like(foo, failure_mode => safe);\n>   command_like(bar);\n>   command_like(baz, failure => expected);\n>\n> and so forth.\n>\n> In line with that thinking, perhaps you should just create:\n>\n>   command_fails_without_tty_like(foo)\n>\n> and be done, or perhaps:\n>\n>   command_fails_like(foo, tty => 'closed')\n>\n> and still preserve some of the test readability.  
Will anyone like the\n> readability of your tests if you have:\n>\n>   command_fails_like(foo, extra_ipcrun_opts => ['<pty<', \\$eof_in]) ?\n>\n> Admittedly, \"foo\", \"bar\", and \"baz\" above are shorthand notation for\n> things in practice that are already somewhat hard to read, as in:\n>\n>   command_fails_like(\n>       [ 'pg_dump', 'qqq', 'abc' ],\n>       qr/\\Qpg_dump: error: too many command-line arguments (first is\n> \"abc\")\\E/,\n>       'pg_dump: too many command-line arguments');\n>\n> but adding more to that cruft just makes it worse.  Right?\n>\n\nOK, I agree that we're getting rather baroque here. I could go with your\nsuggestion of YA function, or possibly a solution that simple passes any\nextra arguments straight through to IPC::Run::run(), e.g.\n\ncommand_fails_like(\n      [ 'pg_dump', 'qqq', 'abc' ],\n      qr/\\Qpg_dump: error: too many command-line arguments (first is\n\"abc\")\\E/,\n      'pg_dump: too many command-line arguments',\n      '<pty<', \\$eof_in);\n\nThat means we're not future-proofing the function - we'll never be able\nto add more arguments to it, but I'm not really certain that matters\nanyway. I should note that perlcritic whines about subroutines with too\nmany arguments, so making provision for more seems unnecessary anyway.\n\nI don't think this is worth spending a huge amount of time on, we've\nalready spent more time discussing it than it would take to implement\neither solution.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 9 Nov 2019 08:25:10 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\nOn 11/9/19 8:25 AM, Andrew Dunstan wrote:\n> OK, I agree that we're getting rather baroque here. 
I could go with your\n> suggestion of YA function, or possibly a solution that simple passes any\n> extra arguments straight through to IPC::Run::run(), e.g.\n>\n> command_fails_like(\n>       [ 'pg_dump', 'qqq', 'abc' ],\n>       qr/\\Qpg_dump: error: too many command-line arguments (first is\n> \"abc\")\\E/,\n>       'pg_dump: too many command-line arguments',\n>       '<pty<', \\$eof_in);\n>\n> That means we're not future-proofing the function - we'll never be able\n> to add more arguments to it, but I'm not really certain that matters\n> anyway. I should note that perlcritic whines about subroutines with too\n> many arguments, so making provision for more seems unnecessary anyway.\n>\n> I don't think this is worth spending a huge amount of time on, we've\n> already spent more time discussing it than it would take to implement\n> either solution.\n>\n>\n\nOn further consideration, I'm wondering why we don't just\nunconditionally use a closed input pty for all these functions (except\nrun_log). None of them use any input, and making the client worry about\nwhether or not to close it seems something of an abstraction break.\nThere would be no API change at all involved in this case, just a bit of\nextra cleanliness. Would need testing on Windows, I'll go and do that now.\n\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 11 Nov 2019 11:48:25 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\n\nOn 11/11/19 8:48 AM, Andrew Dunstan wrote:\n> \n> On 11/9/19 8:25 AM, Andrew Dunstan wrote:\n>> OK, I agree that we're getting rather baroque here. 
I could go with your\n>> suggestion of YA function, or possibly a solution that simple passes any\n>> extra arguments straight through to IPC::Run::run(), e.g.\n>>\n>> command_fails_like(\n>>       [ 'pg_dump', 'qqq', 'abc' ],\n>>       qr/\\Qpg_dump: error: too many command-line arguments (first is\n>> \"abc\")\\E/,\n>>       'pg_dump: too many command-line arguments',\n>>       '<pty<', \\$eof_in);\n>>\n>> That means we're not future-proofing the function - we'll never be able\n>> to add more arguments to it, but I'm not really certain that matters\n>> anyway. I should note that perlcritic whines about subroutines with too\n>> many arguments, so making provision for more seems unnecessary anyway.\n>>\n>> I don't think this is worth spending a huge amount of time on, we've\n>> already spent more time discussing it than it would take to implement\n>> either solution.\n>>\n>>\n> \n> On further consideration, I'm wondering why we don't just\n> unconditionally use a closed input pty for all these functions (except\n> run_log). None of them use any input, and making the client worry about\n> whether or not to close it seems something of an abstraction break.\n> There would be no API change at all involved in this case, just a bit of\n> extra cleanliness. Would need testing on Windows, I'll go and do that now.\n> \n> \n> Thoughts?\n\nThat sounds a lot better than your previous patch.\n\nPostgresNode.pm and TestLib.pm both invoke IPC::Run::run. If you change \nall the invocations in TestLib to close input pty, should you do the \nsame for PostgresNode? 
I don't have a strong argument for doing so, but \nit seems cleaner to have both libraries invoking commands under \nidentical conditions, such that if commands were borrowed from one \nlibrary and called from the other they would behave the same.\n\nPostgresNode already uses TestLib, so perhaps setting up the environment \ncan be abstracted into something, perhaps TestLib::run, and then used \neverywhere that IPC::Run::run currently is used.\n\n\n-- \nMark Dilger\n\n\n", "msg_date": "Mon, 11 Nov 2019 10:27:35 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\nOn 11/11/19 1:27 PM, Mark Dilger wrote:\n>\n>\n> On 11/11/19 8:48 AM, Andrew Dunstan wrote:\n>>\n>> On 11/9/19 8:25 AM, Andrew Dunstan wrote:\n>>> OK, I agree that we're getting rather baroque here. I could go with\n>>> your\n>>> suggestion of YA function, or possibly a solution that simple passes\n>>> any\n>>> extra arguments straight through to IPC::Run::run(), e.g.\n>>>\n>>> command_fails_like(\n>>>        [ 'pg_dump', 'qqq', 'abc' ],\n>>>        qr/\\Qpg_dump: error: too many command-line arguments (first is\n>>> \"abc\")\\E/,\n>>>        'pg_dump: too many command-line arguments',\n>>>        '<pty<', \\$eof_in);\n>>>\n>>> That means we're not future-proofing the function - we'll never be able\n>>> to add more arguments to it, but I'm not really certain that matters\n>>> anyway. I should note that perlcritic whines about subroutines with too\n>>> many arguments, so making provision for more seems unnecessary anyway.\n>>>\n>>> I don't think this is worth spending a huge amount of time on, we've\n>>> already spent more time discussing it than it would take to implement\n>>> either solution.\n>>>\n>>>\n>>\n>> On further consideration, I'm wondering why we don't just\n>> unconditionally use a closed input pty for all these functions (except\n>> run_log). 
None of them use any input, and making the client worry about\n>> whether or not to close it seems something of an abstraction break.\n>> There would be no API change at all involved in this case, just a bit of\n>> extra cleanliness. Would need testing on Windows, I'll go and do that\n>> now.\n>>\n>>\n>> Thoughts?\n>\n> That sounds a lot better than your previous patch.\n>\n> PostgresNode.pm and TestLib.pm both invoke IPC::Run::run.  If you\n> change all the invocations in TestLib to close input pty, should you\n> do the same for PostgresNode?  I don't have a strong argument for\n> doing so, but it seems cleaner to have both libraries invoking\n> commands under identical conditions, such that if commands were\n> borrowed from one library and called from the other they would behave\n> the same.\n>\n> PostgresNode already uses TestLib, so perhaps setting up the\n> environment can be abstracted into something, perhaps TestLib::run,\n> and then used everywhere that IPC::Run::run currently is used.\n\n\n\nI don't think we need to do that. In the case of the PostgresNode.pm\nuses we know what the executable is, unlike the the TestLib.pm cases.\nThey are our own executables and we don't expect them to be doing\nanything funky with /dev/tty.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 11 Nov 2019 14:28:23 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\n\nOn 11/11/19 11:28 AM, Andrew Dunstan wrote:\n> \n> On 11/11/19 1:27 PM, Mark Dilger wrote:\n>>\n>>\n>> On 11/11/19 8:48 AM, Andrew Dunstan wrote:\n>>>\n>>> On 11/9/19 8:25 AM, Andrew Dunstan wrote:\n>>>> OK, I agree that we're getting rather baroque here. 
I could go with\n>>>> your\n>>>> suggestion of YA function, or possibly a solution that simple passes\n>>>> any\n>>>> extra arguments straight through to IPC::Run::run(), e.g.\n>>>>\n>>>> command_fails_like(\n>>>>        [ 'pg_dump', 'qqq', 'abc' ],\n>>>>        qr/\\Qpg_dump: error: too many command-line arguments (first is\n>>>> \"abc\")\\E/,\n>>>>        'pg_dump: too many command-line arguments',\n>>>>        '<pty<', \\$eof_in);\n>>>>\n>>>> That means we're not future-proofing the function - we'll never be able\n>>>> to add more arguments to it, but I'm not really certain that matters\n>>>> anyway. I should note that perlcritic whines about subroutines with too\n>>>> many arguments, so making provision for more seems unnecessary anyway.\n>>>>\n>>>> I don't think this is worth spending a huge amount of time on, we've\n>>>> already spent more time discussing it than it would take to implement\n>>>> either solution.\n>>>>\n>>>>\n>>>\n>>> On further consideration, I'm wondering why we don't just\n>>> unconditionally use a closed input pty for all these functions (except\n>>> run_log). None of them use any input, and making the client worry about\n>>> whether or not to close it seems something of an abstraction break.\n>>> There would be no API change at all involved in this case, just a bit of\n>>> extra cleanliness. Would need testing on Windows, I'll go and do that\n>>> now.\n>>>\n>>>\n>>> Thoughts?\n>>\n>> That sounds a lot better than your previous patch.\n>>\n>> PostgresNode.pm and TestLib.pm both invoke IPC::Run::run.  If you\n>> change all the invocations in TestLib to close input pty, should you\n>> do the same for PostgresNode?  
I don't have a strong argument for\n>> doing so, but it seems cleaner to have both libraries invoking\n>> commands under identical conditions, such that if commands were\n>> borrowed from one library and called from the other they would behave\n>> the same.\n>>\n>> PostgresNode already uses TestLib, so perhaps setting up the\n>> environment can be abstracted into something, perhaps TestLib::run,\n>> and then used everywhere that IPC::Run::run currently is used.\n> \n> \n> \n> I don't think we need to do that. In the case of the PostgresNode.pm\n> uses we know what the executable is, unlike the the TestLib.pm cases.\n> They are our own executables and we don't expect them to be doing\n> anything funky with /dev/tty.\n\nOk. I think your proposal sounds fine.\n\n-- \nMark Dilger\n\n\n", "msg_date": "Mon, 11 Nov 2019 13:28:37 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "On 11/11/19 4:28 PM, Mark Dilger wrote:\n>\n>\n>>>>>\n>>>>\n>>>> On further consideration, I'm wondering why we don't just\n>>>> unconditionally use a closed input pty for all these functions (except\n>>>> run_log). None of them use any input, and making the client worry\n>>>> about\n>>>> whether or not to close it seems something of an abstraction break.\n>>>> There would be no API change at all involved in this case, just a\n>>>> bit of\n>>>> extra cleanliness. Would need testing on Windows, I'll go and do that\n>>>> now.\n>>>>\n>>>>\n>>>> Thoughts?\n>>>\n>>> That sounds a lot better than your previous patch.\n>>>\n>>> PostgresNode.pm and TestLib.pm both invoke IPC::Run::run.  If you\n>>> change all the invocations in TestLib to close input pty, should you\n>>> do the same for PostgresNode?  
I don't have a strong argument for\n>>> doing so, but it seems cleaner to have both libraries invoking\n>>> commands under identical conditions, such that if commands were\n>>> borrowed from one library and called from the other they would behave\n>>> the same.\n>>>\n>>> PostgresNode already uses TestLib, so perhaps setting up the\n>>> environment can be abstracted into something, perhaps TestLib::run,\n>>> and then used everywhere that IPC::Run::run currently is used.\n>>\n>>\n>>\n>> I don't think we need to do that. In the case of the PostgresNode.pm\n>> uses we know what the executable is, unlike the the TestLib.pm cases.\n>> They are our own executables and we don't expect them to be doing\n>> anything funky with /dev/tty.\n>\n> Ok.  I think your proposal sounds fine.\n\n\n\nHere's a patch for that. The pty stuff crashes and burns on my Windows\ntest box, so there I just set stdin to an empty string via the usual\npipe mechanism.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 25 Nov 2019 08:08:56 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\n\nOn 11/25/19 5:08 AM, Andrew Dunstan wrote:\n> \n> On 11/11/19 4:28 PM, Mark Dilger wrote:\n>>\n>>\n>>>>>>\n>>>>>\n>>>>> On further consideration, I'm wondering why we don't just\n>>>>> unconditionally use a closed input pty for all these functions (except\n>>>>> run_log). None of them use any input, and making the client worry\n>>>>> about\n>>>>> whether or not to close it seems something of an abstraction break.\n>>>>> There would be no API change at all involved in this case, just a\n>>>>> bit of\n>>>>> extra cleanliness. 
Would need testing on Windows, I'll go and do that\n>>>>> now.\n>>>>>\n>>>>>\n>>>>> Thoughts?\n>>>>\n>>>> That sounds a lot better than your previous patch.\n>>>>\n>>>> PostgresNode.pm and TestLib.pm both invoke IPC::Run::run.  If you\n>>>> change all the invocations in TestLib to close input pty, should you\n>>>> do the same for PostgresNode?  I don't have a strong argument for\n>>>> doing so, but it seems cleaner to have both libraries invoking\n>>>> commands under identical conditions, such that if commands were\n>>>> borrowed from one library and called from the other they would behave\n>>>> the same.\n>>>>\n>>>> PostgresNode already uses TestLib, so perhaps setting up the\n>>>> environment can be abstracted into something, perhaps TestLib::run,\n>>>> and then used everywhere that IPC::Run::run currently is used.\n>>>\n>>>\n>>>\n>>> I don't think we need to do that. In the case of the PostgresNode.pm\n>>> uses we know what the executable is, unlike the the TestLib.pm cases.\n>>> They are our own executables and we don't expect them to be doing\n>>> anything funky with /dev/tty.\n>>\n>> Ok.  I think your proposal sounds fine.\n> \n> \n> \n> Here's a patch for that. The pty stuff crashes and burns on my Windows\n> test box, so there I just set stdin to an empty string via the usual\n> pipe mechanism.\n\nOk, I've reviewed and tested this. It works fine for me on Linux. I\nam not set up to test it on Windows. I think it is ready to commit.\n\nI have one remaining comment about the code, and this is just FYI. I\nwon't quibble with you committing your patch as it currently stands.\n\nYou might consider changing the '\\x04' literal to use a named control\ncharacter, both for readability and portability, as here:\n\n+ use charnames ':full';\n+ @no_stdin = ('<pty<', \\\"\\N{END OF TRANSMISSION}\");\n\nThe only character set I can find where this matters is EBCDIC, in\nwhich the EOT character is 55 rather than 4. 
Since EBCDIC does not\noccur in the list of supported character sets for postgres, per the\ndocs section 23.3.1, I don't suppose it matters too much. Nor can\nI test how this works on EBCDIC, so I'm mostly guessing that perl\nwould do the right thing there. But, at least to my eyes, it is\nmore immediately clear what the code is doing when the control\ncharacter name is spelled out.\n\n\n-- \nMark Dilger\n\n\n", "msg_date": "Mon, 25 Nov 2019 10:56:38 -0800", "msg_from": "Mark Dilger <hornschnorter@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TestLib::command_fails_like enhancement" }, { "msg_contents": "\nOn 11/25/19 1:56 PM, Mark Dilger wrote:\n>\n>\n> On 11/25/19 5:08 AM, Andrew Dunstan wrote:\n>>\n>> On 11/11/19 4:28 PM, Mark Dilger wrote:\n>>>\n>>>\n>>>>>>>\n>>>>>>\n>>>>>> On further consideration, I'm wondering why we don't just\n>>>>>> unconditionally use a closed input pty for all these functions\n>>>>>> (except\n>>>>>> run_log). None of them use any input, and making the client worry\n>>>>>> about\n>>>>>> whether or not to close it seems something of an abstraction break.\n>>>>>> There would be no API change at all involved in this case, just a\n>>>>>> bit of\n>>>>>> extra cleanliness. Would need testing on Windows, I'll go and do\n>>>>>> that\n>>>>>> now.\n>>>>>>\n>>>>>>\n>>>>>> Thoughts?\n>>>>>\n>>>>> That sounds a lot better than your previous patch.\n>>>>>\n>>>>> PostgresNode.pm and TestLib.pm both invoke IPC::Run::run.  If you\n>>>>> change all the invocations in TestLib to close input pty, should you\n>>>>> do the same for PostgresNode?  
I don't have a strong argument for\n>>>>> doing so, but it seems cleaner to have both libraries invoking\n>>>>> commands under identical conditions, such that if commands were\n>>>>> borrowed from one library and called from the other they would behave\n>>>>> the same.\n>>>>>\n>>>>> PostgresNode already uses TestLib, so perhaps setting up the\n>>>>> environment can be abstracted into something, perhaps TestLib::run,\n>>>>> and then used everywhere that IPC::Run::run currently is used.\n>>>>\n>>>>\n>>>>\n>>>> I don't think we need to do that. In the case of the PostgresNode.pm\n>>>> uses we know what the executable is, unlike the the TestLib.pm cases.\n>>>> They are our own executables and we don't expect them to be doing\n>>>> anything funky with /dev/tty.\n>>>\n>>> Ok.  I think your proposal sounds fine.\n>>\n>>\n>>\n>> Here's a patch for that. The pty stuff crashes and burns on my Windows\n>> test box, so there I just set stdin to an empty string via the usual\n>> pipe mechanism.\n>\n> Ok, I've reviewed and tested this.  It works fine for me on Linux.  I\n> am not set up to test it on Windows.  I think it is ready to commit.\n>\n> I have one remaining comment about the code, and this is just FYI.  I\n> won't quibble with you committing your patch as it currently stands.\n>\n> You might consider changing the '\\x04' literal to use a named control\n> character, both for readability and portability, as here:\n>\n> +               use charnames ':full';\n> +               @no_stdin = ('<pty<', \\\"\\N{END OF TRANSMISSION}\");\n>\n> The only character set I can find where this matters is EBCDIC, in\n> which the EOT character is 55 rather than 4.  Since EBCDIC does not\n> occur in the list of supported character sets for postgres, per the\n> docs section 23.3.1, I don't suppose it matters too much.  Nor can\n> I test how this works on EBCDIC, so I'm mostly guessing that perl\n> would do the right thing there.  
But, at least to my eyes, it is\n> more immediately clear what the code is doing when the control\n> character name is spelled out.\n>\n>\n\n\nAgreed, I'll do it that way. This is quite timely, as I just finished\nreworking the patch that relies on it. Thanks for the review.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 25 Nov 2019 15:07:10 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: TestLib::command_fails_like enhancement" } ]
[ { "msg_contents": "Hello Devs,\n\nThis patch moves duplicated query cancellation code code from psql & \nscripts to fe-utils, so that it is shared and may be used by other \ncommands.\n\nThis is because Masao-san suggested to add a query cancellation feature to \npgbench for long queries (server-side data generation being discussed, but \npossibly pk and fk could use that as well).\n\n-- \nFabien.", "msg_date": "Thu, 31 Oct 2019 19:43:36 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "fe-utils - share query cancellation code" }, { "msg_contents": "On Thu, Oct 31, 2019 at 11:43 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Devs,\n>\n> This patch moves duplicated query cancellation code code from psql &\n> scripts to fe-utils, so that it is shared and may be used by other\n> commands.\n>\n> This is because Masao-san suggested to add a query cancellation feature to\n> pgbench for long queries (server-side data generation being discussed, but\n> possibly pk and fk could use that as well).\n>\n> --\n> Fabien.\n\n\nI give a quick look and I think we can\n\nvoid\npsql_setup_cancel_handler(void)\n{\n#ifndef WIN32\n setup_cancel_handler(psql_sigint_callback);\n#else\n setup_cancel_handler();\n#endif /* WIN32 */\n}\n\nto\n\nvoid\npsql_setup_cancel_handler(void)\n{\n setup_cancel_handler(psql_sigint_callback);\n}\n\nBecause it does not matter for setup_cancel_handler what we passed\nbecause it is ignoring that in case of windows.\n\nHmm, need to remove the assert in the function\n\"setup_cancel_handler\"\n\n-- Ibrar Ahmed
", "msg_date": "Fri, 1 Nov 2019 00:50:40 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "Hello,\n\n> I give a quick look and I think we can\n>\n> void\n> psql_setup_cancel_handler(void)\n> {\n> setup_cancel_handler(psql_sigint_callback);\n> }\n>\n> Because it does not matter for setup_cancel_handler what we passed\n> because it is ignoring that in case of windows.\n\nThe \"psql_sigint_callback\" function is not defined under WIN32.\n\nI've fixed a missing NULL argument in the section you pointed out, though.\n\nI've used the shared infrastructure in pgbench.\n\nI've noticed yet another instance of the cancelation stuff in \n\"src/bin/pg_dump/parallel.c\", but it seems somehow different from the two \nothers, so I have not tried to used the shared version.\n\n-- \nFabien.", "msg_date": "Fri, 1 Nov 2019 10:19:04 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On 2019-Nov-01, Fabien COELHO wrote:\n\n> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> index 03bcd22996..389b4d7bcd 100644\n> --- a/src/bin/pgbench/pgbench.c\n> +++ b/src/bin/pgbench/pgbench.c\n> @@ -59,9 +59,10 @@\n> \n> #include \"common/int.h\"\n> #include \"common/logging.h\"\n> -#include 
\"fe_utils/conditional.h\"\n> #include \"getopt_long.h\"\n> #include \"libpq-fe.h\"\n> +#include \"fe_utils/conditional.h\"\n> +#include \"fe_utils/cancel.h\"\n> #include \"pgbench.h\"\n> #include \"portability/instr_time.h\"\n\nwtf?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Nov 2019 10:30:19 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "\nHello Alvaro,\n\n>> #include \"common/int.h\"\n>> #include \"common/logging.h\"\n>> -#include \"fe_utils/conditional.h\"\n>> #include \"getopt_long.h\"\n>> #include \"libpq-fe.h\"\n>> +#include \"fe_utils/conditional.h\"\n>> +#include \"fe_utils/cancel.h\"\n>> #include \"pgbench.h\"\n>> #include \"portability/instr_time.h\"\n>\n> wtf?\n\nI understand that you are unhappy about something, but where the issue is \nfails me, the \"wtf\" 3 characters are not enough to point me in the right \ndirection. Feel free to elaborate a little bit more:-)\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 1 Nov 2019 16:26:42 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On 2019-Nov-01, Fabien COELHO wrote:\n\n> > > #include \"common/int.h\"\n> > > #include \"common/logging.h\"\n> > > -#include \"fe_utils/conditional.h\"\n> > > #include \"getopt_long.h\"\n> > > #include \"libpq-fe.h\"\n> > > +#include \"fe_utils/conditional.h\"\n> > > +#include \"fe_utils/cancel.h\"\n> > > #include \"pgbench.h\"\n> > > #include \"portability/instr_time.h\"\n> > \n> > wtf?\n> \n> I understand that you are unhappy about something, but where the issue is\n> fails me, the \"wtf\" 3 characters are not enough to point me in the right\n> direction. 
Feel free to elaborate a little bit more:-)\n\nI don't see why you move the \"conditional.h\" line out of its correct\nalphabetical position (where it is now), and then add \"cancel.h\" next to\nit also out of its correct alphabetical position.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Nov 2019 12:30:34 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": ">> I understand that you are unhappy about something, but where the issue is\n>> fails me, the \"wtf\" 3 characters are not enough to point me in the right\n>> direction. Feel free to elaborate a little bit more:-)\n>\n> I don't see why you move the \"conditional.h\" line out of its correct\n> alphabetical position (where it is now), and then add \"cancel.h\" next to\n> it also out of its correct alphabetical position.\n\nBecause \"cancel.h\" requires PGconn declaration, thus must appear after \n\"libpq-fe.h\", and once I put it after that letting \"conditional.h\" above & \nalone looked a little bit silly. I put cancel after conditional because it \nwas the new addition, which is somehow logical, although not alphabetical.\n\nNow I can put cancel before conditional, sure.\n\nPatch v3 attached does that.\n\n-- \nFabien.", "msg_date": "Fri, 1 Nov 2019 18:41:52 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On 2019-Nov-01, Fabien COELHO wrote:\n\n> Because \"cancel.h\" requires PGconn declaration, thus must appear after\n> \"libpq-fe.h\",\n\n
Our policy is\nthat headers compile standalone (c.h / postgres_fe.h / postgres.h\nexcluded).\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Nov 2019 16:17:42 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "> Then you need to add #include libpq-fe.h in cancel.h. Our policy is\n> that headers compile standalone (c.h / postgres_fe.h / postgres.h\n> excluded).\n\nOk. I do not make a habit of adding headers in postgres, so I did not \nnotice there was an alphabetical logic to that.\n\nAttached patch v4 does it.\n\n-- \nFabien.", "msg_date": "Fri, 1 Nov 2019 22:38:09 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On Sat, Nov 2, 2019 at 10:38 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Attached patch v4 does it.\n\nHi Fabien,\n\nIt looks like don't define sigint_interrupt_jmp and\nsigint_interrupt_enabled on Windows, yet they are still declared and\nreferenced?\n\ncommand.obj : error LNK2001: unresolved external symbol\nsigint_interrupt_enabled [C:\\projects\\postgresql\\psql.vcxproj]\ncopy.obj : error LNK2001: unresolved external symbol\nsigint_interrupt_enabled [C:\\projects\\postgresql\\psql.vcxproj]\ninput.obj : error LNK2001: unresolved external symbol\nsigint_interrupt_enabled [C:\\projects\\postgresql\\psql.vcxproj]\nmainloop.obj : error LNK2001: unresolved external symbol\nsigint_interrupt_enabled [C:\\projects\\postgresql\\psql.vcxproj]\ncommand.obj : error LNK2001: unresolved external symbol\nsigint_interrupt_jmp [C:\\projects\\postgresql\\psql.vcxproj]\ncopy.obj : error LNK2001: unresolved external symbol\nsigint_interrupt_jmp [C:\\projects\\postgresql\\psql.vcxproj]\nmainloop.obj : error LNK2001: unresolved 
external symbol\nsigint_interrupt_jmp [C:\\projects\\postgresql\\psql.vcxproj]\n.\\Release\\psql\\psql.exe : fatal error LNK1120: 2 unresolved externals\n[C:\\projects\\postgresql\\psql.vcxproj]\n0 Warning(s)\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.64074\n\n\n", "msg_date": "Mon, 4 Nov 2019 14:28:36 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "> It looks like don't define sigint_interrupt_jmp and \n> sigint_interrupt_enabled on Windows, yet they are still declared and \n> referenced?\n\nIndeed, I put it on the wrong side of a \"#ifndef WIN32\".\n\nBasically it is a false constant under WIN32, which it seems does not have \nsigint handler, but the code checks whether the non existent handler is \nenabled anyway.\n\nPatch v5 attached fixes that, hopefully.\n\n-- \nFabien.", "msg_date": "Wed, 6 Nov 2019 10:41:39 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On Wed, Nov 06, 2019 at 10:41:39AM +0100, Fabien COELHO wrote:\n> Indeed, I put it on the wrong side of a \"#ifndef WIN32\".\n> \n> Basically it is a false constant under WIN32, which it seems does not have\n> sigint handler, but the code checks whether the non existent handler is\n> enabled anyway.\n> \n> Patch v5 attached fixes that, hopefully.\n\nI have looked at this one, and found a couple of issues, most of them\nsmall-ish.\n\ns/cancelation/cancellation/ in fe_utils/cancel.h.\n\nThen, the format of the new file headers was not really consistent\nwith the rest, and the order of the headers included in most of the\nfiles was incorrect. 
That would break the recent flow of commits done\nby Amit K.\n\nThe query cancellation added to pgbench is different than the actual\nrefactoring, and it is a result of the refactoring, so I would rather\nsplit that into two different commits for clarity. The split is easy\nenough, so that's fine not to send two different patches.\n\nCompilation of the patch fails for me on Windows for psql:\nunresolved external symbol sigint_interrupt_jmp \nPlease note that Mr Robot complains as well at build time:\nhttp://commitfest.cputube.org/fabien-coelho.html\n\nVisibly the problem here is that sigint_interrupt_jmp is declared in\ncommon.h, but you have moved it to a non-WIN32 section of the code in\npsql/common.c. And actually, note that copy.c and mainloop.c make use\nof it...\n\nI would not worry much about SIGTERM as you mentioned in the comments,\nquery cancellations are associated to SIGINT now. There could be an\nargument possible later to allow passing down a custom signal though.\n\nAttached is an updated patch with a couple of edits I have done,\nincluding the removal of some noise diffs and the previous edits. I\nam switching the patch as waiting on author, bumping it to next CF at\nthe same time. Could you please fix the Windows issue?\n--\nMichael", "msg_date": "Thu, 28 Nov 2019 16:52:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "Bonjour Michaël,\n\n> The query cancellation added to pgbench is different than the actual\n> refactoring, and it is a result of the refactoring, so I would rather\n> split that into two different commits for clarity. 
The split is easy\n> enough, so that's fine not to send two different patches.\n\nYep, different file set.\n\n> Compilation of the patch fails for me on Windows for psql:\n> unresolved external symbol sigint_interrupt_jmp\n> Please note that Mr Robot complains as well at build time:\n> http://commitfest.cputube.org/fabien-coelho.html\n>\n> Visibly the problem here is that sigint_interrupt_jmp is declared in\n> common.h, but you have moved it to a non-WIN32 section of the code in\n> psql/common.c. And actually, note that copy.c and mainloop.c make use\n> of it...\n\nIndeed.\n\n> I would not worry much about SIGTERM as you mentioned in the comments,\n> query cancellations are associated to SIGINT now. There could be an\n> argument possible later to allow passing down a custom signal though.\n\nOk.\n\n> Attached is an updated patch with a couple of edits I have done,\n> including the removal of some noise diffs and the previous edits.\n\nThanks!\n\n> I am switching the patch as waiting on author, bumping it to next CF at \n> the same time. Could you please fix the Windows issue?\n\nI do not have a Windows host, so I can only do blind tests. I just moved \nthe declaration out of !WIN32 scope in attached v7, which might solve the \nresolution error. All other issues pointed out above seem fixed in the v6 \nyou sent.\n\n-- \nFabien.", "msg_date": "Fri, 29 Nov 2019 08:44:25 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On Fri, Nov 29, 2019 at 08:44:25AM +0100, Fabien COELHO wrote:\n> I do not have a Windows host, so I can only do blind tests. I just moved the\n> declaration out of !WIN32 scope in attached v7, which might solve the\n> resolution error. 
All other issues pointed out above seem fixed in the v6\n> you sent.\n\nCommitted the patch after splitting things into two commits and after\ntesting things from Linux and from a Windows console: the actual\nrefactoring and the pgbench changes. While polishing the code, I have\nfound the upthread argument of Ibrar quite appealing because there are\nuse cases where a callback can be interesting on Windows, like simply\nbeing able to log the cancel event to a different source. So I have\nremoved the callback restriction and the assertion, making the\ncallback of psql a no-op on Windows. A second thing is that two large\ncomments originally in psql had better be moved to cancel.c because\nthe logic with libpq cancel routines applies only there.\n--\nMichael", "msg_date": "Mon, 2 Dec 2019 11:54:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On Mon, Dec 02, 2019 at 11:54:02AM +0900, Michael Paquier wrote:\n> Committed the patch after splitting things into two commits and after\n> testing things from Linux and from a Windows console: the actual\n> refactoring and the pgbench changes.\n\nI have found that we have a useless declaration of CancelRequested in\ncommon.h, which is already part of cancel.h. On top of that I think\nthat we need to rework a bit the header inclusions of bin/scripts/, as\nper the attached. 
A small set of issues, still these are issues.\nSorry for having missed these.\n--\nMichael", "msg_date": "Tue, 3 Dec 2019 19:16:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "Bonjour Michaël,\n\n>> Committed the patch after splitting things into two commits and after \n>> testing things from Linux and from a Windows console: the actual \n>> refactoring and the pgbench changes.\n>\n> I have found that we have a useless declaration of CancelRequested in \n> common.h, which is already part of cancel.h.\n\nOk.\n\n> On top of that I think that we need to rework a bit the header \n> inclusions of bin/scripts/, as per the attached.\n\nLooks fine to me: patch applies, compiles, runs.\n\n-- \nFabien.", "msg_date": "Tue, 3 Dec 2019 13:11:27 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: fe-utils - share query cancellation code" }, { "msg_contents": "On Tue, Dec 03, 2019 at 01:11:27PM +0100, Fabien COELHO wrote:\n> Looks fine to me: patch applies, compiles, runs.\n\nThanks for double-checking. Done.\n--\nMichael", "msg_date": "Wed, 4 Dec 2019 10:10:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fe-utils - share query cancellation code" } ]
[ { "msg_contents": "Hi,\n\nWe currently align byval types such as int4/8, float4/8, timestamp *,\ndate etc, even though we mostly don't need to. When tuples are deformed,\nall byval types are copied out from the tuple data into the\ncorresponding Datum array, therefore the original alignment in the tuple\ndata doesn't matter. This is different from byref types, where the\nDatum formed will often be a pointer into the tuple data.\n\nWhile there are some older systems where it could be a bit slower to\ncopy data out from unaligned positions into the datum array, this is\nmore than bought back by the next point:\n\n\nThe fact that these types are aligned has substantial costs:\n\nFor one, we often waste substantial amounts of space inside tables with\nalignment padding. It's not uncommon to see about 30% or more of space\nwasted (especially when taking alignment of the first column into\naccount).\n\nFor another, and this I think is less obvious, we actually waste\nsubstantial amounts of CPU maintaining the alignment. This is primarily\nthe case because we have to perform to align the pointer to the next\nfield during tuple [de]forming. Those instructions [1] have to be\nexecuted taking time, but what's worse, they also reduce the ability of\nout-of-order execution to hide latencies. There's a hard dependency on\nknowing the offset to the next column to be able to continue with the\nnext column.\n\n\nThere's two reasons why we can't just set the alignment for these types\nto 'c'.\n1) pg_upgrade, for fairly obvious reasons\n2) We map catalog table rows to structs, in a *lot* of places.\n\n\nIt seems to me that, despite the above, it's still worth trying to\nimprove upon the current state, to benefit from reduced space and CPU\nusage.\n\nAs it turns out we already separate out the alignment for a type, and a\ncolumn, between pg_type.typalign and pg_attribute.attalign. 
One way to\ntackle this would be to allow to specify, for byval types only, at\ncolumn creation time whether a column uses a 'struct-mappable' alignment\nor not. When set, set attalign to pg_type.typalign for alignment, when\nnot, to 'c'. By changing pg_dump in binary upgrade mode to emit the\nnecessary options, and by adding such options during bki processing,\nwe'd solve 1) and 2), but otherwise gain the benefits.\n\nAlternatively we could declare such a propert on the table level, but\nthat seems more restrictive, without a corresponding upside.\n\n\nIt's possible that we should do something related with a few varlena\ndatatypes. We currently use intalign for types like text, json, and as\nfar as I can tell that does not make all that much sense. They're not\nstruct mappable *anyway* (and if they were, they'd need to be 8 byte\naligned on common platforms, __alignof__(void*) is 8). We'd have to take\na bit of care to treat the varlena header as unaligned - but we need to\ndo so anyway, because of 1byte varlenas. Short varlenas seems to make it\nless crucial to pursue this, as the average datum that'd benefit is long\nenough to make padding a non-issue. So I don't think it'd make sense to\ntackle this project at the same time.\n\n\nTo fully benefit from the increased tuple deforming speed, it might be\nbeneficial to branch very early between two different versions within\nslot_deform_heap_tuple, having determined whether there's any byval\ncolumns with alignment requirements at slot creation /\nExecSetSlotDescriptor() time (or even set a different callback\ngetsomeattrs callback, but that's a bit more complicated).\n\n\nThoughts?\n\n\nIndependent of the above, I think it might make sense to replace\npg_attribute.attalign with a smallint or such. It's a bit absurd that we\nneed code like\n#define att_align_nominal(cur_offset, attalign) \\\n( \\\n\t((attalign) == 'i') ? INTALIGN(cur_offset) : \\\n\t (((attalign) == 'c') ? 
(uintptr_t) (cur_offset) : \\\n\t (((attalign) == 'd') ? DOUBLEALIGN(cur_offset) : \\\n\t ( \\\n\t\t\tAssertMacro((attalign) == 's'), \\\n\t\t\tSHORTALIGN(cur_offset) \\\n\t ))) \\\n)\n\ninstead of just using TYPEALIGN(). There's no need to adapt CREATE TYPE,\nor pg_type - we should just store the number in\npg_attribute.attalign. That keeps CREATE TYPE 32/64bit/arch independent,\ndoesn't require reconstructing c/s/i/d in pg_dump, simplifies the\ngenerated code [1], and would also \"just\" work for what I described\nearlier in this email.\n\nGreetings,\n\nAndres Freund\n\n\n[1] E.g. as a function of (void *ptr, char attalign) this ends up with assembly\nlike\n\t.globl\talignme\n\t.type\talignme, @function\nalignme:\n.LFB210:\n\t.cfi_startproc\n\tmovq\t%rdi, %rax\n\tcmpb\t$105, %sil\n\tje\t.L496\n\tcmpb\t$99, %sil\n\tje\t.L492\n\taddq\t$7, %rax\n\tleaq\t1(%rdi), %rdi\n\tandq\t$-8, %rax\n\tandq\t$-2, %rdi\n\tcmpb\t$100, %sil\n\tcmovne\t%rdi, %rax\n.L492:\n\tret\n\t.p2align 4,,10\n\t.p2align 3\n.L496:\n\taddq\t$3, %rax\n\tandq\t$-4, %rax\n\tret\n\t.cfi_endproc\n\nusing (void *ptr, int8 attalign) instead yields:\n\t.globl\talignme2\n\t.type\talignme2, @function\nalignme2:\n.LFB211:\n\t.cfi_startproc\n\tmovzbl\t%sil, %esi\n\tleal\t-1(%rsi), %eax\n\tcltq\n\tnegl\t%esi\n\taddq\t%rax, %rdi\n\tmovslq\t%esi, %rax\n\tandq\t%rdi, %rax\n\tret\n\t.cfi_endproc\n\n\n", "msg_date": "Thu, 31 Oct 2019 11:48:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Removing alignment padding for byval types" }, { "msg_contents": "On Thu, Oct 31, 2019 at 11:48:21AM -0700, Andres Freund wrote:\n>Hi,\n>\n>We currently align byval types such as int4/8, float4/8, timestamp *,\n>date etc, even though we mostly don't need to. When tuples are deformed,\n>all byval types are copied out from the tuple data into the\n>corresponding Datum array, therefore the original alignment in the tuple\n>data doesn't matter. 
This is different from byref types, where the\n>Datum formed will often be a pointer into the tuple data.\n>\n>While there are some older systems where it could be a bit slower to\n>copy data out from unaligned positions into the datum array, this is\n>more than bought back by the next point:\n>\n>\n>The fact that these types are aligned has substantial costs:\n>\n>For one, we often waste substantial amounts of space inside tables with\n>alignment padding. It's not uncommon to see about 30% or more of space\n>wasted (especially when taking alignment of the first column into\n>account).\n>\n>For another, and this I think is less obvious, we actually waste\n>substantial amounts of CPU maintaining the alignment. This is primarily\n>the case because we have to perform to align the pointer to the next\n>field during tuple [de]forming. Those instructions [1] have to be\n>executed taking time, but what's worse, they also reduce the ability of\n>out-of-order execution to hide latencies. There's a hard dependency on\n>knowing the offset to the next column to be able to continue with the\n>next column.\n>\n\nRight. Reducing this overhead was one of the goals to allow \"logical\nordering\" of columns in a table (while arbitrarily reordering the\nphysical ones), but that patch got out of hand pretty quickly. Also,\nit'd still keep some of the overhead, because it was not removing the\nalignment/padding entirely.\n\n>\n>There's two reasons why we can't just set the alignment for these types\n>to 'c'.\n>1) pg_upgrade, for fairly obvious reasons\n>2) We map catalog table rows to structs, in a *lot* of places.\n>\n>\n>It seems to me that, despite the above, it's still worth trying to\n>improve upon the current state, to benefit from reduced space and CPU\n>usage.\n>\n>As it turns out we already separate out the alignment for a type, and a\n>column, between pg_type.typalign and pg_attribute.attalign. 
One way to\n>tackle this would be to allow to specify, for byval types only, at\n>column creation time whether a column uses a 'struct-mappable' alignment\n>or not. When set, set attalign to pg_type.typalign for alignment, when\n>not, to 'c'. By changing pg_dump in binary upgrade mode to emit the\n>necessary options, and by adding such options during bki processing,\n>we'd solve 1) and 2), but otherwise gain the benefits.\n>\n>Alternatively we could declare such a property on the table level, but\n>that seems more restrictive, without a corresponding upside.\n>\n\nI don't know, but it seems entirely sufficient specifying this at the\ntable level, no? What would be the use case for removing padding for\nonly some of the columns? I don't see the use case for that.\n\n>\n>It's possible that we should do something related with a few varlena\n>datatypes. We currently use intalign for types like text, json, and as\n>far as I can tell that does not make all that much sense. They're not\n>struct mappable *anyway* (and if they were, they'd need to be 8 byte\n>aligned on common platforms, __alignof__(void*) is 8). We'd have to take\n>a bit of care to treat the varlena header as unaligned - but we need to\n>do so anyway, because of 1byte varlenas. Short varlenas seems to make it\n>less crucial to pursue this, as the average datum that'd benefit is long\n>enough to make padding a non-issue. So I don't think it'd make sense to\n>tackle this project at the same time.\n>\n\nNot sure, but how come it's not failing on the picky platforms, then? On\nx86 it's probably OK because it's pretty permissive, but I'd expect some\nplatforms (s390, parisc, itanium, powerpc, ...) 
to be much pickier.\n\n>\n>To fully benefit from the increased tuple deforming speed, it might be\n>beneficial to branch very early between two different versions within\n>slot_deform_heap_tuple, having determined whether there's any byval\n>columns with alignment requirements at slot creation /\n>ExecSetSlotDescriptor() time (or even set a different callback\n>getsomeattrs callback, but that's a bit more complicated).\n>\n>\n>Thoughts?\n>\n\nSeems reasonable. I certainly agree this padding is pretty annoying, so\nif we can get rid of it without causing issues, that'd be nice. \n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 31 Oct 2019 20:15:12 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Removing alignment padding for byval types" }, { "msg_contents": "Hi,\n\nOn 2019-10-31 20:15:12 +0100, Tomas Vondra wrote:\n> On Thu, Oct 31, 2019 at 11:48:21AM -0700, Andres Freund wrote:\n> > We currently align byval types such as int4/8, float4/8, timestamp *,\n> > date etc, even though we mostly don't need to. When tuples are deformed,\n> > all byval types are copied out from the tuple data into the\n> > corresponding Datum array, therefore the original alignment in the tuple\n> > data doesn't matter. This is different from byref types, where the\n> > Datum formed will often be a pointer into the tuple data.\n> > \n> > While there are some older systems where it could be a bit slower to\n> > copy data out from unaligned positions into the datum array, this is\n> > more than bought back by the next point:\n> > \n> > \n> > The fact that these types are aligned has substantial costs:\n> > \n> > For one, we often waste substantial amounts of space inside tables with\n> > alignment padding. 
It's not uncommon to see about 30% or more of space\n> > wasted (especially when taking alignment of the first column into\n> > account).\n> > \n> > For another, and this I think is less obvious, we actually waste\n> > substantial amounts of CPU maintaining the alignment. This is primarily\n> > the case because we have to perform to align the pointer to the next\n> > field during tuple [de]forming. Those instructions [1] have to be\n> > executed taking time, but what's worse, they also reduce the ability of\n> > out-of-order execution to hide latencies. There's a hard dependency on\n> > knowing the offset to the next column to be able to continue with the\n> > next column.\n> > \n> \n> Right. Reducing this overhead was one of the goals to allow \"logical\n> ordering\" of columns in a table (while arbitrarily reordering the\n> physical ones), but that patch got out of hand pretty quickly. Also,\n> it'd still keep some of the overhead, because it was not removing the\n> alignment/padding entirely.\n\nYea. It'd keep just about all the CPU overhead, because we'd still need\nto align as soon as there is a preceding nulled or varlena colum.\n\nThere's still some benefit for logical column order, as grouping NOT\nNULL fixed-length columns at the start is beneficial. And it's also\nbeneficial to have frequently accessed columns at the start. 
But I think\nthis proposal gains a lot of the space related benefits, at a much lower\ncomplexity, together with a lot of other benefits.\n\n\n> > There's two reasons why we can't just set the alignment for these types\n> > to 'c'.\n> > 1) pg_upgrade, for fairly obvious reasons\n> > 2) We map catalog table rows to structs, in a *lot* of places.\n> > \n> > \n> > It seems to me that, despite the above, it's still worth trying to\n> > improve upon the current state, to benefit from reduced space and CPU\n> > usage.\n> > \n> > As it turns out we already separate out the alignment for a type, and a\n> > column, between pg_type.typalign and pg_attribute.attalign. One way to\n> > tackle this would be to allow to specify, for byval types only, at\n> > column creation time whether a column uses a 'struct-mappable' alignment\n> > or not. When set, set attalign to pg_type.typalign for alignment, when\n> > not, to 'c'. By changing pg_dump in binary upgrade mode to emit the\n> > necessary options, and by adding such options during bki processing,\n> > we'd solve 1) and 2), but otherwise gain the benefits.\n> > \n> > Alternatively we could declare such a propert on the table level, but\n> > that seems more restrictive, without a corresponding upside.\n\n> I don't know, but it seems entirely sufficient specifying this at the\n> table level, no? What would be the use case for removing padding for\n> only some of the columns? I don't see the use case for that.\n\nWell, if we had it on a per-table level, we'd also align\na) catalog table columns that follow the first varlena column - which we don't need\n to align, as they can't be accessed via mapping\nb) columns in pg_upgraded tables that have been added after the upgrade\n\n\n> > It's possible that we should do something related with a few varlena\n> > datatypes. We currently use intalign for types like text, json, and as\n> > far as I can tell that does not make all that much sense. 
They're not\n> > struct mappable *anyway* (and if they were, they'd need to be 8 byte\n> > aligned on common platforms, __alignof__(void*) is 8). We'd have to take\n> > a bit of care to treat the varlena header as unaligned - but we need to\n> > do so anyway, because of 1byte varlenas. Short varlenas seems to make it\n> > less crucial to pursue this, as the average datum that'd benefit is long\n> > enough to make padding a non-issue. So I don't think it'd make sense to\n> > tackle this project at the same time.\n> > \n> \n> Not sure, but how come it's not failing on the picky platforms, then? On\n> x86 it's probably OK because it's pretty permissive, but I'd expect some\n> platforms (s390, parisc, itanium, powerpc, ...) to be much pickier.\n\nI'm not quite following? As I said, we already need to use alignment\naware code due to caring for short varlenas. And these types aren't\nactually struct mappable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Oct 2019 12:24:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Removing alignment padding for byval types" }, { "msg_contents": "On Thu, Oct 31, 2019 at 12:24:33PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-10-31 20:15:12 +0100, Tomas Vondra wrote:\n>> On Thu, Oct 31, 2019 at 11:48:21AM -0700, Andres Freund wrote:\n>> > We currently align byval types such as int4/8, float4/8, timestamp *,\n>> > date etc, even though we mostly don't need to. When tuples are deformed,\n>> > all byval types are copied out from the tuple data into the\n>> > corresponding Datum array, therefore the original alignment in the tuple\n>> > data doesn't matter. 
This is different from byref types, where the\n>> > Datum formed will often be a pointer into the tuple data.\n>> >\n>> > While there are some older systems where it could be a bit slower to\n>> > copy data out from unaligned positions into the datum array, this is\n>> > more than bought back by the next point:\n>> >\n>> >\n>> > The fact that these types are aligned has substantial costs:\n>> >\n>> > For one, we often waste substantial amounts of space inside tables with\n>> > alignment padding. It's not uncommon to see about 30% or more of space\n>> > wasted (especially when taking alignment of the first column into\n>> > account).\n>> >\n>> > For another, and this I think is less obvious, we actually waste\n>> > substantial amounts of CPU maintaining the alignment. This is primarily\n>> > the case because we have to perform to align the pointer to the next\n>> > field during tuple [de]forming. Those instructions [1] have to be\n>> > executed taking time, but what's worse, they also reduce the ability of\n>> > out-of-order execution to hide latencies. There's a hard dependency on\n>> > knowing the offset to the next column to be able to continue with the\n>> > next column.\n>> >\n>>\n>> Right. Reducing this overhead was one of the goals to allow \"logical\n>> ordering\" of columns in a table (while arbitrarily reordering the\n>> physical ones), but that patch got out of hand pretty quickly. Also,\n>> it'd still keep some of the overhead, because it was not removing the\n>> alignment/padding entirely.\n>\n>Yea. It'd keep just about all the CPU overhead, because we'd still need\n>to align as soon as there is a preceding nulled or varlena colum.\n>\n>There's still some benefit for logical column order, as grouping NOT\n>NULL fixed-length columns at the start is beneficial. And it's also\n>beneficial to have frequently accessed columns at the start. 
But I think\n>this proposal gains a lot of the space related benefits, at a much lower\n>complexity, together with a lot of other benefits.\n>\n\n+1\n\n>\n>> > There's two reasons why we can't just set the alignment for these types\n>> > to 'c'.\n>> > 1) pg_upgrade, for fairly obvious reasons\n>> > 2) We map catalog table rows to structs, in a *lot* of places.\n>> >\n>> >\n>> > It seems to me that, despite the above, it's still worth trying to\n>> > improve upon the current state, to benefit from reduced space and CPU\n>> > usage.\n>> >\n>> > As it turns out we already separate out the alignment for a type, and a\n>> > column, between pg_type.typalign and pg_attribute.attalign. One way to\n>> > tackle this would be to allow to specify, for byval types only, at\n>> > column creation time whether a column uses a 'struct-mappable' alignment\n>> > or not. When set, set attalign to pg_type.typalign for alignment, when\n>> > not, to 'c'. By changing pg_dump in binary upgrade mode to emit the\n>> > necessary options, and by adding such options during bki processing,\n>> > we'd solve 1) and 2), but otherwise gain the benefits.\n>> >\n>> > Alternatively we could declare such a propert on the table level, but\n>> > that seems more restrictive, without a corresponding upside.\n>\n>> I don't know, but it seems entirely sufficient specifying this at the\n>> table level, no? What would be the use case for removing padding for\n>> only some of the columns? I don't see the use case for that.\n>\n>Well, if we had it on a per-table level, we'd also align\n>a) catalog table columns that follow the first varlena column - which we don't need\n> to align, as they can't be accessed via mapping\n>b) columns in pg_upgraded tables that have been added after the upgrade\n>\n\nHmm, OK. I think the question is whether it's worth the extra\ncomplexity. I'd say it's not, but perhaps I'm wrong.\n\n>\n>> > It's possible that we should do something related with a few varlena\n>> > datatypes. 
We currently use intalign for types like text, json, and as\n>> > far as I can tell that does not make all that much sense. They're not\n>> > struct mappable *anyway* (and if they were, they'd need to be 8 byte\n>> > aligned on common platforms, __alignof__(void*) is 8). We'd have to take\n>> > a bit of care to treat the varlena header as unaligned - but we need to\n>> > do so anyway, because of 1byte varlenas. Short varlenas seems to make it\n>> > less crucial to pursue this, as the average datum that'd benefit is long\n>> > enough to make padding a non-issue. So I don't think it'd make sense to\n>> > tackle this project at the same time.\n>> >\n>>\n>> Not sure, but how come it's not failing on the picky platforms, then? On\n>> x86 it's probably OK because it's pretty permissive, but I'd expect some\n>> platforms (s390, parisc, itanium, powerpc, ...) to be much pickier.\n>\n>I'm not quite following? As I said, we already need to use alignment\n>aware code due to caring for short varlenas. And these types aren't\n>actually struct mappable.\n>\n\nSorry, I misread/misunderstood what you wrote.\n\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 31 Oct 2019 20:45:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Removing alignment padding for byval types" } ]
[ { "msg_contents": "This patch allows the superuser to grant passwordless connection rights\nin postgres_fdw user mappings.\n\n\nThe patch is authored by my colleague Craig Ringer, with slight bitrot\nfixed by me.\n\n\nOne use case for this is with passphrase-protected client certificates,\na patch for which will follow shortly.\n\n\nHere are Craig's remarks on the patch:\n\n  \n    postgres_fdw denies a non-superuser the ability to establish a\nconnection that\n    doesn't have a password in the connection string, or one that fails\nto actually\n    use the password in authentication. This is to stop the unprivileged\nuser using\n    OS-level authentication as the postgres server (peer, ident, trust).\nIt also\n    stops unauthorized use of local credentials like .pgpass, a service\nfile,\n    client certificate files, etc.\n   \n    Add the ability for a superuser to create user mappings that\noverride this\n    behaviour by setting the passwordless_ok attribute to true in a user\nmapping\n    for a non-superuser. The non-superuser gains the ability to use the\nFDW the\n    mapping applies to even if there's no password in their mapping or\nin the\n    connection string.\n   \n    This is only safe if the superuser has established that the local\nserver is\n    configured safely. It must be configured not to allow\n    trust/peer/ident/sspi/gssapi auth to allow the OS user the postgres\nserver runs\n    as to log in to postgres as a superuser. Client certificate keys can\nbe used\n    too, if accessible. 
But the superuser can already GRANT superrole TO\n    normalrole, so it's not any sort of new power.\n   \n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 31 Oct 2019 16:58:20 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Allow superuser to grant passwordless connection rights on\n postgres_fdw" }, { "msg_contents": "On Thu, Oct 31, 2019 at 4:58 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> This patch allows the superuser to grant passwordless connection rights\n> in postgres_fdw user mappings.\n\nThis is clearly something that we need, as the current code seems\nwoefully ignorant of the fact that passwords are not the only\nauthentication method supported by PostgreSQL, nor even the most\nsecure.\n\nBut, I do wonder a bit if we ought to think harder about the overall\nauthentication model for FDW. 
Like, maybe we'd take a different view\nof how to solve this particular piece of the problem if we were\nthinking about how FDWs could do LDAP authentication, SSL\nauthentication, credentials forwarding...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 12:58:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow superuser to grant passwordless connection rights on\n postgres_fdw" }, { "msg_contents": "\nOn 11/1/19 12:58 PM, Robert Haas wrote:\n> On Thu, Oct 31, 2019 at 4:58 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>> This patch allows the superuser to grant passwordless connection rights\n>> in postgres_fdw user mappings.\n> This is clearly something that we need, as the current code seems\n> woefully ignorant of the fact that passwords are not the only\n> authentication method supported by PostgreSQL, nor even the most\n> secure.\n>\n> But, I do wonder a bit if we ought to think harder about the overall\n> authentication model for FDW. 
Like, maybe we'd take a different view\n> of how to solve this particular piece of the problem if we were\n> thinking about how FDWs could do LDAP authentication, SSL\n> authentication, credentials forwarding...\n>\n\n\nI'm certainly open to alternatives.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 1 Nov 2019 14:00:27 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow superuser to grant passwordless connection rights on\n postgres_fdw" }, { "msg_contents": "Greetings,\n\n* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n> On 11/1/19 12:58 PM, Robert Haas wrote:\n> > On Thu, Oct 31, 2019 at 4:58 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> >> This patch allows the superuser to grant passwordless connection rights\n> >> in postgres_fdw user mappings.\n> > This is clearly something that we need, as the current code seems\n> > woefully ignorant of the fact that passwords are not the only\n> > authentication method supported by PostgreSQL, nor even the most\n> > secure.\n> >\n> > But, I do wonder a bit if we ought to think harder about the overall\n> > authentication model for FDW. 
Like, maybe we'd take a different view\n> > of how to solve this particular piece of the problem if we were\n> > thinking about how FDWs could do LDAP authentication, SSL\n> > authentication, credentials forwarding...\n> \n> I'm certainly open to alternatives.\n\nI've long felt that the way to handle this kind of requirement is to\nhave a \"trusted remote server\" kind of option- where the local server\nauthenticates to the remote server as a *server* and then says \"this is\nthe user on this server, and this is the user that this user wishes to\nbe\" and the remote server is then able to decide if they accept that, or\nnot.\n\nTo be specific, there would be some kind of 'trust' established between\nthe servers and only if there is some kind of server-level\nauthentication, eg: dual TLS auth, or dual GSSAPI auth; and then, a\nmapping is defined for that server, which specifies what remote user is\nallowed to log in as what local user.\n\nThis would be a server-to-server auth arrangement, and is quite\ndifferent from credential forwarding, or similar. I am certainly also a\nhuge fan of the idea that we support Kerberos/GSSAPI credential\nforwarding / delegation, where a client willingly forwards to the PG\nserver a set of credentials which then allow the PG server to\nauthenticate as that user to another system (eg: through an FDW to\nanother PG server).\n\nOf course, as long as we're talking pie-in-the-sky ideas, I would\ncertainly be entirely for supporting both. 
;)\n\nThanks,\n\nStephen", "msg_date": "Sun, 3 Nov 2019 23:20:05 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Allow superuser to grant passwordless connection rights on\n postgres_fdw" }, { "msg_contents": "On Mon, 4 Nov 2019 at 12:20, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n> > On 11/1/19 12:58 PM, Robert Haas wrote:\n> > > On Thu, Oct 31, 2019 at 4:58 PM Andrew Dunstan\n> > > <andrew.dunstan@2ndquadrant.com> wrote:\n> > >> This patch allows the superuser to grant passwordless connection\n> rights\n> > >> in postgres_fdw user mappings.\n> > > This is clearly something that we need, as the current code seems\n> > > woefully ignorant of the fact that passwords are not the only\n> > > authentication method supported by PostgreSQL, nor even the most\n> > > secure.\n> > >\n> > > But, I do wonder a bit if we ought to think harder about the overall\n> > > authentication model for FDW. Like, maybe we'd take a different view\n> > > of how to solve this particular piece of the problem if we were\n> > > thinking about how FDWs could do LDAP authentication, SSL\n> > > authentication, credentials forwarding...\n> >\n> > I'm certainly open to alternatives.\n>\n> I've long felt that the way to handle this kind of requirement is to\n> have a \"trusted remote server\" kind of option- where the local server\n> authenticates to the remote server as a *server* and then says \"this is\n> the user on this server, and this is the user that this user wishes to\n> be\" and the remote server is then able to decide if they accept that, or\n> not.\n>\n\nThe original use case for the patch was to allow FDWs to use SSL/TLS client\ncertificates. Each user-mapping has its own certificate - there's a\nseparate patch to allow that. 
So there's no delegation of trust via\nKerberos etc in that particular case.\n\nI can see value in using Kerberos etc for that too though, as it separates\nauthorization and authentication in the same manner as most sensible\nsystems. You can say \"user postgres@foo is trusted to vet users so you can\nsafely hand out tickets for any bar@foo that postgres@foo says is legit\".\n\nI would strongly discourage allowing all users on host A to authenticate as\nuser postgres on host B. But with appropriate user-mappings support, we\ncould likely support that sort of model for both SSPI and Kerberos.\n\nA necessary prerequisite is that Pg be able to cope with passwordless\nuser-mappings though. Hence this patch.\n\n\n\n>\n> To be specific, there would be some kind of 'trust' established between\n> the servers and only if there is some kind of server-level\n> authentication, eg: dual TLS auth, or dual GSSAPI auth; and then, a\n> mapping is defined for that server, which specifies what remote user is\n> allowed to log in as what local user.\n>\n> This would be a server-to-server auth arrangement, and is quite\n> different from credential forwarding, or similar. I am certainly also a\n> huge fan of the idea that we support Kerberos/GSSAPI credential\n> forwarding / delegation, where a client willingly forwards to the PG\n> server a set of credentials which then allow the PG server to\n> authenticate as that user to another system (eg: through an FDW to\n> another PG server).\n>\n> Of course, as long as we're talking pie-in-the-sky ideas, I would\n> certainly be entirely for supporting both. 
;)\n>\n> Thanks,\n>\n> Stephen\n>\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Mon, 4 Nov 2019 at 12:20, Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n> On 11/1/19 12:58 PM, Robert Haas wrote:\n> > On Thu, Oct 31, 2019 at 4:58 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> >> This patch allows the superuser to grant passwordless connection rights\n> >> in postgres_fdw user mappings.\n> > This is clearly something that we need, as the current code seems\n> > woefully ignorant of the fact that passwords are not the only\n> > authentication method supported by PostgreSQL, nor even the most\n> > secure.\n> >\n> > But, I do wonder a bit if we ought to think harder about the overall\n> > authentication model for FDW. Like, maybe we'd take a different view\n> > of how to solve this particular piece of the problem if we were\n> > thinking about how FDWs could do LDAP authentication, SSL\n> > authentication, credentials forwarding...\n> \n> I'm certainly open to alternatives.\n\nI've long felt that the way to handle this kind of requirement is to\nhave a \"trusted remote server\" kind of option- where the local server\nauthenticates to the remote server as a *server* and then says \"this is\nthe user on this server, and this is the user that this user wishes to\nbe\" and the remote server is then able to decide if they accept that, or\nnot.The original use case for the patch was to allow FDWs to use SSL/TLS client certificates. Each user-mapping has its own certificate - there's a separate patch to allow that. So there's no delegation of trust via Kerberos etc in that particular case.I can see value in using Kerberos etc for that too though, as it separates authorization and authentication in the same manner as most sensible systems. 
You can say \"user postgres@foo is trusted to vet users so you can safely hand out tickets for any bar@foo that postgres@foo says is legit\".I would strongly discourage allowing all users on host A to authenticate as user postgres on host B. But with appropriate user-mappings support, we could likely support that sort of model for both SSPI and Kerberos.A necessary prerequisite is that Pg be able to cope with passwordless user-mappings though. Hence this patch. \n\nTo be specific, there would be some kind of 'trust' established between\nthe servers and only if there is some kind of server-level\nauthentication, eg: dual TLS auth, or dual GSSAPI auth; and then, a\nmapping is defined for that server, which specifies what remote user is\nallowed to log in as what local user.\n\nThis would be a server-to-server auth arrangement, and is quite\ndifferent from credential forwarding, or similar.  I am certainly also a\nhuge fan of the idea that we support Kerberos/GSSAPI credential\nforwarding / delegation, where a client willingly forwards to the PG\nserver a set of credentials which then allow the PG server to\nauthenticate as that user to another system (eg: through an FDW to\nanother PG server).\n\nOf course, as long as we're talking pie-in-the-sky ideas, I would\ncertainly be entirely for supporting both. 
;)\n\nThanks,\n\nStephen\n--  Craig Ringer                   http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Sun, 10 Nov 2019 17:35:36 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow superuser to grant passwordless connection rights on\n postgres_fdw" }, { "msg_contents": "On Sun, Nov 10, 2019 at 4:35 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n> On Mon, 4 Nov 2019 at 12:20, Stephen Frost <sfrost@snowman.net> wrote:\n>>\n>> Greetings,\n>>\n>> * Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n>> > On 11/1/19 12:58 PM, Robert Haas wrote:\n>> > > On Thu, Oct 31, 2019 at 4:58 PM Andrew Dunstan\n>> > > <andrew.dunstan@2ndquadrant.com> wrote:\n>> > >> This patch allows the superuser to grant passwordless connection rights\n>> > >> in postgres_fdw user mappings.\n>> > > This is clearly something that we need, as the current code seems\n>> > > woefully ignorant of the fact that passwords are not the only\n>> > > authentication method supported by PostgreSQL, nor even the most\n>> > > secure.\n>> > >\n>> > > But, I do wonder a bit if we ought to think harder about the overall\n>> > > authentication model for FDW. 
Like, maybe we'd take a different view\n>> > > of how to solve this particular piece of the problem if we were\n>> > > thinking about how FDWs could do LDAP authentication, SSL\n>> > > authentication, credentials forwarding...\n>> >\n>> > I'm certainly open to alternatives.\n>>\n>> I've long felt that the way to handle this kind of requirement is to\n>> have a \"trusted remote server\" kind of option- where the local server\n>> authenticates to the remote server as a *server* and then says \"this is\n>> the user on this server, and this is the user that this user wishes to\n>> be\" and the remote server is then able to decide if they accept that, or\n>> not.\n>\n>\n> The original use case for the patch was to allow FDWs to use SSL/TLS client certificates. Each user-mapping has its own certificate - there's a separate patch to allow that. So there's no delegation of trust via Kerberos etc in that particular case.\n>\n> I can see value in using Kerberos etc for that too though, as it separates authorization and authentication in the same manner as most sensible systems. You can say \"user postgres@foo is trusted to vet users so you can safely hand out tickets for any bar@foo that postgres@foo says is legit\".\n>\n> I would strongly discourage allowing all users on host A to authenticate as user postgres on host B. But with appropriate user-mappings support, we could likely support that sort of model for both SSPI and Kerberos.\n>\n> A necessary prerequisite is that Pg be able to cope with passwordless user-mappings though. Hence this patch.\n>\n>\n\n\nYeah, I agree. Does anyone else want to weigh in on this? 
If nobody\nobjects I'd like to tidy this up and get it committed so we can add\nsupport for client certs in postgres_fdw, which is the real business\nat hand, and which I know from various offline comments a number of\npeople are keen to have.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 25 Nov 2019 16:56:24 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow superuser to grant passwordless connection rights on\n postgres_fdw" }, { "msg_contents": "Greetings,\n\n* Craig Ringer (craig@2ndquadrant.com) wrote:\n> On Mon, 4 Nov 2019 at 12:20, Stephen Frost <sfrost@snowman.net> wrote:\n> > I've long felt that the way to handle this kind of requirement is to\n> > have a \"trusted remote server\" kind of option- where the local server\n> > authenticates to the remote server as a *server* and then says \"this is\n> > the user on this server, and this is the user that this user wishes to\n> > be\" and the remote server is then able to decide if they accept that, or\n> > not.\n> \n> The original use case for the patch was to allow FDWs to use SSL/TLS client\n> certificates. Each user-mapping has its own certificate - there's a\n> separate patch to allow that. So there's no delegation of trust via\n> Kerberos etc in that particular case.\n> \n> I can see value in using Kerberos etc for that too though, as it separates\n> authorization and authentication in the same manner as most sensible\n> systems. 
You can say \"user postgres@foo is trusted to vet users so you can\n> safely hand out tickets for any bar@foo that postgres@foo says is legit\".\n\nSo, just to be clear, the way this *actually* works is a bit different\nfrom the way being described above, last time I looked into Kerberos\ndelegations anyway.\n\nEssentially, the KDC can be set up to allow 'bar@foo' to request a\nticket to delegate to 'postgres@foo', which then allows 'postgres@foo'\nto connect as if they are 'bar@foo' to some other service (and in some\nimplementations, I believe it's further possible to say that the ticket\nfor 'bar@foo' which is delegated to 'postgres@foo' is only allowed to\nrequest tickets for certain specific services, such as 'postgres2@foo'\nor what-have-you).\n\nNote that setting this up with an MIT KDC requires configuring it with\nan LDAP backend as the traditional KDC database doesn't support this\nkind of complex delegation control (again, last time I checked anyway).\n\n> I would strongly discourage allowing all users on host A to authenticate as\n> user postgres on host B. But with appropriate user-mappings support, we\n> could likely support that sort of model for both SSPI and Kerberos.\n\nIdeally, both sides would get a 'vote' regarding what's allowed, I would\nthink. That is, the connecting side would have to have a user mapping\nthat says \"this authenticated user is allowed to connect to this remote\nserver as this user\", and the remote server would have something like\n\"this server that's connecting, validated by the certificate presented\nby the server, is allowed to authenticate as this user\". 
I feel like\nwe're mostly there by allowing the connecting server to use a\ncertificate to connect to the remote server, while it's also checking\nthe user mapping, and the remote server's pg_hba.conf being configured\nto allow cert-based auth with a CN mapping from the CN of the connecting\nserver's certificate to authenticate to whatever users the remote server\nwants to allow. Is that more-or-less the idea here..?\n\n> A necessary prerequisite is that Pg be able to cope with passwordless\n> user-mappings though. Hence this patch.\n\nSure, that part seems like it makes sense to me (and perhaps has now\nbeen done, just catching up on things after travel and holidays and such\nhere in the US).\n\nThanks!\n\nStephen", "msg_date": "Tue, 3 Dec 2019 09:36:23 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Allow superuser to grant passwordless connection rights on\n postgres_fdw" }, { "msg_contents": "On Tue, Dec 3, 2019 at 9:36 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n\n>\n> > A necessary prerequisite is that Pg be able to cope with passwordless\n> > user-mappings though. Hence this patch.\n>\n> Sure, that part seems like it makes sense to me (and perhaps has now\n> been done, just catching up on things after travel and holidays and such\n> here in the US).\n>\n\n\nIt hasn't been done, but I now propose to commit it shortly so other\nwork can proceed.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Dec 2019 11:28:33 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow superuser to grant passwordless connection rights on\n postgres_fdw" } ]
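[Editorial note between threads: the server-to-server certificate model discussed above — the connecting server presents its own client certificate, the remote side maps it to permitted roles, and a superuser explicitly allows a passwordless mapping — can be sketched roughly as below. All host names, paths, and role names are hypothetical, and the `sslcert`/`sslkey` user-mapping options come from the companion patches discussed in this thread rather than from a then-released PostgreSQL.]

```sql
-- On the local server: a foreign server that authenticates to the
-- remote with a client certificate instead of a password.
-- (Hypothetical names/paths; options per the patches in this thread.)
CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'appdb',
             sslmode 'verify-full');

-- password_required 'false' is the superuser-granted permission this
-- thread proposes: the mapping may connect without supplying a password.
CREATE USER MAPPING FOR app_user SERVER remote_pg
    OPTIONS ("user" 'app_user',
             sslcert '/etc/pgfdw/app_user.crt',
             sslkey  '/etc/pgfdw/app_user.key',
             password_required 'false');

-- On the remote server, pg_hba.conf would then accept the certificate
-- and map its CN to allowed roles, e.g.:
--   hostssl appdb app_user 10.0.0.0/8 cert map=fdwmap
```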
[ { "msg_contents": "\nThis patch provides for an sslpassword parameter for libpq, and a hook\nthat a client can fill in for a callback function to set the password.\n\n\nThis provides similar facilities to those already available in the JDBC\ndriver.\n\n\nThere is also a function to fetch the sslpassword from the connection\nparameters, in the same way that other settings can be fetched.\n\n\nThis is mostly the excellent work of my colleague Craig Ringer, with a\nfew embellishments from me.\n\n\nHere are his notes:\n\n\n    Allow libpq to non-interactively decrypt client certificates that\nare stored\n    encrypted by adding a new \"sslpassword\" connection option.\n   \n    The sslpassword option offers a middle ground between a cleartext\nkey and\n    setting up advanced key mangement via openssl engines, PKCS#11, USB\ncrypto\n    offload and key escrow, etc.\n   \n    Previously use of encrypted client certificate keys only worked if\nthe user\n    could enter the key's password interactively on stdin, in response\nto openssl's\n    default prompt callback:\n   \n        Enter PEM passhprase:\n   \n    That's infesible in many situations, especially things like use from\n    postgres_fdw.\n   \n    This change also allows admins to prevent libpq from ever prompting\nfor a\n    password by calling:\n   \n        PQsetSSLKeyPassHook(PQdefaultSSLKeyPassHook);\n   \n    which is useful since OpenSSL likes to open /dev/tty to prompt for a\npassword,\n    so even closing stdin won't stop it blocking if there's no user\ninput available.\n    Applications may also override or extend SSL password fetching with\ntheir own\n    callback.\n   \n    There is deliberately no environment variable equivalent for the\nsslpassword\n    option.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 31 Oct 2019 18:33:12 -0400", "msg_from": "Andrew Dunstan 
<andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "libpq sslpassword parameter and callback function" }, { "msg_contents": "This time with attachment.\n\n\nOn 10/31/19 6:33 PM, Andrew Dunstan wrote:\n> This patch provides for an sslpassword parameter for libpq, and a hook\n> that a client can fill in for a callback function to set the password.\n>\n>\n> This provides similar facilities to those already available in the JDBC\n> driver.\n>\n>\n> There is also a function to fetch the sslpassword from the connection\n> parameters, in the same way that other settings can be fetched.\n>\n>\n> This is mostly the excellent work of my colleague Craig Ringer, with a\n> few embellishments from me.\n>\n>\n> Here are his notes:\n>\n>\n>     Allow libpq to non-interactively decrypt client certificates that\n> are stored\n>     encrypted by adding a new \"sslpassword\" connection option.\n>    \n>     The sslpassword option offers a middle ground between a cleartext\n> key and\n>     setting up advanced key mangement via openssl engines, PKCS#11, USB\n> crypto\n>     offload and key escrow, etc.\n>    \n>     Previously use of encrypted client certificate keys only worked if\n> the user\n>     could enter the key's password interactively on stdin, in response\n> to openssl's\n>     default prompt callback:\n>    \n>         Enter PEM passhprase:\n>    \n>     That's infesible in many situations, especially things like use from\n>     postgres_fdw.\n>    \n>     This change also allows admins to prevent libpq from ever prompting\n> for a\n>     password by calling:\n>    \n>         PQsetSSLKeyPassHook(PQdefaultSSLKeyPassHook);\n>    \n>     which is useful since OpenSSL likes to open /dev/tty to prompt for a\n> password,\n>     so even closing stdin won't stop it blocking if there's no user\n> input available.\n>     Applications may also override or extend SSL password fetching with\n> their own\n>     callback.\n>    \n>     There is deliberately no 
environment variable equivalent for the\n> sslpassword\n>     option.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 31 Oct 2019 18:34:32 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "\nOn 10/31/19 6:34 PM, Andrew Dunstan wrote:\n> This time with attachment.\n>\n>\n> On 10/31/19 6:33 PM, Andrew Dunstan wrote:\n>> This patch provides for an sslpassword parameter for libpq, and a hook\n>> that a client can fill in for a callback function to set the password.\n>>\n>>\n>> This provides similar facilities to those already available in the JDBC\n>> driver.\n>>\n>>\n>> There is also a function to fetch the sslpassword from the connection\n>> parameters, in the same way that other settings can be fetched.\n>>\n>>\n>> This is mostly the excellent work of my colleague Craig Ringer, with a\n>> few embellishments from me.\n>>\n>>\n>> Here are his notes:\n>>\n>>\n>>     Allow libpq to non-interactively decrypt client certificates that\n>> are stored\n>>     encrypted by adding a new \"sslpassword\" connection option.\n>>    \n>>     The sslpassword option offers a middle ground between a cleartext\n>> key and\n>>     setting up advanced key mangement via openssl engines, PKCS#11, USB\n>> crypto\n>>     offload and key escrow, etc.\n>>    \n>>     Previously use of encrypted client certificate keys only worked if\n>> the user\n>>     could enter the key's password interactively on stdin, in response\n>> to openssl's\n>>     default prompt callback:\n>>    \n>>         Enter PEM passhprase:\n>>    \n>>     That's infesible in many situations, especially things like use from\n>>     postgres_fdw.\n>>    \n>>     This change also allows admins to prevent libpq from ever prompting\n>> for a\n>>     password by 
calling:\n>>    \n>>         PQsetSSLKeyPassHook(PQdefaultSSLKeyPassHook);\n>>    \n>>     which is useful since OpenSSL likes to open /dev/tty to prompt for a\n>> password,\n>>     so even closing stdin won't stop it blocking if there's no user\n>> input available.\n>>     Applications may also override or extend SSL password fetching with\n>> their own\n>>     callback.\n>>    \n>>     There is deliberately no environment variable equivalent for the\n>> sslpassword\n>>     option.\n>>\n>>\n\nI should also mention that this patch provides for support for DER\nformat certificates and keys.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 31 Oct 2019 19:27:52 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "On Fri, 1 Nov 2019 at 07:27, Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nwrote:\n\n>\n> On 10/31/19 6:34 PM, Andrew Dunstan wrote:\n> > This time with attachment.\n> >\n> >\n> > On 10/31/19 6:33 PM, Andrew Dunstan wrote:\n> >> This patch provides for an sslpassword parameter for libpq, and a hook\n> >> that a client can fill in for a callback function to set the password.\n> >>\n> >>\n> >> This provides similar facilities to those already available in the JDBC\n> >> driver.\n> >>\n> >>\n> >> There is also a function to fetch the sslpassword from the connection\n> >> parameters, in the same way that other settings can be fetched.\n> >>\n> >>\n> >> This is mostly the excellent work of my colleague Craig Ringer, with a\n> >> few embellishments from me.\n> >>\n> >>\n> >> Here are his notes:\n> >>\n> >>\n> >> Allow libpq to non-interactively decrypt client certificates that\n> >> are stored\n> >> encrypted by adding a new \"sslpassword\" connection option.\n> >>\n> >> The sslpassword option offers a 
middle ground between a cleartext\n> >> key and\n> >> setting up advanced key mangement via openssl engines, PKCS#11, USB\n> >> crypto\n> >> offload and key escrow, etc.\n> >>\n> >> Previously use of encrypted client certificate keys only worked if\n> >> the user\n> >> could enter the key's password interactively on stdin, in response\n> >> to openssl's\n> >> default prompt callback:\n> >>\n> >> Enter PEM passhprase:\n> >>\n> >> That's infesible in many situations, especially things like use from\n> >> postgres_fdw.\n> >>\n> >> This change also allows admins to prevent libpq from ever prompting\n> >> for a\n> >> password by calling:\n> >>\n> >> PQsetSSLKeyPassHook(PQdefaultSSLKeyPassHook);\n> >>\n> >> which is useful since OpenSSL likes to open /dev/tty to prompt for a\n> >> password,\n> >> so even closing stdin won't stop it blocking if there's no user\n> >> input available.\n> >> Applications may also override or extend SSL password fetching with\n> >> their own\n> >> callback.\n> >>\n> >> There is deliberately no environment variable equivalent for the\n> >> sslpassword\n> >> option.\n> >>\n> >>\n>\n> I should also mention that this patch provides for support for DER\n> format certificates and keys.\n>\n>\nYep, that was a trivial change I rolled into it.\n\nFWIW, this is related to two other patches: the patch to allow passwordless\nfdw connections with explicit superuser approval, and the patch to allow\nsslkey/sslpassword to be set as user mapping options in postgres_fdw .\nTogether all three patches make it possible to use SSL client certificates\nto manage authentication in postgres_fdw user mappings.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Fri, 1 Nov 2019 at 07:27, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\nOn 10/31/19 6:34 PM, Andrew Dunstan wrote:\n> This time with attachment.\n>\n>\n> On 10/31/19 6:33 PM, Andrew Dunstan wrote:\n>> This patch provides for an sslpassword 
parameter for libpq, and a hook\n>> that a client can fill in for a callback function to set the password.\n>>\n>>\n>> This provides similar facilities to those already available in the JDBC\n>> driver.\n>>\n>>\n>> There is also a function to fetch the sslpassword from the connection\n>> parameters, in the same way that other settings can be fetched.\n>>\n>>\n>> This is mostly the excellent work of my colleague Craig Ringer, with a\n>> few embellishments from me.\n>>\n>>\n>> Here are his notes:\n>>\n>>\n>>     Allow libpq to non-interactively decrypt client certificates that\n>> are stored\n>>     encrypted by adding a new \"sslpassword\" connection option.\n>>    \n>>     The sslpassword option offers a middle ground between a cleartext\n>> key and\n>>     setting up advanced key mangement via openssl engines, PKCS#11, USB\n>> crypto\n>>     offload and key escrow, etc.\n>>    \n>>     Previously use of encrypted client certificate keys only worked if\n>> the user\n>>     could enter the key's password interactively on stdin, in response\n>> to openssl's\n>>     default prompt callback:\n>>    \n>>         Enter PEM passhprase:\n>>    \n>>     That's infesible in many situations, especially things like use from\n>>     postgres_fdw.\n>>    \n>>     This change also allows admins to prevent libpq from ever prompting\n>> for a\n>>     password by calling:\n>>    \n>>         PQsetSSLKeyPassHook(PQdefaultSSLKeyPassHook);\n>>    \n>>     which is useful since OpenSSL likes to open /dev/tty to prompt for a\n>> password,\n>>     so even closing stdin won't stop it blocking if there's no user\n>> input available.\n>>     Applications may also override or extend SSL password fetching with\n>> their own\n>>     callback.\n>>    \n>>     There is deliberately no environment variable equivalent for the\n>> sslpassword\n>>     option.\n>>\n>>\n\nI should also mention that this patch provides for support for DER\nformat certificates and keys.Yep, that was a trivial change I 
rolled into it.FWIW, this is related to two other patches: the patch to allow passwordless fdw connections with explicit superuser approval, and the patch to allow sslkey/sslpassword to be set as user mapping options in postgres_fdw . Together all three patches make it possible to use SSL client certificates to manage authentication in postgres_fdw user mappings.--  Craig Ringer                   http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Sun, 10 Nov 2019 17:47:24 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "On Sun, Nov 10, 2019 at 05:47:24PM +0800, Craig Ringer wrote:\n> Yep, that was a trivial change I rolled into it.\n> \n> FWIW, this is related to two other patches: the patch to allow passwordless fdw\n> connections with explicit superuser approval, and the patch to allow sslkey/\n> sslpassword to be set as user mapping options in postgres_fdw . Together all\n> three patches make it possible to use SSL client certificates to manage\n> authentication in postgres_fdw user mappings.\n\nOh, nice, greatly needed!\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 12 Nov 2019 22:10:38 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "On 10/31/19 7:27 PM, Andrew Dunstan wrote:\n> On 10/31/19 6:34 PM, Andrew Dunstan wrote:\n>> This time with attachment.\n>>\n>>\n>> On 10/31/19 6:33 PM, Andrew Dunstan wrote:\n>>> This patch provides for an sslpassword parameter for libpq, and a hook\n>>> that a client can fill in for a callback function to set the password.\n>>>\n>>>\n>>> This provides similar facilities to those already available in the JDBC\n>>> driver.\n>>>\n>>>\n>>> There is also a function to fetch the sslpassword from the connection\n>>> parameters, in the same way that other settings can be fetched.\n>>>\n>>>\n>>> This is mostly the excellent work of my colleague Craig Ringer, with a\n>>> few embellishments from me.\n>>>\n>>>\n>>> Here are his notes:\n>>>\n>>>\n>>>     Allow libpq to non-interactively decrypt client certificates that\n>>> are stored\n>>>     encrypted by adding a new \"sslpassword\" connection option.\n>>>    \n>>>     The sslpassword option offers a middle ground between a cleartext\n>>> key and\n>>>     setting up advanced key mangement via openssl engines, PKCS#11, USB\n>>> crypto\n>>>     offload and key escrow, etc.\n>>>    \n>>>     Previously use of encrypted client certificate keys only worked if\n>>> the user\n>>>     could enter the key's password interactively on stdin, in response\n>>> to openssl's\n>>>     default prompt callback:\n>>>    \n>>>         Enter PEM passhprase:\n>>>    \n>>>     That's infesible in many situations, especially things like use from\n>>>     postgres_fdw.\n>>>    \n>>>     This change also allows admins to prevent libpq from ever prompting\n>>> for a\n>>>     password by calling:\n>>>    \n>>>         PQsetSSLKeyPassHook(PQdefaultSSLKeyPassHook);\n>>>    \n>>>     which is useful 
since OpenSSL likes to open /dev/tty to prompt for a\n>>> password,\n>>>     so even closing stdin won't stop it blocking if there's no user\n>>> input available.\n>>>     Applications may also override or extend SSL password fetching with\n>>> their own\n>>>     callback.\n>>>    \n>>>     There is deliberately no environment variable equivalent for the\n>>> sslpassword\n>>>     option.\n>>>\n>>>\n> I should also mention that this patch provides for support for DER\n> format certificates and keys.\n>\n>\n\n\nHere's an updated version of the patch, adjusted to the now committed\nchanges to TestLib.pm.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 25 Nov 2019 16:09:08 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "On 11/25/19 4:09 PM, Andrew Dunstan wrote:\n> On 10/31/19 7:27 PM, Andrew Dunstan wrote:\n>> On 10/31/19 6:34 PM, Andrew Dunstan wrote:\n>>> This time with attachment.\n>>>\n>>>\n>>> On 10/31/19 6:33 PM, Andrew Dunstan wrote:\n>>>> This patch provides for an sslpassword parameter for libpq, and a hook\n>>>> that a client can fill in for a callback function to set the password.\n>>>>\n>>>>\n>>>> This provides similar facilities to those already available in the JDBC\n>>>> driver.\n>>>>\n>>>>\n>>>> There is also a function to fetch the sslpassword from the connection\n>>>> parameters, in the same way that other settings can be fetched.\n>>>>\n>>>>\n>>>> This is mostly the excellent work of my colleague Craig Ringer, with a\n>>>> few embellishments from me.\n>>>>\n>>>>\n>>>> Here are his notes:\n>>>>\n>>>>\n>>>>     Allow libpq to non-interactively decrypt client certificates that\n>>>> are stored\n>>>>     encrypted by adding a new \"sslpassword\" connection option.\n>>>>    \n>>>>     The sslpassword 
option offers a middle ground between a cleartext\n>>>> key and\n>>>>     setting up advanced key mangement via openssl engines, PKCS#11, USB\n>>>> crypto\n>>>>     offload and key escrow, etc.\n>>>>    \n>>>>     Previously use of encrypted client certificate keys only worked if\n>>>> the user\n>>>>     could enter the key's password interactively on stdin, in response\n>>>> to openssl's\n>>>>     default prompt callback:\n>>>>    \n>>>>         Enter PEM passhprase:\n>>>>    \n>>>>     That's infesible in many situations, especially things like use from\n>>>>     postgres_fdw.\n>>>>    \n>>>>     This change also allows admins to prevent libpq from ever prompting\n>>>> for a\n>>>>     password by calling:\n>>>>    \n>>>>         PQsetSSLKeyPassHook(PQdefaultSSLKeyPassHook);\n>>>>    \n>>>>     which is useful since OpenSSL likes to open /dev/tty to prompt for a\n>>>> password,\n>>>>     so even closing stdin won't stop it blocking if there's no user\n>>>> input available.\n>>>>     Applications may also override or extend SSL password fetching with\n>>>> their own\n>>>>     callback.\n>>>>    \n>>>>     There is deliberately no environment variable equivalent for the\n>>>> sslpassword\n>>>>     option.\n>>>>\n>>>>\n>> I should also mention that this patch provides for support for DER\n>> format certificates and keys.\n>>\n>>\n>\n> Here's an updated version of the patch, adjusted to the now committed\n> changes to TestLib.pm.\n>\n>\n\n\nHere's an update now we have backed out the TestLib changes. 
The tests\nthat need a pty are skipped.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 27 Nov 2019 19:06:10 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, failed\n\nHi Andrew,\r\n\r\nI've reviewed your \"libpq sslpassword parameter and callback function\" patch (0001-libpq-sslpassword-der-support.patch), and only found a few minor things (otherwise it looked good to me):\r\n\r\n1) There's a few trailing white-space warnings on patch application (from where you modified to skip 2 of the tests):\r\ngit apply 0001-libpq-sslpassword-der-support.patch\r\n0001-libpq-sslpassword-der-support.patch:649: trailing whitespace.\r\n\t# so they don't hang. For now they are not performed. \r\n0001-libpq-sslpassword-der-support.patch:659: trailing whitespace.\r\n\t\r\nwarning: 2 lines add whitespace errors.\r\n\r\n\r\n2) src/interfaces/libpq/libpq-fe.h\r\nThe following portion of the comment should be removed.\r\n\r\n+ * 2ndQPostgres extension. 
If you need to be compatible with unpatched libpq\r\n+ * you must dlsym() these.\r\n\r\n3) Documentation for the \"PQsslpassword\" function should be added to the libpq \"33.2 Connection Status Functions\" section.\r\n\r\n\r\nI made the following notes about how/what I reviewed and tested:\r\n\r\n- Applied patch and built Postgres (--with-openssl --enable-tap-tests), checked build output\r\n- Checked patch code modifications (format, logic, memory usage, efficiency, corner cases etc.)\r\n- Built documentation and checked updated portions (format, grammar, details, completeness etc.)\r\n- Checked test updates \r\n- Ran updated contrib/dblink tests - confirmed all passed\r\n- Ran updated SSL (TAP) tests - confirmed all passed\r\n- Ran \"make installcheck-world\", as per review requirements\r\n- Wrote small libpq-based app to test:\r\n - new APIs (PQsslpassword, PQsetSSLKeyPassHook, PQgetSSLKeyPassHook, PQdefaultSSLKeyPassHook)\r\n - passphrase-protected key with/without patch\r\n - patch with/without new key password callack\r\n - patch and certificate with/without pass phrase protection on key\r\n - default callback, callback delegation\r\n - PEM/DER keys\r\n\r\n\r\nRegards,\r\nGreg", "msg_date": "Fri, 29 Nov 2019 03:25:45 +0000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "On 11/28/19 10:25 PM, Greg Nancarrow wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, failed\n>\n> Hi Andrew,\n>\n> I've reviewed your \"libpq sslpassword parameter and callback function\" patch (0001-libpq-sslpassword-der-support.patch), and only found a few minor things (otherwise it looked good to me):\n>\n> 1) There's a few trailing white-space warnings on patch application (from where you modified 
to skip 2 of the tests):\n> git apply 0001-libpq-sslpassword-der-support.patch\n> 0001-libpq-sslpassword-der-support.patch:649: trailing whitespace.\n> \t# so they don't hang. For now they are not performed. \n> 0001-libpq-sslpassword-der-support.patch:659: trailing whitespace.\n> \t\n> warning: 2 lines add whitespace errors.\n>\n>\n> 2) src/interfaces/libpq/libpq-fe.h\n> The following portion of the comment should be removed.\n>\n> + * 2ndQPostgres extension. If you need to be compatible with unpatched libpq\n> + * you must dlsym() these.\n>\n> 3) Documentation for the \"PQsslpassword\" function should be added to the libpq \"33.2 Connection Status Functions\" section.\n>\n>\n> I made the following notes about how/what I reviewed and tested:\n>\n> - Applied patch and built Postgres (--with-openssl --enable-tap-tests), checked build output\n> - Checked patch code modifications (format, logic, memory usage, efficiency, corner cases etc.)\n> - Built documentation and checked updated portions (format, grammar, details, completeness etc.)\n> - Checked test updates \n> - Ran updated contrib/dblink tests - confirmed all passed\n> - Ran updated SSL (TAP) tests - confirmed all passed\n> - Ran \"make installcheck-world\", as per review requirements\n> - Wrote small libpq-based app to test:\n> - new APIs (PQsslpassword, PQsetSSLKeyPassHook, PQgetSSLKeyPassHook, PQdefaultSSLKeyPassHook)\n> - passphrase-protected key with/without patch\n> - patch with/without new key password callack\n> - patch and certificate with/without pass phrase protection on key\n> - default callback, callback delegation\n> - PEM/DER keys\n>\n>\n\n\nThanks, nice thorough review.\n\n\nHere's an updated patch that I think fixes all the things you mentioned.\nI plan to commit this tomorrow.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 29 Nov 2019 09:27:02 -0500", "msg_from": "Andrew 
Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "On Fri, Nov 29, 2019 at 09:27:02AM -0500, Andrew Dunstan wrote:\n> On 11/28/19 10:25 PM, Greg Nancarrow wrote:\n> > 3) Documentation for the \"PQsslpassword\" function should be added to the libpq \"33.2 Connection Status Functions\" section.\n\n> I plan to commit this tomorrow.\n\nThe PQsslpassword() function retrieves a connection parameter value, which one\ncan retrieve with PQconninfo(). Since introducing PQconninfo(), we have not\nadded any per-parameter accessor functions. Would you remove PQsslpassword()?\n\n\n", "msg_date": "Fri, 6 Dec 2019 07:57:15 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: libpq sslpassword parameter and callback function" }, { "msg_contents": "\nOn 12/6/19 2:57 AM, Noah Misch wrote:\n> On Fri, Nov 29, 2019 at 09:27:02AM -0500, Andrew Dunstan wrote:\n>> On 11/28/19 10:25 PM, Greg Nancarrow wrote:\n>>> 3) Documentation for the \"PQsslpassword\" function should be added to the libpq \"33.2 Connection Status Functions\" section.\n>> I plan to commit this tomorrow.\n> The PQsslpassword() function retrieves a connection parameter value, which one\n> can retrieve with PQconninfo(). Since introducing PQconninfo(), we have not\n> added any per-parameter accessor functions. Would you remove PQsslpassword()?\n\n\n\n*sigh* ok\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 7 Dec 2019 08:51:20 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: libpq sslpassword parameter and callback function" } ]
[ { "msg_contents": "Hello hackers,\n\nPlease feel free to edit this new page, which I'd like to use to keep\ntrack of observations, ideas and threads relating to hash joins.\n\nhttps://wiki.postgresql.org/wiki/Hash_Join\n\n\n", "msg_date": "Fri, 1 Nov 2019 10:16:35 +1100", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "A wiki page to track hash join projects and ideas" } ]
[ { "msg_contents": "Hi!\n\nOur customer faced with issue, when index is invisible after creation.\nThe reproducible case is following.\n\n $ psql db2\n # begin;\n # select txid_current();\n$ psql db1\n# select i as id, 0 as v into t from generate_series(1, 100000) i;\n# create unique index idx on t (id);\n# update t set v = v + 1 where id = 10000;\n# update t set v = v + 1 where id = 10000;\n# update t set v = v + 1 where id = 10000;\n# update t set v = v + 1 where id = 10000;\n# update t set v = v + 1 where id = 10000;\n# drop index idx;\n# create unique index idx on t (id);\n# explain analyze select v from t where id = 10000;\n\nThere is no issue if there is no parallel session in database db2.\nThe fact that index visibility depends on open transaction in\ndifferent database is ridiculous for users.\n\nThis happens so, because we're checking that there is no broken HOT\nchains after index creation by comparison pg_index.xmin and\nTransactionXmin. So, we check that pg_index.xmin is in the past for\ncurrent transaction in lossy way by comparison just xmins. Attached\npatch changes this check to XidInMVCCSnapshot().\n\nWith patch the issue is gone. My doubt about this patch is that it\nchanges check with TransactionXmin to check with GetActiveSnapshot(),\nwhich might be more recent. 
However, query shouldn't be executer with\nolder snapshot than one it was planned with.\n\nAny thoughts?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 1 Nov 2019 02:50:39 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Improve checking for pg_index.xmin" }, { "msg_contents": "On 01/11/2019 01:50, Alexander Korotkov wrote:\n> Hi!\n> \n> Our customer faced with issue, when index is invisible after creation.\n> The reproducible case is following.\n> \n> $ psql db2\n> # begin;\n> # select txid_current();\n> $ psql db1\n> # select i as id, 0 as v into t from generate_series(1, 100000) i;\n> # create unique index idx on t (id);\n> # update t set v = v + 1 where id = 10000;\n> # update t set v = v + 1 where id = 10000;\n> # update t set v = v + 1 where id = 10000;\n> # update t set v = v + 1 where id = 10000;\n> # update t set v = v + 1 where id = 10000;\n> # drop index idx;\n> # create unique index idx on t (id);\n> # explain analyze select v from t where id = 10000;\n> \n> There is no issue if there is no parallel session in database db2.\n> The fact that index visibility depends on open transaction in\n> different database is ridiculous for users.\n> \n> This happens so, because we're checking that there is no broken HOT\n> chains after index creation by comparison pg_index.xmin and\n> TransactionXmin. So, we check that pg_index.xmin is in the past for\n> current transaction in lossy way by comparison just xmins. Attached\n> patch changes this check to XidInMVCCSnapshot().\n> \n> With patch the issue is gone. My doubt about this patch is that it\n> changes check with TransactionXmin to check with GetActiveSnapshot(),\n> which might be more recent. However, query shouldn't be executer with\n> older snapshot than one it was planned with.\n\nHmm. 
Maybe you could construct a case like that with a creative mix of \nstable and volatile functions? Using GetOldestSnapshot() would be safer.\n\n- Heikki\n\n\n", "msg_date": "Wed, 8 Jan 2020 11:28:49 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Improve checking for pg_index.xmin" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 01/11/2019 01:50, Alexander Korotkov wrote:\n>> This happens so, because we're checking that there is no broken HOT\n>> chains after index creation by comparison pg_index.xmin and\n>> TransactionXmin. So, we check that pg_index.xmin is in the past for\n>> current transaction in lossy way by comparison just xmins. Attached\n>> patch changes this check to XidInMVCCSnapshot().\n>> With patch the issue is gone. My doubt about this patch is that it\n>> changes check with TransactionXmin to check with GetActiveSnapshot(),\n>> which might be more recent. However, query shouldn't be executer with\n>> older snapshot than one it was planned with.\n\n> Hmm. Maybe you could construct a case like that with a creative mix of \n> stable and volatile functions? Using GetOldestSnapshot() would be safer.\n\nI really wonder if this is safe at all.\n\n(1) Can we assume that the query will be executed with same-or-newer\nsnapshot as what was used by the planner? There's no such constraint\nin the plancache, I'm pretty sure.\n\n(2) Is committed-ness of the index-creating transaction actually\nsufficient to ensure that none of the broken HOT chains it saw are\na problem for the onlooker transaction? This is, at best, really\nun-obvious. 
Some of those HOT chains could involve xacts that were\nstill not committed when the index build finished, I believe.\n\n(3) What if the index was made with CREATE INDEX CONCURRENTLY ---\nwhich xid is actually on the pg_index row, and how does that factor\ninto (1) and (2)?\n\nOn the whole I don't find the risk/reward tradeoff of this looking\npromising. Even if it works reliably, I think the situations where\nit'll help a lot are a bit artificial.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jan 2020 08:37:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve checking for pg_index.xmin" }, { "msg_contents": "On Wed, Jan 8, 2020 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > On 01/11/2019 01:50, Alexander Korotkov wrote:\n> >> This happens so, because we're checking that there is no broken HOT\n> >> chains after index creation by comparison pg_index.xmin and\n> >> TransactionXmin. So, we check that pg_index.xmin is in the past for\n> >> current transaction in lossy way by comparison just xmins. Attached\n> >> patch changes this check to XidInMVCCSnapshot().\n> >> With patch the issue is gone. My doubt about this patch is that it\n> >> changes check with TransactionXmin to check with GetActiveSnapshot(),\n> >> which might be more recent. However, query shouldn't be executer with\n> >> older snapshot than one it was planned with.\n>\n> > Hmm. Maybe you could construct a case like that with a creative mix of\n> > stable and volatile functions? Using GetOldestSnapshot() would be safer.\n>\n> I really wonder if this is safe at all.\n>\n> (1) Can we assume that the query will be executed with same-or-newer\n> snapshot as what was used by the planner? 
There's no such constraint\n> in the plancache, I'm pretty sure.\n>\n> (2) Is committed-ness of the index-creating transaction actually\n> sufficient to ensure that none of the broken HOT chains it saw are\n> a problem for the onlooker transaction? This is, at best, really\n> un-obvious. Some of those HOT chains could involve xacts that were\n> still not committed when the index build finished, I believe.\n>\n> (3) What if the index was made with CREATE INDEX CONCURRENTLY ---\n> which xid is actually on the pg_index row, and how does that factor\n> into (1) and (2)?\n\nThank you for pointing. I'll investigate these issues in detail.\n\n> On the whole I don't find the risk/reward tradeoff of this looking\n> promising. Even if it works reliably, I think the situations where\n> it'll help a lot are a bit artificial.\n\nI can't agree that these situations are artificial. For me, it seems\nnatural that user expects index to be visible once it's created.\nAlso, we always teach users that long-running transactions are evil,\nbut nevertheless they are frequent in real life. So, it doesn't seem\nunlikely that one expects index to become visible, while long\ntransaction is running in parallel. This particular case was reported\nby our customer. 
After investigation I was surprised how rare such\ncases are reported...\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 12 Jan 2020 01:02:50 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Improve checking for pg_index.xmin" }, { "msg_contents": "On Sun, Jan 12, 2020 at 3:33 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Wed, Jan 8, 2020 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > > On 01/11/2019 01:50, Alexander Korotkov wrote:\n> > >> This happens so, because we're checking that there is no broken HOT\n> > >> chains after index creation by comparison pg_index.xmin and\n> > >> TransactionXmin. So, we check that pg_index.xmin is in the past for\n> > >> current transaction in lossy way by comparison just xmins. Attached\n> > >> patch changes this check to XidInMVCCSnapshot().\n> > >> With patch the issue is gone. My doubt about this patch is that it\n> > >> changes check with TransactionXmin to check with GetActiveSnapshot(),\n> > >> which might be more recent. However, query shouldn't be executer with\n> > >> older snapshot than one it was planned with.\n> >\n> > > Hmm. Maybe you could construct a case like that with a creative mix of\n> > > stable and volatile functions? Using GetOldestSnapshot() would be safer.\n> >\n> > I really wonder if this is safe at all.\n> >\n> > (1) Can we assume that the query will be executed with same-or-newer\n> > snapshot as what was used by the planner? There's no such constraint\n> > in the plancache, I'm pretty sure.\n> >\n> > (2) Is committed-ness of the index-creating transaction actually\n> > sufficient to ensure that none of the broken HOT chains it saw are\n> > a problem for the onlooker transaction? This is, at best, really\n> > un-obvious. 
Some of those HOT chains could involve xacts that were\n> > still not committed when the index build finished, I believe.\n> >\n> > (3) What if the index was made with CREATE INDEX CONCURRENTLY ---\n> > which xid is actually on the pg_index row, and how does that factor\n> > into (1) and (2)?\n>\n> Thank you for pointing. I'll investigate these issues in detail.\n>\n\nAre you planning to work on this patch [1] for current CF? If not,\nthen I think it is better to either move this to the next CF or mark\nit as RWF.\n\n[1] - https://commitfest.postgresql.org/27/2337/\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Mar 2020 18:08:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve checking for pg_index.xmin" }, { "msg_contents": "On Tue, Mar 24, 2020 at 3:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Sun, Jan 12, 2020 at 3:33 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> >\n> > On Wed, Jan 8, 2020 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > > > On 01/11/2019 01:50, Alexander Korotkov wrote:\n> > > >> This happens so, because we're checking that there is no broken HOT\n> > > >> chains after index creation by comparison pg_index.xmin and\n> > > >> TransactionXmin. So, we check that pg_index.xmin is in the past for\n> > > >> current transaction in lossy way by comparison just xmins. Attached\n> > > >> patch changes this check to XidInMVCCSnapshot().\n> > > >> With patch the issue is gone. My doubt about this patch is that it\n> > > >> changes check with TransactionXmin to check with GetActiveSnapshot(),\n> > > >> which might be more recent. However, query shouldn't be executer with\n> > > >> older snapshot than one it was planned with.\n> > >\n> > > > Hmm. Maybe you could construct a case like that with a creative mix of\n> > > > stable and volatile functions? 
Using GetOldestSnapshot() would be safer.\n> > >\n> > > I really wonder if this is safe at all.\n> > >\n> > > (1) Can we assume that the query will be executed with same-or-newer\n> > > snapshot as what was used by the planner? There's no such constraint\n> > > in the plancache, I'm pretty sure.\n> > >\n> > > (2) Is committed-ness of the index-creating transaction actually\n> > > sufficient to ensure that none of the broken HOT chains it saw are\n> > > a problem for the onlooker transaction? This is, at best, really\n> > > un-obvious. Some of those HOT chains could involve xacts that were\n> > > still not committed when the index build finished, I believe.\n> > >\n> > > (3) What if the index was made with CREATE INDEX CONCURRENTLY ---\n> > > which xid is actually on the pg_index row, and how does that factor\n> > > into (1) and (2)?\n> >\n> > Thank you for pointing. I'll investigate these issues in detail.\n> >\n>\n> Are you planning to work on this patch [1] for current CF? If not,\n> then I think it is better to either move this to the next CF or mark\n> it as RWF.\n\nI didn't manage to investigate this subject and provide new patch\nversion. I'm marking this RWF.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 25 Mar 2020 01:27:22 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Improve checking for pg_index.xmin" } ]
[ { "msg_contents": "This patch achieves  $SUBJECT and also provides some testing of the\nsslpassword setting.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 31 Oct 2019 19:54:41 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "On Thu, Oct 31, 2019 at 07:54:41PM -0400, Andrew Dunstan wrote:\n> This patch achieves $SUBJECT and also provides some testing of the\n> sslpassword setting.\n\nThe patch does not apply anymore, so a rebase is needed. As it has\nnot been reviewed, I am moving it to next CF, waiting on author.\n--\nMichael", "msg_date": "Sun, 1 Dec 2019 10:48:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "\nOn 11/30/19 8:48 PM, Michael Paquier wrote:\n> On Thu, Oct 31, 2019 at 07:54:41PM -0400, Andrew Dunstan wrote:\n>> This patch achieves $SUBJECT and also provides some testing of the\n>> sslpassword setting.\n> The patch does not apply anymore, so a rebase is needed. As it has\n> not been reviewed, I am moving it to next CF, waiting on author.\n\n\n\nThat's OK. This patch is dependent, as it always has been, on the patch\nto allow passwordless user mappings for postgres_fdw. 
I hope to commit\nthat soon, but I'd prefer some signoff from prominent hackers, as I\ndon't want to go too far down this road and then encounter a bunch of\nobjections.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 1 Dec 2019 18:12:14 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "On 2019-12-02 00:12, Andrew Dunstan wrote:\n> On 11/30/19 8:48 PM, Michael Paquier wrote:\n>> On Thu, Oct 31, 2019 at 07:54:41PM -0400, Andrew Dunstan wrote:\n>>> This patch achieves $SUBJECT and also provides some testing of the\n>>> sslpassword setting.\n>> The patch does not apply anymore, so a rebase is needed. As it has\n>> not been reviewed, I am moving it to next CF, waiting on author.\n> \n> That's OK. This patch is dependent, as it always has been, on the patch\n> to allow passwordless user mappings for postgres_fdw. I hope to commit\n> that soon, but I'd prefer some signoff from prominent hackers, as I\n> don't want to go too far down this road and then encounter a bunch of\n> objections.\n\nThe prerequisite patch has been committed, so please see about getting \nthis patch moving forward.\n\nThe patch is very small, of course, but I don't understand the \"bit of a \nhack\" comment. 
Could you explain that?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jan 2020 10:06:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "On Wed, Jan 8, 2020 at 7:36 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-12-02 00:12, Andrew Dunstan wrote:\n> > On 11/30/19 8:48 PM, Michael Paquier wrote:\n> >> On Thu, Oct 31, 2019 at 07:54:41PM -0400, Andrew Dunstan wrote:\n> >>> This patch achieves $SUBJECT and also provides some testing of the\n> >>> sslpassword setting.\n> >> The patch does not apply anymore, so a rebase is needed. As it has\n> >> not been reviewed, I am moving it to next CF, waiting on author.\n> >\n> > That's OK. This patch is dependent, as it always has been, on the patch\n> > to allow passwordless user mappings for postgres_fdw. I hope to commit\n> > that soon, but I'd prefer some signoff from prominent hackers, as I\n> > don't want to go too far down this road and then encounter a bunch of\n> > objections.\n>\n> The prerequisite patch has been committed, so please see about getting\n> this patch moving forward.\n>\n> The patch is very small, of course, but I don't understand the \"bit of a\n> hack\" comment. 
Could you explain that?\n>\n\n\nI have rewritten the comment, and committed the feature.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jan 2020 18:44:33 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "Re: Andrew Dunstan 2019-11-01 <f941b95e-27ad-cb5c-2495-13c44f90b1bc@2ndQuadrant.com>\n> \t\t{\"password_required\", UserMappingRelationId, false},\n> +\t\t/*\n> +\t\t * Extra room for the user mapping copies of sslcert and sslkey. These\n> +\t\t * are really libpq options but we repeat them here to allow them to\n> +\t\t * appear in both foreign server context (when we generate libpq\n> +\t\t * options) and user mapping context (from here). Bit of a hack\n> +\t\t * putting this in \"non_libpq_options\".\n> +\t\t */\n> +\t\t{\"sslcert\", UserMappingRelationId, true},\n> +\t\t{\"sslkey\", UserMappingRelationId, true},\n\nNice feature, we were actually looking for exactly this yesterday.\n\nI have some concerns about security, though. It's true that the\nsslcert/sslkey options can only be set/modified by superusers when\n\"password_required\" is set. But when password_required is not set, any\nuser can create user mappings that reference arbitrary files on the\nserver filesystem. 
I believe the options are still used in that case\nfor creating connections, even when that means the remote server isn't\nset up for cert auth, which needs password_required=false to succeed.\n\nIn short, I believe these options need explicit superuser checks.\n\nChristoph\n\n\n", "msg_date": "Thu, 9 Jan 2020 11:30:14 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "Re: To Andrew Dunstan 2020-01-09 <20200109103014.GA4192@msg.df7cb.de>\n> sslcert/sslkey options can only be set/modified by superusers when\n> \"password_required\" is set. But when password_required is not set, any\n> user and create user mappings that reference arbitrary files on the\n> server filesystem.\n\n(A nice addition here which would avoid the problems would be the\npossibility to pass in the certificates as strings, but that needs\nsupport in libpq.)\n\nChristoph\n\n\n", "msg_date": "Thu, 9 Jan 2020 11:45:11 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "Re: To Andrew Dunstan 2020-01-09 <20200109103014.GA4192@msg.df7cb.de>\n> I believe the options are still used in that case\n> for creating connections, even when that means the remote server isn't\n> set up for cert auth, which needs password_required=false to succeed.\n\nThey are indeed:\n\nstat(\"/var/lib/postgresql/.postgresql/root.crt\", 0x7ffcff3e2bb0) = -1 ENOENT (Datei oder Verzeichnis nicht gefunden)\nstat(\"/foo\", 0x7ffcff3e2bb0) = -1 ENOENT (Datei oder Verzeichnis nicht gefunden)\n ^^^^ sslcert\n\nI'm not sure if that could be exploited in any way, but let's just\nforbid it.\n\nChristoph\n\n\n", "msg_date": "Thu, 9 Jan 2020 13:48:55 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in 
postgres_fdw user mappings" }, { "msg_contents": "On Thu, Jan 9, 2020 at 5:30 AM Christoph Berg <myon@debian.org> wrote:\n> I have some concerns about security, though. It's true that the\n> sslcert/sslkey options can only be set/modified by superusers when\n> \"password_required\" is set. But when password_required is not set, any\n> user and create user mappings that reference arbitrary files on the\n> server filesystem. I believe the options are still used in that case\n> for creating connections, even when that means the remote server isn't\n> set up for cert auth, which needs password_required=false to succeed.\n>\n> In short, I believe these options need explicit superuser checks.\n\nI share the concern about the security issue here. I can't testify to\nwhether Christoph's whole analysis is here, but as a general point,\nnon-superusers can't be allowed to do things that cause the server to\naccess arbitrary local files.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Jan 2020 09:51:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "On Fri, Jan 10, 2020 at 1:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jan 9, 2020 at 5:30 AM Christoph Berg <myon@debian.org> wrote:\n> > I have some concerns about security, though. It's true that the\n> > sslcert/sslkey options can only be set/modified by superusers when\n> > \"password_required\" is set. But when password_required is not set, any\n> > user and create user mappings that reference arbitrary files on the\n> > server filesystem. 
I believe the options are still used in that case\n> > for creating connections, even when that means the remote server isn't\n> > set up for cert auth, which needs password_required=false to succeed.\n> >\n> > In short, I believe these options need explicit superuser checks.\n>\n> I share the concern about the security issue here. I can't testify to\n> whether Christoph's whole analysis is here, but as a general point,\n> non-superusers can't be allowed to do things that cause the server to\n> access arbitrary local files.\n\n\nIt's probably fairly easy to do (c.f. 6136e94dcb). I'm not (yet)\nconvinced that there is any significant security threat here. This\ndoesn't give the user or indeed any postgres code any access to the\ncontents of these files. But if there is a consensus to restrict this\nI'll do it.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jan 2020 08:08:42 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "> On 9 Jan 2020, at 22:38, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n\n> I'm not (yet)\n> convinced that there is any significant security threat here. This\n> doesn't give the user or indeed any postgres code any access to the\n> contents of these files. 
But if there is a consensus to restrict this\n> I'll do it.\n\nI've seen successful exploits made from parts that I in my wildest imagination\ncouldn't think would be useful, so FWIW +1 for adding belts to suspenders and\nrestricting this.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 9 Jan 2020 23:00:59 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On Fri, Jan 10, 2020 at 1:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I share the concern about the security issue here. I can't testify to\n>> whether Christoph's whole analysis is here, but as a general point,\n>> non-superusers can't be allowed to do things that cause the server to\n>> access arbitrary local files.\n\n> It's probably fairly easy to do (c.f. 6136e94dcb). I'm not (yet)\n> convinced that there is any significant security threat here. This\n> doesn't give the user or indeed any postgres code any access to the\n> contents of these files. But if there is a consensus to restrict this\n> I'll do it.\n\nWell, even without access to the file contents, the mere ability to\nprobe the existence of a file is something we don't want unprivileged\nusers to have. And (I suppose) this is enough for that, by looking\nat what error you get back from trying it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jan 2020 17:02:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "On Fri, Jan 10, 2020 at 8:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On Fri, Jan 10, 2020 at 1:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> I share the concern about the security issue here. 
I can't testify to\n> >> whether Christoph's whole analysis is here, but as a general point,\n> >> non-superusers can't be allowed to do things that cause the server to\n> >> access arbitrary local files.\n>\n> > It's probably fairly easy to do (c.f. 6136e94dcb). I'm not (yet)\n> > convinced that there is any significant security threat here. This\n> > doesn't give the user or indeed any postgres code any access to the\n> > contents of these files. But if there is a consensus to restrict this\n> > I'll do it.\n>\n> Well, even without access to the file contents, the mere ability to\n> probe the existence of a file is something we don't want unprivileged\n> users to have. And (I suppose) this is enough for that, by looking\n> at what error you get back from trying it.\n>\n\n\nOK, that's convincing enough. Will do it before long.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jan 2020 08:46:11 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" }, { "msg_contents": "On Fri, 10 Jan 2020 at 06:16, Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nwrote:\n\n> On Fri, Jan 10, 2020 at 8:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > > On Fri, Jan 10, 2020 at 1:21 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> > >> I share the concern about the security issue here. I can't testify to\n> > >> whether Christoph's whole analysis is here, but as a general point,\n> > >> non-superusers can't be allowed to do things that cause the server to\n> > >> access arbitrary local files.\n> >\n> > > It's probably fairly easy to do (c.f. 6136e94dcb). I'm not (yet)\n> > > convinced that there is any significant security threat here. 
This\n> > > doesn't give the user or indeed any postgres code any access to the\n> > > contents of these files. But if there is a consensus to restrict this\n> > > I'll do it.\n> >\n> > Well, even without access to the file contents, the mere ability to\n> > probe the existence of a file is something we don't want unprivileged\n> > users to have.  And (I suppose) this is enough for that, by looking\n> > at what error you get back from trying it.\n> >\n>\n>\n> OK, that's convincing enough. Will do it before long.\n\n\nThanks. I'm 100% convinced the superuser restriction should be imposed. I\ncan imagine there being a risk of leaking file contents in error output\nsuch as parse errors from OpenSSL that we pass on for example. Tricking Pg\ninto reading from a fifo could be problematic too.\n\nI should've applied that restriction from the start, the same way as\npasswordless connections are restricted.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n", "msg_date": "Mon, 20 Jan 2020 16:09:26 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow 'sslkey' and 'sslcert' in postgres_fdw user mappings" } ]
[ { "msg_contents": "Hi,\n\nSometimes you want to answer if a difference between two timestamps is\nlesser than x minutes but you are not sure which timestamp is greater\nthan the other one (to obtain a positive result -- it is not always\npossible). However, if you cannot obtain the absolute value of\nsubtraction, you have to add two conditions.\n\nThe attached patch implements abs function and @ operator for\nintervals. The following example illustrates the use case:\n\npostgres=# create table xpto (a timestamp, b timestamp);\nCREATE TABLE\npostgres=# insert into xpto (a, b) values(now(), now() - interval '1\nday'),(now() - interval '5 hour', now()),(now() + '3 hour', now());\nINSERT 0 3\npostgres=# select *, a - b as t from xpto;\n a | b | t\n----------------------------+----------------------------+-----------\n 2019-10-31 22:43:30.601861 | 2019-10-30 22:43:30.601861 | 1 day\n 2019-10-31 17:43:30.601861 | 2019-10-31 22:43:30.601861 | -05:00:00\n 2019-11-01 01:43:30.601861 | 2019-10-31 22:43:30.601861 | 03:00:00\n(3 rows)\n\npostgres=# select *, a - b as i from xpto where abs(a - b) < interval '12 hour';\n a | b | i\n----------------------------+----------------------------+-----------\n 2019-10-31 17:43:30.601861 | 2019-10-31 22:43:30.601861 | -05:00:00\n 2019-11-01 01:43:30.601861 | 2019-10-31 22:43:30.601861 | 03:00:00\n(2 rows)\n\npostgres=# select @ interval '1 years -2 months 3 days 4 hours -5\nminutes 6.789 seconds' as t;\n t\n-----------------------------\n 10 mons 3 days 03:55:06.789\n(1 row)\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Thu, 31 Oct 2019 23:20:07 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": true, "msg_subject": "abs function for interval" }, { "msg_contents": "Hi,\n\nOn 2019-10-31 23:20:07 -0300, Euler Taveira wrote:\n> diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c\n> 
index 1dc4c820de..a6b8b8c221 100644\n> --- a/src/backend/utils/adt/timestamp.c\n> +++ b/src/backend/utils/adt/timestamp.c\n> @@ -2435,6 +2435,23 @@ interval_cmp(PG_FUNCTION_ARGS)\n> \tPG_RETURN_INT32(interval_cmp_internal(interval1, interval2));\n> }\n>\n> +Datum\n> +interval_abs(PG_FUNCTION_ARGS)\n> +{\n> +\tInterval *interval = PG_GETARG_INTERVAL_P(0);\n> +\tInterval *result;\n> +\n> +\tresult = palloc(sizeof(Interval));\n> +\t*result = *interval;\n> +\n> +\t/* convert all struct Interval members to absolute values */\n> +\tresult->month = (interval->month < 0) ? (-1 * interval->month) : interval->month;\n> +\tresult->day = (interval->day < 0) ? (-1 * interval->day) : interval->day;\n> +\tresult->time = (interval->time < 0) ? (-1 * interval->time) : interval->time;\n> +\n> +\tPG_RETURN_INTERVAL_P(result);\n> +}\n> +\n\nSeveral points:\n\n1) I don't think you can do the < 0 check on an elementwise basis. Your\n code would e.g. make a hash out of abs('1 day -1 second'), by\n inverting the second, but not the day (whereas nothing should be\n done).\n\n It'd probably be easiest to implement this by comparing with a 0\n interval using inteval_lt() or interval_cmp_internal().\n\n2) This will not correctly handle overflows, I believe. What happens if you\n do SELECT abs('-2147483648 days'::interval)? 
You probably should\n reuse interval_um() for this.\n\n\n> --- a/src/test/regress/expected/interval.out\n> +++ b/src/test/regress/expected/interval.out\n> @@ -927,3 +927,11 @@ select make_interval(secs := 7e12);\n> @ 1944444444 hours 26 mins 40 secs\n> (1 row)\n>\n> +-- test absolute operator\n> +set IntervalStyle to postgres;\n> +select @ interval '1 years -2 months 3 days 4 hours -5 minutes 6.789 seconds' as t;\n> + t\n> +-----------------------------\n> + 10 mons 3 days 03:55:06.789\n> +(1 row)\n> +\n> diff --git a/src/test/regress/sql/interval.sql b/src/test/regress/sql/interval.sql\n> index bc5537d1b9..8f9a2bda29 100644\n> --- a/src/test/regress/sql/interval.sql\n> +++ b/src/test/regress/sql/interval.sql\n\n\n> @@ -308,3 +308,7 @@ select make_interval(months := 'NaN'::float::int);\n> select make_interval(secs := 'inf');\n> select make_interval(secs := 'NaN');\n> select make_interval(secs := 7e12);\n> +\n> +-- test absolute operator\n> +set IntervalStyle to postgres;\n> +select @ interval '1 years -2 months 3 days 4 hours -5 minutes 6.789 seconds' as t;\n> --\n> 2.11.0\n\nThis is not even remotely close to enough tests. In your only test abs()\ndoes not change the value, as there's no negative component (the 1 year\n-2 month result in a positive 10 months, and the hours, minutes and\nseconds get folded together too).\n\nAt the very least a few boundary conditions need to be tested (see b)\nabove), a few more complicated cases with different components being\nof different signs, and you need to show the values with and without\napplying abs().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 31 Oct 2019 19:45:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: abs function for interval" }, { "msg_contents": "Em qui, 31 de out de 2019 às 23:45, Andres Freund <andres@anarazel.de> escreveu:\n>\n> 1) I don't think you can do the < 0 check on an elementwise basis. Your\n> code would e.g. 
make a hash out of abs('1 day -1 second'), by\n> inverting the second, but not the day (whereas nothing should be\n> done).\n>\n> It'd probably be easiest to implement this by comparing with a 0\n> interval using inteval_lt() or interval_cmp_internal().\n>\nHmm. Good idea. Let me try it.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Fri, 1 Nov 2019 00:48:50 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": true, "msg_subject": "Re: abs function for interval" }, { "msg_contents": "On Fri, Nov 01, 2019 at 12:48:50AM -0300, Euler Taveira wrote:\n> Hmm. Good idea. Let me try it.\n\nMarked as RwF, as this has not been updated in four weeks. Please\nfeel free to resubmit later once you have an updated version.\n--\nMichael", "msg_date": "Thu, 28 Nov 2019 13:17:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: abs function for interval" } ]
[ { "msg_contents": "Hi,\n\n    Postgres has a global variable `disable_cost`. It is set to the value\n1.0e10.\n\n    This value will be added to the cost of path if related GUC is set off.\nFor example,\n    if enable_nestloop is set off, when planner tries to add nestloop join\npath, it continues\n    to add such path but with a huge cost `disable_cost`.\n\n    But 1.0e10 may not be large enough. I encountered this issue in\nGreenplum (based on Postgres).\nHeikki told me that someone also encountered the same issue on Postgres.\nSo I send it here to\nhave a discussion.\n\n    My issue: I did some spikes and tests on TPC-DS 1 TB data. For\nquery 104, it generates\n    nestloop join even with enable_nestloop set off. And the final plan's\ntotal cost is very huge (about 1e24). But if I enlarge the disable_cost to\n1e30, then the planner will generate a hash join.\n\n    So I guess that disable_cost is not large enough for huge amount of\ndata.\n\n    It is tricky to set disable_cost a huge number. Can we come up with\nbetter solution?\n\n    The following thoughts are from Heikki:\n\n> Aside from not having a large enough disable cost, there's also the\n> fact that the high cost might affect the rest of the plan, if we have to\n> use a plan type that's disabled. For example, if a table doesn't have any\n> indexes, but enable_seqscan is off, we might put the unavoidable Seq Scan\n> on different side of a join than we would with enable_seqscan=on,\n> because of the high cost estimate.\n\n\n\n> I think a more robust way to disable forbidden plan types would be to\n> handle the disabling in add_path(). Instead of having a high disable cost\n> on the Path itself, the comparison add_path() would always consider\n> disabled paths as more expensive than others, regardless of the cost.\n\n\n Any thoughts or ideas on the problem? Thanks!\n\nBest Regards,\nZhenghua Lyu\n", "msg_date": "Fri, 1 Nov 2019 14:42:25 +0800", "msg_from": "Zhenghua Lyu <zlv@pivotal.io>", "msg_from_op": true, "msg_subject": "On disable_cost" }, { "msg_contents": "On Fri, Nov 1, 2019 at 7:42 PM Zhenghua Lyu <zlv@pivotal.io> wrote:\n> It is tricky to set disable_cost a huge number. 
Can we come up with better solution?\n\nWhat happens if you use DBL_MAX?\n\n\n", "msg_date": "Fri, 1 Nov 2019 19:58:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Nov 1, 2019 at 03:42, Zhenghua Lyu <zlv@pivotal.io> wrote:\n>\n> My issue: I did some spikes and tests on TPC-DS 1 TB data. For query 104, it generates\n> nestloop join even with enable_nestloop set off. And the final plan's total cost is very huge (about 1e24). But if I enlarge the disable_cost to 1e30, then the planner will generate a hash join.\n>\n> So I guess that disable_cost is not large enough for huge amount of data.\n>\n> It is tricky to set disable_cost a huge number. Can we come up with better solution?\n>\nIsn't it a case for a GUC disable_cost? As Thomas suggested, DBL_MAX\nupper limit should be sufficient.\n\n> The following thoughts are from Heikki:\n>>\n>> Aside from not having a large enough disable cost, there's also the fact that the high cost might affect the rest of the plan, if we have to use a plan type that's disabled. For example, if a table doesn't have any indexes, but enable_seqscan is off, we might put the unavoidable Seq Scan on different side of a join than we would with enable_seqscan=on, because of the high cost estimate.\n>\n>\n>>\n>> I think a more robust way to disable forbidden plan types would be to handle the disabling in add_path(). Instead of having a high disable cost on the Path itself, the comparison add_path() would always consider disabled paths as more expensive than others, regardless of the cost.\n>\nI'm afraid it is not as cheap as using disable_cost as a node cost. 
Are\nyou proposing to add a new boolean variable in Path struct to handle\nthose cases in compare_path_costs_fuzzily?\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Fri, 1 Nov 2019 11:48:29 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Hi,\n\nOn 2019-11-01 19:58:04 +1300, Thomas Munro wrote:\n> On Fri, Nov 1, 2019 at 7:42 PM Zhenghua Lyu <zlv@pivotal.io> wrote:\n> > It is tricky to set disable_cost a huge number. Can we come up with better solution?\n> \n> What happens if you use DBL_MAX?\n\nThat seems like a bad idea - we add the cost multiple times. And we\nstill want to compare plans that potentially involve that cost, if\nthere's no other way to plan the query.\n\n- Andres\n\n\n", "msg_date": "Fri, 1 Nov 2019 09:00:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Nov 1, 2019 at 12:00 PM Andres Freund <andres@anarazel.de> wrote:\n> That seems like a bad idea - we add the cost multiple times. And we\n> still want to compare plans that potentially involve that cost, if\n> there's no other way to plan the query.\n\nYeah. I kind of wonder if we shouldn't instead (a) skip adding paths\nthat use methods which are disabled and then (b) if we don't end up\nwith any paths for that reloptinfo, try again, ignoring disabling\nGUCs.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 12:22:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "re: coping with adding disable_cost more than once\n\nAnother option would be to have a 2-part Cost structure. 
If disable_cost is\never added to the Cost, then you set a flag recording this. If any plans\nexist that have no disable_costs added to them, then the planner chooses the\nminimum cost among those, otherwise you choose the minimum cost path.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Fri, 1 Nov 2019 09:30:52 -0700 (MST)", "msg_from": "Jim Finnerty <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Hi,\n\nOn 2019-11-01 12:22:06 -0400, Robert Haas wrote:\n> On Fri, Nov 1, 2019 at 12:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > That seems like a bad idea - we add the cost multiple times. And we\n> > still want to compare plans that potentially involve that cost, if\n> > there's no other way to plan the query.\n> \n> Yeah. I kind of wonder if we shouldn't instead (a) skip adding paths\n> that use methods which are disabled and then (b) if we don't end up\n> with any paths for that reloptinfo, try again, ignoring disabling\n> GUCs.\n\nHm. That seems complicated. Is it clear that we'd always notice that we\nhave no plan early enough to know which paths to reconsider? I think\nthere's cases where that'd only happen a few levels up.\n\nAs a first step I'd be inclined to \"just\" adjust disable_cost up to\nsomething like 1.0e12. Unfortunately much higher and and we're getting\ninto the area where the loss of precision starts to be significant\nenough that I'm not sure that we're always careful enough to perform\nmath in the right order (e.g. 1.0e16 + 1 being 1.0e16, and 1e+20 + 1000\nbeing 1e+20). I've seen queries with costs above 1e10 where that costing\nwasn't insane.\n\nAnd then, in a larger patch, go for something like Heikki's proposal\nquoted by Zhenghua Lyu upthread, where we treat 'forbidden' as a\nseparate factor in comparisons of path costs, rather than fudging the\ncost upwards. 
But there's some care to be taken to make sure we don't\nregress performance too much due to the additional logic in\ncompare_path_costs et al.\n\nI'd also be curious to see if there's some other problem with cost\ncalculation here - some of the quoted final costs seem high enough to be\nsuspicious. I'd be curious to see a plan...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Nov 2019 09:43:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Nov 1, 2019 at 12:43 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. That seems complicated. Is it clear that we'd always notice that we\n> have no plan early enough to know which paths to reconsider? I think\n> there's cases where that'd only happen a few levels up.\n\nYeah, there could be problems of that kind. I think if a baserel has\nno paths, then we know right away that we've got a problem, but for\njoinrels it might be more complicated.\n\n> As a first step I'd be inclined to \"just\" adjust disable_cost up to\n> something like 1.0e12. Unfortunately much higher and and we're getting\n> into the area where the loss of precision starts to be significant\n> enough that I'm not sure that we're always careful enough to perform\n> math in the right order (e.g. 1.0e16 + 1 being 1.0e16, and 1e+20 + 1000\n> being 1e+20). I've seen queries with costs above 1e10 where that costing\n> wasn't insane.\n\nWe've done that before and we can do it again. But we're going to need\nto have something better eventually, I think, not just keep kicking\nthe can down the road.\n\nAnother point to consider here is that in some cases we could really\njust skip generating certain paths altogether. We already do this for\nhash joins: if we're planning a join and enable_hashjoin is disabled,\nwe just don't generate hash join paths at all, except for full joins,\nwhere there might be no other legal method. 
As this example shows,\nthis cannot be applied in all cases, but maybe we could do it more\nwidely than we do today. I'm not sure how beneficial that technique\nwould be, though, because it doesn't seem like it's quite enough to\nsolve this problem by itself.\n\nYet another approach would be to divide the cost into two parts, a\n\"cost\" component and a \"violations\" component. If two paths are\ncompared, the one with fewer violations always wins; if it's a tie,\nthey compare on cost. A path's violation count is the total of its\nchildren, plus one for itself if it does something that's disabled.\nThis would be more principled than the current approach, but maybe\nit's too costly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 1 Nov 2019 12:56:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Nov 01, 2019 at 09:30:52AM -0700, Jim Finnerty wrote:\n>re: coping with adding disable_cost more than once\n>\n>Another option would be to have a 2-part Cost structure. If disable_cost is\n>ever added to the Cost, then you set a flag recording this. 
If any plans\n>exist that have no disable_costs added to them, then the planner chooses the\n>minimum cost among those, otherwise you choose the minimum cost path.\n>\n\nYeah, I agree having is_disabled flag, and treat all paths with 'true'\nas more expensive than paths with 'false' (and when both paths have the\nsame value then actually compare the cost) is probably the way forward.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 1 Nov 2019 18:04:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 2019-11-01 12:56:30 -0400, Robert Haas wrote:\n> On Fri, Nov 1, 2019 at 12:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > As a first step I'd be inclined to \"just\" adjust disable_cost up to\n> > something like 1.0e12. Unfortunately much higher and and we're getting\n> > into the area where the loss of precision starts to be significant\n> > enough that I'm not sure that we're always careful enough to perform\n> > math in the right order (e.g. 1.0e16 + 1 being 1.0e16, and 1e+20 + 1000\n> > being 1e+20). I've seen queries with costs above 1e10 where that costing\n> > wasn't insane.\n> \n> We've done that before and we can do it again. But we're going to need\n> to have something better eventually, I think, not just keep kicking\n> the can down the road.\n\nYea, that's why I continued on to describe what we should do afterwards\n;)\n\n\n> Yet another approach would be to divide the cost into two parts, a\n> \"cost\" component and a \"violations\" component. If two paths are\n> compared, the one with fewer violations always wins; if it's a tie,\n> they compare on cost. 
A path's violation count is the total of its\n> children, plus one for itself if it does something that's disabled.\n> This would be more principled than the current approach, but maybe\n> it's too costly.\n\nNamely go for something like this. I think we probably get away with the\nadditional comparison, especially if we were to store the violations as\nan integer and did it like if (unlikely(path1->nviolations !=\npath2->nviolations)) or such - that ought to be very well predicted in\nnearly all cases.\n\nI wonder how much we'd need to reformulate\ncompare_path_costs/compare_path_costs_fuzzily to allow the compiler to\nauto-vectorize. Might not be worth caring...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Nov 2019 10:34:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Zhenghua Lyu <zlv@pivotal.io> writes:\n>> I think a more robust way to disable forbidden plan types would be to\n>> handle the disabling in add_path(). Instead of having a high disable cost\n>> on the Path itself, the comparison add_path() would always consider\n>> disabled paths as more expensive than others, regardless of the cost.\n\nGetting rid of disable_cost would be a nice thing to do, but I would\nrather not do it by adding still more complexity to add_path(), not\nto mention having to bloat Paths with a separate \"disabled\" marker.\n\nThe idea that I've been thinking about is to not generate disabled\nPaths in the first place, thus not only fixing the problem but saving\nsome cycles. While this seems easy enough for \"optional\" paths,\nwe have to reserve the ability to generate certain path types regardless,\nif there's no other way to implement the query. This is a bit of a\nstumbling block :-(. At the base relation level, we could do something\nlike generating seqscan last, and only if no other path has been\nsuccessfully generated. But I'm not sure how to scale that up to\njoins. 
In particular, imagine that we consider joining A to B, and\nfind that the only way is a nestloop, so we generate a nestloop join\ndespite that being nominally disabled. The next join level would\nthen see that as an available path, and it might decide that\n((A nestjoin B) join C) is the cheapest choice, even though there\nmight have been a way to do, say, ((A join C) join B) with no use of\nnestloops. Users would find this surprising.\n\nMaybe the only way to do this is a separate number-of-uses-of-\ndisabled-plan-types cost figure in Paths, but I still don't want\nto go there. The number of cases where disable_cost's shortcomings\nreally matter is too small to justify that, IMHO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Nov 2019 10:57:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Nov 01, 2019 at 09:30:52AM -0700, Jim Finnerty wrote:\n>> re: coping with adding disable_cost more than once\n>> \n>> Another option would be to have a 2-part Cost structure. If disable_cost is\n>> ever added to the Cost, then you set a flag recording this. 
If any plans\n>> exist that have no disable_costs added to them, then the planner chooses the\n>> minimum cost among those, otherwise you choose the minimum cost path.\n\n> Yeah, I agree having is_disabled flag, and treat all paths with 'true'\n> as more expensive than paths with 'false' (and when both paths have the\n> same value then actually compare the cost) is probably the way forward.\n\nIt would have to be a count, not a boolean --- for example, you want to\nprefer a path that uses one disabled SeqScan over a path that uses two.\n\nI'm with Andres in being pretty worried about the extra burden imposed\non add_path comparisons.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Nov 2019 11:04:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "As a proof of concept, I hacked around a bit today to re-purpose one of the\nbits of the Cost structure to mean \"is_disabled\" so that we can distinguish\n'disabled' from 'non-disabled' paths without making the Cost structure any\nbigger. In fact, it's still a valid double. The obvious choice would have\nbeen to re-purpose the sign bit, but I've had occasion to exploit negative\ncosts before so for this POC I used the high-order bit of the fractional\nbits of the double. (see Wikipedia for double precision floating point for\nthe layout).\n\nThe idea is to set a special bit when disable_cost is added to a cost. \nDedicating multiple bits instead of just 1 would be easily done, but as it\nis we can accumulate many disable_costs without overflowing, so just\ncomparing the cost suffices.\n\nThe patch is not fully debugged and fails on a couple of tests in the serial\ntest suite. It seems to fail on Cartesian products, and maybe in one other\nnon-CP case. I wasn't able to debug it before the day came to an end.\n\nIn one place the core code subtracts off the disable_cost. 
I left the\n\"disabled\" bit set in this case, which might be wrong.\n\nI don't see an option to attach the patch as an attachment, so here is the\npatch inline (it is based on PG11). The more interesting part is in a small\nnumber of lines in costsize.c. Other changes just add functions that assign\na disable_cost and set the bit, or that compare costs such that a\nnon-disabled cost always compares less than a disabled cost.\n\n------------------\n\ndiff --git a/src/backend/optimizer/path/costsize.c\nb/src/backend/optimizer/path/costsize.c\nindex 4e86458672..3718639330 100644\n--- a/src/backend/optimizer/path/costsize.c\n+++ b/src/backend/optimizer/path/costsize.c\n@@ -123,6 +123,8 @@ double\t\tparallel_setup_cost =\nDEFAULT_PARALLEL_SETUP_COST;\n int\t\t\teffective_cache_size = DEFAULT_EFFECTIVE_CACHE_SIZE;\n \n Cost\t\tdisable_cost = 1.0e10;\n+uint64 disabled_mask = 0x8000000000000;\n+#define IS_DISABLED(cost) (((uint64) cost) & disabled_mask)\n \n int\t\t\tmax_parallel_workers_per_gather = 2;\n \n@@ -205,6 +207,53 @@ clamp_row_est(double nrows)\n \treturn nrows;\n }\n \n+Cost\n+add_cost(Cost cost, Cost delta_cost)\n+{\n+\tuint64 mask = (delta_cost == disable_cost) ? 
disabled_mask : 0;\n+\tCost max_cost = disabled_mask - disable_cost;\n+\t\n+\tif (cost + delta_cost < max_cost)\n+\t\treturn ((Cost) ((uint64)(cost + delta_cost) | mask));\n+\telse\n+\t\treturn ((Cost) ((uint64)(max_cost) | mask));\n+}\n+\n+bool\n+is_lower_cost(Cost cost1, Cost cost2)\n+{\n+\tif ((uint64)cost1 & disabled_mask && !((uint64)cost2 & disabled_mask))\n+\t\treturn false;\n+\t\n+\tif (!((uint64)cost1 & disabled_mask) && (uint64)cost2 & disabled_mask)\n+\t\treturn true;\n+\t\n+\treturn (cost1 < cost2);\n+}\n+\n+bool\n+is_greater_cost(Cost cost1, Cost cost2)\n+{\n+\tif ((uint64)cost1 & disabled_mask && !((uint64)cost2 & disabled_mask))\n+\t\treturn true;\n+\t\n+\tif (!((uint64)cost1 & disabled_mask) && (uint64)cost2 & disabled_mask)\n+\t\treturn false;\n+\t\n+\treturn (cost1 > cost2);\n+}\n+\n+bool\n+is_geq_cost(Cost cost1, Cost cost2)\n+{\n+\tif ((uint64)cost1 & disabled_mask && !((uint64)cost2 & disabled_mask))\n+\t\treturn true;\n+\t\n+\tif (!((uint64)cost1 & disabled_mask) && (uint64)cost2 & disabled_mask)\n+\t\treturn false;\n+\t\n+\treturn (cost1 >= cost2);\n+}\n \n /*\n * cost_seqscan\n@@ -235,7 +284,7 @@ cost_seqscan(Path *path, PlannerInfo *root,\n \t\tpath->rows = baserel->rows;\n \n \tif (!enable_seqscan)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \t/* fetch estimated page cost for tablespace containing table */\n \tget_tablespace_page_costs(baserel->reltablespace,\n@@ -424,7 +473,7 @@ cost_gather_merge(GatherMergePath *path, PlannerInfo\n*root,\n \t\tpath->path.rows = rel->rows;\n \n \tif (!enable_gathermerge)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \t/*\n \t * Add one to the number of workers to account for the leader. 
This might\n@@ -538,7 +587,7 @@ cost_index(IndexPath *path, PlannerInfo *root, double\nloop_count,\n \t}\n \n \tif (!enable_indexscan)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \t/* we don't need to check enable_indexonlyscan; indxpath.c does that */\n \n \t/*\n@@ -976,7 +1025,7 @@ cost_bitmap_heap_scan(Path *path, PlannerInfo *root,\nRelOptInfo *baserel,\n \t\tpath->rows = baserel->rows;\n \n \tif (!enable_bitmapscan)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \tpages_fetched = compute_bitmap_pages(root, baserel, bitmapqual,\n \t\t\t\t\t\t\t\t\t\t loop_count, &indexTotalCost,\n@@ -1242,10 +1291,10 @@ cost_tidscan(Path *path, PlannerInfo *root,\n \tif (isCurrentOf)\n \t{\n \t\tAssert(baserel->baserestrictcost.startup >= disable_cost);\n-\t\tstartup_cost -= disable_cost;\n+\t\tstartup_cost -= disable_cost; /* but do not un-set the disabled mark */\n \t}\n \telse if (!enable_tidscan)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \t/*\n \t * The TID qual expressions will be computed once, any other baserestrict\n@@ -1676,7 +1725,7 @@ cost_sort(Path *path, PlannerInfo *root,\n \tlong\t\tsort_mem_bytes = sort_mem * 1024L;\n \n \tif (!enable_sort)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \tpath->rows = tuples;\n \n@@ -2121,8 +2170,8 @@ cost_agg(Path *path, PlannerInfo *root,\n \t\ttotal_cost = input_total_cost;\n \t\tif (aggstrategy == AGG_MIXED && !enable_hashagg)\n \t\t{\n-\t\t\tstartup_cost += disable_cost;\n-\t\t\ttotal_cost += disable_cost;\n+\t\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n+\t\t\ttotal_cost = add_cost(total_cost, disable_cost);\n \t\t}\n \t\t/* calcs phrased this way to match HASHED case, see note above */\n \t\ttotal_cost += aggcosts->transCost.startup;\n@@ -2137,7 +2186,7 @@ cost_agg(Path *path, PlannerInfo 
*root,\n \t\t/* must be AGG_HASHED */\n \t\tstartup_cost = input_total_cost;\n \t\tif (!enable_hashagg)\n-\t\t\tstartup_cost += disable_cost;\n+\t\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \t\tstartup_cost += aggcosts->transCost.startup;\n \t\tstartup_cost += aggcosts->transCost.per_tuple * input_tuples;\n \t\tstartup_cost += (cpu_operator_cost * numGroupCols) * input_tuples;\n@@ -2436,7 +2485,7 @@ final_cost_nestloop(PlannerInfo *root, NestPath *path,\n \t * disabled, which doesn't seem like the way to bet.\n \t */\n \tif (!enable_nestloop)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \t/* cost of inner-relation source data (we already dealt with outer rel) */\n \n@@ -2882,7 +2931,7 @@ final_cost_mergejoin(PlannerInfo *root, MergePath\n*path,\n \t * disabled, which doesn't seem like the way to bet.\n \t */\n \tif (!enable_mergejoin)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \t/*\n \t * Compute cost of the mergequals and qpquals (other restriction clauses)\n@@ -3312,7 +3361,7 @@ final_cost_hashjoin(PlannerInfo *root, HashPath *path,\n \t * disabled, which doesn't seem like the way to bet.\n \t */\n \tif (!enable_hashjoin)\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \t/* mark the path with estimated # of batches */\n \tpath->num_batches = numbatches;\n@@ -3410,7 +3459,7 @@ final_cost_hashjoin(PlannerInfo *root, HashPath *path,\n \tif (relation_byte_size(clamp_row_est(inner_path_rows * innermcvfreq),\n \t\t\t\t\t\t inner_path->pathtarget->width) >\n \t\t(work_mem * 1024L))\n-\t\tstartup_cost += disable_cost;\n+\t\tstartup_cost = add_cost(startup_cost, disable_cost);\n \n \t/*\n \t * Compute cost of the hashquals and qpquals (other restriction clauses)\n@@ -3930,7 +3979,7 @@ cost_qual_eval_walker(Node *node,\ncost_qual_eval_context *context)\n \telse if (IsA(node, CurrentOfExpr))\n 
\t{\n \t\t/* Report high cost to prevent selection of anything but TID scan */\n-\t\tcontext->total.startup += disable_cost;\n+\t\tcontext->total.startup = add_cost(context->total.startup, disable_cost);\n \t}\n \telse if (IsA(node, SubLink))\n \t{\ndiff --git a/src/backend/optimizer/util/pathnode.c\nb/src/backend/optimizer/util/pathnode.c\nindex 4736d84a83..fd746a06bc 100644\n--- a/src/backend/optimizer/util/pathnode.c\n+++ b/src/backend/optimizer/util/pathnode.c\n@@ -72,33 +72,33 @@ compare_path_costs(Path *path1, Path *path2,\nCostSelector criterion)\n {\n \tif (criterion == STARTUP_COST)\n \t{\n-\t\tif (path1->startup_cost < path2->startup_cost)\n+\t\tif (is_lower_cost(path1->startup_cost, path2->startup_cost))\n \t\t\treturn -1;\n-\t\tif (path1->startup_cost > path2->startup_cost)\n+\t\tif (is_greater_cost(path1->startup_cost, path2->startup_cost))\n \t\t\treturn +1;\n \n \t\t/*\n \t\t * If paths have the same startup cost (not at all unlikely), order\n \t\t * them by total cost.\n \t\t */\n-\t\tif (path1->total_cost < path2->total_cost)\n+\t\tif (is_lower_cost(path1->total_cost, path2->total_cost))\n \t\t\treturn -1;\n-\t\tif (path1->total_cost > path2->total_cost)\n+\t\tif (is_greater_cost(path1->total_cost, path2->total_cost))\n \t\t\treturn +1;\n \t}\n \telse\n \t{\n-\t\tif (path1->total_cost < path2->total_cost)\n+\t\tif (is_lower_cost(path1->total_cost, path2->total_cost))\n \t\t\treturn -1;\n-\t\tif (path1->total_cost > path2->total_cost)\n+\t\tif (is_greater_cost(path1->total_cost, path2->total_cost))\n \t\t\treturn +1;\n \n \t\t/*\n \t\t * If paths have the same total cost, order them by startup cost.\n \t\t */\n-\t\tif (path1->startup_cost < path2->startup_cost)\n+\t\tif (is_lower_cost(path1->startup_cost, path2->startup_cost))\n \t\t\treturn -1;\n-\t\tif (path1->startup_cost > path2->startup_cost)\n+\t\tif (is_greater_cost(path1->startup_cost, path2->startup_cost))\n \t\t\treturn +1;\n \t}\n \treturn 0;\n@@ -126,9 +126,9 @@ 
compare_fractional_path_costs(Path *path1, Path *path2,\n \t\tfraction * (path1->total_cost - path1->startup_cost);\n \tcost2 = path2->startup_cost +\n \t\tfraction * (path2->total_cost - path2->startup_cost);\n-\tif (cost1 < cost2)\n+\tif (is_lower_cost(cost1, cost2))\n \t\treturn -1;\n-\tif (cost1 > cost2)\n+\tif (is_greater_cost(cost1, cost2))\n \t\treturn +1;\n \treturn 0;\n }\n@@ -172,11 +172,11 @@ compare_path_costs_fuzzily(Path *path1, Path *path2,\ndouble fuzz_factor)\n \t * Check total cost first since it's more likely to be different; many\n \t * paths have zero startup cost.\n \t */\n-\tif (path1->total_cost > path2->total_cost * fuzz_factor)\n+\tif (is_greater_cost(path1->total_cost, path2->total_cost * fuzz_factor))\n \t{\n \t\t/* path1 fuzzily worse on total cost */\n \t\tif (CONSIDER_PATH_STARTUP_COST(path1) &&\n-\t\t\tpath2->startup_cost > path1->startup_cost * fuzz_factor)\n+\t\t\tis_greater_cost(path2->startup_cost, path1->startup_cost * fuzz_factor))\n \t\t{\n \t\t\t/* ... but path2 fuzzily worse on startup, so DIFFERENT */\n \t\t\treturn COSTS_DIFFERENT;\n@@ -184,11 +184,11 @@ compare_path_costs_fuzzily(Path *path1, Path *path2,\ndouble fuzz_factor)\n \t\t/* else path2 dominates */\n \t\treturn COSTS_BETTER2;\n \t}\n-\tif (path2->total_cost > path1->total_cost * fuzz_factor)\n+\tif (is_greater_cost(path2->total_cost, path1->total_cost * fuzz_factor))\n \t{\n \t\t/* path2 fuzzily worse on total cost */\n \t\tif (CONSIDER_PATH_STARTUP_COST(path2) &&\n-\t\t\tpath1->startup_cost > path2->startup_cost * fuzz_factor)\n+\t\t\tis_greater_cost(path1->startup_cost, path2->startup_cost * fuzz_factor))\n \t\t{\n \t\t\t/* ... but path1 fuzzily worse on startup, so DIFFERENT */\n \t\t\treturn COSTS_DIFFERENT;\n@@ -197,12 +197,12 @@ compare_path_costs_fuzzily(Path *path1, Path *path2,\ndouble fuzz_factor)\n \t\treturn COSTS_BETTER1;\n \t}\n \t/* fuzzily the same on total cost ... 
*/\n-\tif (path1->startup_cost > path2->startup_cost * fuzz_factor)\n+\tif (is_greater_cost(path1->startup_cost, path2->startup_cost *\nfuzz_factor))\n \t{\n \t\t/* ... but path1 fuzzily worse on startup, so path2 wins */\n \t\treturn COSTS_BETTER2;\n \t}\n-\tif (path2->startup_cost > path1->startup_cost * fuzz_factor)\n+\tif (is_greater_cost(path2->startup_cost, path1->startup_cost *\nfuzz_factor))\n \t{\n \t\t/* ... but path2 fuzzily worse on startup, so path1 wins */\n \t\treturn COSTS_BETTER1;\n@@ -605,7 +605,7 @@ add_path(RelOptInfo *parent_rel, Path *new_path)\n \t\telse\n \t\t{\n \t\t\t/* new belongs after this old path if it has cost >= old's */\n-\t\t\tif (new_path->total_cost >= old_path->total_cost)\n+\t\t\tif (is_geq_cost(new_path->total_cost, old_path->total_cost))\n \t\t\t\tinsert_after = p1;\n \t\t\t/* p1_prev advances */\n \t\t\tp1_prev = p1;\n@@ -681,7 +681,7 @@ add_path_precheck(RelOptInfo *parent_rel,\n \t\t *\n \t\t * Cost comparisons here should match compare_path_costs_fuzzily.\n \t\t */\n-\t\tif (total_cost > old_path->total_cost * STD_FUZZ_FACTOR)\n+\t\tif (is_greater_cost(total_cost, old_path->total_cost * STD_FUZZ_FACTOR))\n \t\t{\n \t\t\t/* new path can win on startup cost only if consider_startup */\n \t\t\tif (startup_cost > old_path->startup_cost * STD_FUZZ_FACTOR ||\n@@ -796,14 +796,14 @@ add_partial_path(RelOptInfo *parent_rel, Path\n*new_path)\n \t\t/* Unless pathkeys are incompable, keep just one of the two paths. */\n \t\tif (keyscmp != PATHKEYS_DIFFERENT)\n \t\t{\n-\t\t\tif (new_path->total_cost > old_path->total_cost * STD_FUZZ_FACTOR)\n+\t\t\tif (is_greater_cost(new_path->total_cost, old_path->total_cost *\nSTD_FUZZ_FACTOR))\n \t\t\t{\n \t\t\t\t/* New path costs more; keep it only if pathkeys are better. 
*/\n \t\t\t\tif (keyscmp != PATHKEYS_BETTER1)\n \t\t\t\t\taccept_new = false;\n \t\t\t}\n-\t\t\telse if (old_path->total_cost > new_path->total_cost\n-\t\t\t\t\t * STD_FUZZ_FACTOR)\n+\t\t\telse if (is_greater_cost(old_path->total_cost, new_path->total_cost\n+\t\t\t\t\t\t\t\t\t * STD_FUZZ_FACTOR))\n \t\t\t{\n \t\t\t\t/* Old path costs more; keep it only if pathkeys are better. */\n \t\t\t\tif (keyscmp != PATHKEYS_BETTER2)\n@@ -819,7 +819,7 @@ add_partial_path(RelOptInfo *parent_rel, Path *new_path)\n \t\t\t\t/* Costs are about the same, old path has better pathkeys. */\n \t\t\t\taccept_new = false;\n \t\t\t}\n-\t\t\telse if (old_path->total_cost > new_path->total_cost * 1.0000000001)\n+\t\t\telse if (is_greater_cost(old_path->total_cost, new_path->total_cost *\n1.0000000001))\n \t\t\t{\n \t\t\t\t/* Pathkeys are the same, and the old path costs more. */\n \t\t\t\tremove_old = true;\n@@ -847,7 +847,7 @@ add_partial_path(RelOptInfo *parent_rel, Path *new_path)\n \t\telse\n \t\t{\n \t\t\t/* new belongs after this old path if it has cost >= old's */\n-\t\t\tif (new_path->total_cost >= old_path->total_cost)\n+\t\t\tif (is_geq_cost(new_path->total_cost, old_path->total_cost))\n \t\t\t\tinsert_after = p1;\n \t\t\t/* p1_prev advances */\n \t\t\tp1_prev = p1;\n@@ -913,10 +913,10 @@ add_partial_path_precheck(RelOptInfo *parent_rel, Cost\ntotal_cost,\n \t\tkeyscmp = compare_pathkeys(pathkeys, old_path->pathkeys);\n \t\tif (keyscmp != PATHKEYS_DIFFERENT)\n \t\t{\n-\t\t\tif (total_cost > old_path->total_cost * STD_FUZZ_FACTOR &&\n+\t\t\tif (is_greater_cost(total_cost, old_path->total_cost * STD_FUZZ_FACTOR)\n&&\n \t\t\t\tkeyscmp != PATHKEYS_BETTER1)\n \t\t\t\treturn false;\n-\t\t\tif (old_path->total_cost > total_cost * STD_FUZZ_FACTOR &&\n+\t\t\tif (is_greater_cost(old_path->total_cost, total_cost * STD_FUZZ_FACTOR)\n&&\n \t\t\t\tkeyscmp != PATHKEYS_BETTER2)\n \t\t\t\treturn true;\n \t\t}\n@@ -1697,7 +1697,7 @@ create_unique_path(PlannerInfo *root, RelOptInfo *rel,\nPath 
*subpath,\n \n \tif (sjinfo->semi_can_btree && sjinfo->semi_can_hash)\n \t{\n-\t\tif (agg_path.total_cost < sort_path.total_cost)\n+\t\tif (is_lower_cost(agg_path.total_cost, sort_path.total_cost))\n \t\t\tpathnode->umethod = UNIQUE_PATH_HASH;\n \t\telse\n \t\t\tpathnode->umethod = UNIQUE_PATH_SORT;\ndiff --git a/src/backend/utils/cache/relcache.c\nb/src/backend/utils/cache/relcache.c\nindex 78f3b99a76..c261a9d790 100644\n--- a/src/backend/utils/cache/relcache.c\n+++ b/src/backend/utils/cache/relcache.c\n@@ -5076,8 +5076,8 @@ IsProjectionFunctionalIndex(Relation index)\n \t\t * when values differ because the expression is recalculated when\n \t\t * inserting a new index entry for the changed value.\n \t\t */\n-\t\tif ((index_expr_cost.startup + index_expr_cost.per_tuple) >\n-\t\t\tHEURISTIC_MAX_HOT_RECHECK_EXPR_COST)\n+\t\tif (is_greater_cost((index_expr_cost.startup +\nindex_expr_cost.per_tuple),\n+\t\t\t\t\t\t\tHEURISTIC_MAX_HOT_RECHECK_EXPR_COST))\n \t\t\tis_projection = false;\n \n \t\ttuple = SearchSysCache1(RELOID,\nObjectIdGetDatum(RelationGetRelid(index)));\ndiff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h\nindex 9159f2bab1..c01d08eae5 100644\n--- a/src/include/optimizer/cost.h\n+++ b/src/include/optimizer/cost.h\n@@ -251,6 +251,12 @@ extern PathTarget\n*set_pathtarget_cost_width(PlannerInfo *root, PathTarget *targ\n extern double compute_bitmap_pages(PlannerInfo *root, RelOptInfo *baserel,\n \t\t\t\t\t Path *bitmapqual, int loop_count, Cost *cost, double *tuple);\n \n+extern Cost add_cost(Cost cost, Cost delta_cost);\n+extern bool is_lower_cost(Cost cost1, Cost cost2);\n+extern bool is_greater_cost(Cost cost1, Cost cost2);\n+extern bool is_geq_cost(Cost cost1, Cost cost2);\n+\n+\n /*\n * prototypes for clausesel.c\n *\t routines to compute clause selectivities\n\n\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Tue, 10 
Dec 2019 15:50:29 -0700 (MST)", "msg_from": "Jim Finnerty <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, 2019-12-10 at 15:50 -0700, Jim Finnerty wrote:\n> As a proof of concept, I hacked around a bit today to re-purpose one of the\n> bits of the Cost structure to mean \"is_disabled\" so that we can distinguish\n> 'disabled' from 'non-disabled' paths without making the Cost structure any\n> bigger. In fact, it's still a valid double. The obvious choice would have\n> been to re-purpose the sign bit, but I've had occasion to exploit negative\n> costs before so for this POC I used the high-order bit of the fractional\n> bits of the double. (see Wikipedia for double precision floating point for\n> the layout).\n> \n> The idea is to set a special bit when disable_cost is added to a cost. \n> Dedicating multiple bits instead of just 1 would be easily done, but as it\n> is we can accumulate many disable_costs without overflowing, so just\n> comparing the cost suffices.\n\nDoesn't that rely on a specific implementation of double precision (IEEE)?\nI thought that we don't want to limit ourselves to platforms with IEEE floats.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 11 Dec 2019 07:23:51 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, 11 Dec 2019 at 01:24, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2019-12-10 at 15:50 -0700, Jim Finnerty wrote:\n> > As a proof of concept, I hacked around a bit today to re-purpose one of the\n> > bits of the Cost structure to mean \"is_disabled\" so that we can distinguish\n> > 'disabled' from 'non-disabled' paths without making the Cost structure any\n> > bigger. In fact, it's still a valid double. 
The obvious choice would have\n> > been to re-purpose the sign bit, but I've had occasion to exploit negative\n> > costs before so for this POC I used the high-order bit of the fractional\n> > bits of the double. (see Wikipedia for double precision floating point for\n> > the layout).\n> >\n> > The idea is to set a special bit when disable_cost is added to a cost.\n> > Dedicating multiple bits instead of just 1 would be easily done, but as it\n> > is we can accumulate many disable_costs without overflowing, so just\n> > comparing the cost suffices.\n>\n> Doesn't that rely on a specific implementation of double precision (IEEE)?\n> I thought that we don't want to limit ourselves to platforms with IEEE floats.\n\nWe could always implement it again in another format....\n\nHowever, I wouldn't have expected to be bit twiddling. I would have\nexpected to use standard functions like ldexp to do this. In fact I\nthink if you use the high bit of the exponent you could do it entirely\nusing ldexp and regular double comparisons (with fabs).\n\nIe, to set the bit you set cost = ldexp(cost, __DBL_MAX_EXP__/2). And\nto check for the bit being set you compare ilogb(cost,\n__DBL_MAX_EXP__/2). Hm. that doesn't handle if the cost is already < 1\nin which case I guess you would have to set it to 1 first. Or reserve\nthe two high bits of the cost so you can represent disabled values\nthat had negative exponents before being disabled.\n\nI wonder if it wouldn't be a lot cleaner and more flexible to just go\nwith a plain float for Cost and use the other 32 bits for counters and\nbitmasks and still be ahead of the game. A double can store 2^1024 but\na float 2^128 which still feels like it should be more than enough to\nstore the kinds of costs plans have without the disabled costs. 
2^128\nmilliseconds is still 10^28 years which is an awfully expensive\nquery....\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 12 Dec 2019 15:42:37 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, Dec 11, 2019 at 7:24 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> Doesn't that rely on a specific implementation of double precision (IEEE)?\n> I thought that we don't want to limit ourselves to platforms with IEEE floats.\n\nJust by the way, you might want to read the second last paragraph of\nthe commit message for 02ddd499. The dream is over, we're never going\nto run on Vax.\n\n\n", "msg_date": "Fri, 13 Dec 2019 15:59:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Dec 11, 2019 at 7:24 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>> Doesn't that rely on a specific implementation of double precision (IEEE)?\n>> I thought that we don't want to limit ourselves to platforms with IEEE floats.\n\n> Just by the way, you might want to read the second last paragraph of\n> the commit message for 02ddd499. The dream is over, we're never going\n> to run on Vax.\n\nStill, the proposed hack is doubling down on IEEE dependency in a way\nthat I quite dislike, in that (a) it doesn't just read float values\nbut generates new ones (and assumes that the hardware/libc will react in\na predictable way to them), (b) in a part of the code that has no damn\nbusiness having close dependencies on float format, and (c) for a gain\nfar smaller than what we got from the Ryu code.\n\nWe have had prior discussions about whether 02ddd499 justifies adding\nmore IEEE dependencies elsewhere. I don't think it does. 
IEEE 754\nis not the last word that will ever be said on floating-point arithmetic,\nany more than x86_64 is the last CPU architecture that anyone will ever\ncare about. We should keep our dependencies on it well circumscribed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Dec 2019 14:54:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "I think this would be ready to abstract away behind a few functions that\ncould always be replaced by something else later...\n\n\nHowever on further thought I really think just using a 32-bit float and 32\nbits of other bitmaps or counters would be a better approach.\n\n\nOn Sun., Dec. 15, 2019, 14:54 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Wed, Dec 11, 2019 at 7:24 PM Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> >> Doesn't that rely on a specific implementation of double precision\n> (IEEE)?\n> >> I thought that we don't want to limit ourselves to platforms with IEEE\n> floats.\n>\n> > Just by the way, you might want to read the second last paragraph of\n> > the commit message for 02ddd499. The dream is over, we're never going\n> > to run on Vax.\n>\n> Still, the proposed hack is doubling down on IEEE dependency in a way\n> that I quite dislike, in that (a) it doesn't just read float values\n> but generates new ones (and assumes that the hardware/libc will react in\n> a predictable way to them), (b) in a part of the code that has no damn\n> business having close dependencies on float format, and (c) for a gain\n> far smaller than what we got from the Ryu code.\n>\n> We have had prior discussions about whether 02ddd499 justifies adding\n> more IEEE dependencies elsewhere. I don't think it does. IEEE 754\n> is not the last word that will ever be said on floating-point arithmetic,\n> any more than x86_64 is the last CPU architecture that anyone will ever\n> care about. 
We should keep our dependencies on it well circumscribed.\n>\n> \t\t\tregards, tom lane\n>\n>\n>\n", "msg_date": "Sun, 15 Dec 2019 15:45:31 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Hi hackers,\r\n\r\nI have write an initial patch to retire the disable_cost GUC, which labeled a flag on the Path struct instead of adding up a big cost which is hard to estimate. Though it involved in tons of plan changes in regression tests, I have tested on some simple test cases such as eagerly generate a two-stage agg plans and it worked. Could someone help to review?\r\n\r\n\r\nregards,\r\n\r\nJian\r\n________________________________\r\nFrom: Euler Taveira <euler@timbira.com.br>\r\nSent: Friday, November 1, 2019 22:48\r\nTo: Zhenghua Lyu <zlyu@vmware.com>\r\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: On disable_cost\r\n\r\nEm sex, 1 de nov de 2019 às 03:42, Zhenghua Lyu <zlv@pivotal.io> escreveu:\r\n>\r\n> My issue: I did some spikes and tests on TPCDS 1TB Bytes data. For query 104, it generates\r\n> nestloop join even with enable_nestloop set off. And the final plan's total cost is very huge (about 1e24). But If I enlarge the disable_cost to 1e30, then, planner will generate hash join.\r\n>\r\n> So I guess that disable_cost is not large enough for huge amount of data.\r\n>\r\n> It is tricky to set disable_cost a huge number. Can we come up with better solution?\r\n>\r\nIsn't it a case for a GUC disable_cost? As Thomas suggested, DBL_MAX\r\nupper limit should be sufficient.\r\n\r\n> The following thoughts are from Heikki:\r\n>>\r\n>> Aside from not having a large enough disable cost, there's also the fact that the high cost might affect the rest of the plan, if we have to use a plan type that's disabled. 
For example, if a table doesn't have any indexes, but enable_seqscan is off, we might put the unavoidable Seq Scan on different side of a join than we would with enable_seqscan=on, because of the high cost estimate.\r\n>\r\n>\r\n>>\r\n>> I think a more robust way to disable forbidden plan types would be to handle the disabling in add_path(). Instead of having a high disable cost on the Path itself, the comparison add_path() would always consider disabled paths as more expensive than others, regardless of the cost.\r\n>\r\nI'm afraid it is not as cheap as using disable_cost as a node cost. Are\r\nyou proposing to add a new boolean variable in Path struct to handle\r\nthose cases in compare_path_costs_fuzzily?\r\n\r\n\r\n--\r\n Euler Taveira Timbira -\r\nhttp://www.timbira.com.br/\r\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Thu, 3 Aug 2023 09:21:39 +0000", "msg_from": "Jian Guo <gjian@vmware.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Thu, Aug 3, 2023 at 5:22 AM Jian Guo <gjian@vmware.com> wrote:\n> I have write an initial patch to retire the disable_cost GUC, which labeled a flag on the Path struct instead of adding up a big cost which is hard to estimate. Though it involved in tons of plan changes in regression tests, I have tested on some simple test cases such as eagerly generate a two-stage agg plans and it worked. 
Could someone help to review?\n\nI took a look at this patch today. I believe that overall this may\nwell be an approach worth pursuing. However, more work is going to be\nneeded. Here are some comments:\n\n1. You stated that it changes lots of plans in the regression tests,\nbut you haven't provided any sort of analysis of why those plans\nchanged. I'm kind of surprised that there would be \"tons\" of plan\nchanges. You (or someone) should look into why that's happening.\n\n2. The change to compare_path_costs_fuzzily() seems incorrect to me.\nWhen path1->is_disabled && path2->is_disabled, costs should be\ncompared just as they are when neither path is disabled. Instead, the\npatch treats any two such paths as having equal cost. That seems\ncatastrophically bad. Maybe it accounts for some of those plan\nchanges, although that would only be true if those plans were created\nwhile using some disabling GUC.\n\n3. Instead of adding is_disabled at the end of the Path structure, I\nsuggest adding it between param_info and parallel_aware. I think if\nyou do that, the space used by the new field will use up padding bytes\nthat are currently included in the struct, instead of making it\nlonger.\n\n4. A critical issue for any patch of this type is performance. This\nconcern was raised earlier on this thread, but your path doesn't\naddress it. There's no performance analysis or benchmarking included\nin your email. One idea that I have is to write the cost-comparison\ntest like this:\n\nif (unlikely(path1->is_disabled || path2->is_disabled))\n{\n if (!path1->is_disabled)\n return COSTS_BETTER1;\n if (!path2->is_disabled)\n return COSTS_BETTER2;\n /* if both disabled, fall through */\n}\n\nI'm not sure that would be enough to prevent the patch from adding\nnoticeably to the cost of path comparison, but maybe it would help.\n\n5. 
The patch changes only compare_path_costs_fuzzily() but I wonder\nwhether compare_path_costs() and compare_fractional_path_costs() need\nsimilar surgery. Whether they do or don't, there should likely be some\ncomments explaining the situation.\n\n6. In fact, the patch changes no comments at all, anywhere. I'm not\nsure how many comment changes a patch like this needs to make, but the\nanswer definitely isn't \"none\".\n\n7. The patch doesn't actually remove disable_cost. I guess it should.\n\n8. When you submit a patch, it's a good idea to also add it on\ncommitfest.postgresql.org. It doesn't look like that was done in this\ncase.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 10:27:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Aug 3, 2023 at 5:22 AM Jian Guo <gjian@vmware.com> wrote:\n>> I have write an initial patch to retire the disable_cost GUC, which labeled a flag on the Path struct instead of adding up a big cost which is hard to estimate. Though it involved in tons of plan changes in regression tests, I have tested on some simple test cases such as eagerly generate a two-stage agg plans and it worked. Could someone help to review?\n\n> I took a look at this patch today. I believe that overall this may\n> well be an approach worth pursuing. However, more work is going to be\n> needed. Here are some comments:\n\n> 1. You stated that it changes lots of plans in the regression tests,\n> but you haven't provided any sort of analysis of why those plans\n> changed. I'm kind of surprised that there would be \"tons\" of plan\n> changes. 
You (or someone) should look into why that's happening.\n\nI've not read the patch, but given this description I would expect\nthere to be *zero* regression changes --- I don't think we have any\ntest cases that depend on disable_cost being finite. If there's more\nthan zero changes, that very likely indicates a bug in the patch.\nEven if there are places where the output legitimately changes, you\nneed to justify each one and make sure that you didn't invalidate the\nintent of that test case.\n\nBTW, having written that paragraph, I wonder if we couldn't get\nthe same end result with a nearly one-line change that consists of\nmaking disable_cost be IEEE infinity. Years ago we didn't want\nto rely on IEEE float semantics in this area, but nowadays I don't\nsee why we shouldn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Mar 2024 13:32:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Mar 12, 2024 at 1:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, having written that paragraph, I wonder if we couldn't get\n> the same end result with a nearly one-line change that consists of\n> making disable_cost be IEEE infinity. Years ago we didn't want\n> to rely on IEEE float semantics in this area, but nowadays I don't\n> see why we shouldn't.\n\nI don't think so, because I think that what will happen in that case\nis that we'll pick a completely random plan if we can't pick a plan\nthat avoids incurring disable_cost. Every plan that contains one\ndisabled node anywhere in the plan tree will look like it has exactly\nthe same cost as any other such plan.\n\nIMHO, this is actually one of the problems with disable_cost as it\nworks today. I think the semantics that we want are: if it's possible\nto pick a plan where nothing is disabled, then pick the cheapest such\nplan; if not, pick the cheapest plan overall. But treating\ndisable_cost doesn't really do that. 
It does the first part -- picking\nthe cheapest plan where nothing is disabled -- but it doesn't do the\nsecond part, because once you add disable_cost into the cost of some\nparticular plan node, it screws up the rest of the planning, because\nthe cost estimates for the disabled nodes have no bearing in reality.\nFast-start plans, for example, will look insanely good compared to\nwhat would be the case in normal planning (and we lean too much toward\nfast-start plans even normally).\n\n(I don't think we should care how MANY disabled nodes appear in a\nplan, particularly. This is a more arguable point. Is a plan with 1\ndisabled node and 10% more cost better or worse than a plan with 2\ndisabled nodes and 10% less cost? I'd argue that counting the number\nof disabled nodes isn't particularly meaningful.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 14:01:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Mar 12, 2024 at 1:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, having written that paragraph, I wonder if we couldn't get\n>> the same end result with a nearly one-line change that consists of\n>> making disable_cost be IEEE infinity.\n\n> I don't think so, because I think that what will happen in that case\n> is that we'll pick a completely random plan if we can't pick a plan\n> that avoids incurring disable_cost. Every plan that contains one\n> disabled node anywhere in the plan tree will look like it has exactly\n> the same cost as any other such plan.\n\nGood point.\n\n> IMHO, this is actually one of the problems with disable_cost as it\n> works today.\n\nYeah. I keep thinking that the right solution is to not generate\ndisabled paths in the first place if there are any other ways to\nproduce the same relation. 
That has obvious order-of-operations\nproblems though, and I've not been able to make it work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Mar 2024 15:36:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Mar 12, 2024 at 3:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. I keep thinking that the right solution is to not generate\n> disabled paths in the first place if there are any other ways to\n> produce the same relation. That has obvious order-of-operations\n> problems though, and I've not been able to make it work.\n\nI've expressed the same view in the past. It would be nice not to\nwaste planner effort on paths that we're just going to throw away, but\nI'm not entirely sure what you mean by \"obvious order-of-operations\nproblems.\"\n\nTo me, it seems like what we'd need is to be able to restart the whole\nplanner process if we run out of steam before we get done. For\nexample, suppose we're planning a 2-way join where index and\nindex-only scans are disabled, sorts are disabled, and nested loops\nand hash joins are disabled. There's no problem generating just the\nnon-disabled scan types at the baserel level, but when we reach the\njoin, we're going to find that the only non-disabled join type is a\nmerge join, and we're also going to find that we have no paths that\nprovide pre-sorted input, so we need to sort, which we're also not\nallowed to do. If we could give up at that point and restart planning,\ndisabling all of the plan-choice constraints and now creating all\npaths for each RelOptInfo, then everything would, I believe, be just\nfine. 
We'd end up needing neither disable_cost nor the mechanism\nproposed by this patch.\n\nBut in the absence of that, we need some way to privilege the\nnon-disabled paths over the disabled ones -- and I'd prefer to have\nsomething more principled than disable_cost, if we can work out the\ndetails.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 15:54:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, 13 Mar 2024 at 08:55, Robert Haas <robertmhaas@gmail.com> wrote:\n> But in the absence of that, we need some way to privilege the\n> non-disabled paths over the disabled ones -- and I'd prefer to have\n> something more principled than disable_cost, if we can work out the\n> details.\n\nThe primary place I see issues with disabled_cost is caused by\nSTD_FUZZ_FACTOR. When you add 1.0e10 to a couple of modestly costly\npaths, it makes them appear fuzzily the same in cases where one could\nbe significantly cheaper than the other. If we were to bump up the\ndisable_cost it would make this problem worse.\n\nI think we do still need some way to pick the cheapest disabled path\nwhen there are no other options.\n\nOne way would be to set fuzz_factor to 1.0 when either of the paths\ncosts in compare_path_costs_fuzzily() is >= disable_cost.\nclamp_row_est() does cap row estimates at MAXIMUM_ROWCOUNT (1e100), so\nI think there is some value of disable_cost that could almost\ncertainly ensure we don't reach it because the path is truly expensive\nrather than disabled.\n\nSo maybe the fix could be to set disable_cost to something like\n1.0e110 and adjust compare_path_costs_fuzzily to not apply the\nfuzz_factor for paths >= disable_cost. 
However, I wonder if that\nrisks the costs going infinite after a couple of cartesian joins.\n\nDavid\n\n\n", "msg_date": "Wed, 13 Mar 2024 09:55:22 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> So maybe the fix could be to set disable_cost to something like\n> 1.0e110 and adjust compare_path_costs_fuzzily to not apply the\n> fuzz_factor for paths >= disable_cost. However, I wonder if that\n> risks the costs going infinite after a couple of cartesian joins.\n\nPerhaps. It still does nothing for Robert's point that once we're\nforced into using a \"disabled\" plan type, it'd be better if the\ndisabled-ness didn't skew subsequent planning choices.\n\nOn the whole I agree that getting rid of disable_cost entirely\nwould be the way to go, if we can replace that with a separate\nboolean without driving up the cost of add_path too much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 Mar 2024 17:18:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Mar 12, 2024 at 4:55 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> The primary place I see issues with disabled_cost is caused by\n> STD_FUZZ_FACTOR. When you add 1.0e10 to a couple of modestly costly\n> paths, it makes them appear fuzzily the same in cases where one could\n> be significantly cheaper than the other. If we were to bump up the\n> disable_cost it would make this problem worse.\n\nHmm, good point.\n\n> So maybe the fix could be to set disable_cost to something like\n> 1.0e110 and adjust compare_path_costs_fuzzily to not apply the\n> fuzz_factor for paths >= disable_cost. However, I wonder if that\n> risks the costs going infinite after a couple of cartesian joins.\n\nYeah, I think the disabled flag is a better answer if we can make it\nwork. 
No matter what value we pick for disable_cost, it's bound to\nskew the planning sometimes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Mar 2024 09:05:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Mar 12, 2024 at 1:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > 1. You stated that it changes lots of plans in the regression tests,\n> > but you haven't provided any sort of analysis of why those plans\n> > changed. I'm kind of surprised that there would be \"tons\" of plan\n> > changes. You (or someone) should look into why that's happening.\n>\n> I've not read the patch, but given this description I would expect\n> there to be *zero* regression changes --- I don't think we have any\n> test cases that depend on disable_cost being finite. If there's more\n> than zero changes, that very likely indicates a bug in the patch.\n> Even if there are places where the output legitimately changes, you\n> need to justify each one and make sure that you didn't invalidate the\n> intent of that test case.\n\nI spent some more time poking at this patch. It's missing a ton of\nimportant stuff and is wrong in a whole bunch of really serious ways,\nand I'm not going to try to mention all of them in this email. But I\ndo want to talk about some of the more interesting realizations that\ncame to me as I was working my way through this.\n\nOne of the things I realized relatively early is that the patch does\nnothing to propagate disable_cost upward through the plan tree. That\nmeans that if you have a choice between, say,\nSort-over-Append-over-SeqScan and MergeAppend-over-IndexScan, as we do\nin the regression tests, disabling IndexScan doesn't change the plan\nwith the patch applied, as it does in master. That's because only the\nIndexScan node ends up flagged as disabled. 
Once we start stacking\nother plan nodes on top of disabled plan nodes, the resultant plans\nare at no disadvantage compared to plans containing no disabled nodes.\nThe IndexScan plan survives initially, despite being disabled, because\nit's got a sort order. That lets us use it to build a MergeAppend\npath, and that MergeAppend path is not disabled, and compares\nfavorably on cost.\n\nAfter straining my brain over various plan changes for a long time,\nand hacking on the code somewhat, I realized that just propagating the\nBoolean upward is insufficient to set things right. That's basically\nbecause I was being dumb when I said this:\n\n> I don't think we should care how MANY disabled nodes appear in a\n> plan, particularly.\n\nSuppose we try to plan a Nested Loop with SeqScan disabled, but\nthere's no alternative to a SeqScan for the outer side of the join. If\nwe suppose an upward-propagating Boolean, every path for the join is\ndisabled because every path for the outer side is disabled. That means\nthat we have no reason to avoid paths where the inner side also uses a\ndisabled path. When we loop over the inner rel's pathlist looking for\nways to build a path for the join, we may find some disabled paths\nthere, and the join paths we build from those paths are disabled, but\nso are the join paths where we use a non-disabled path on the inner\nside. So those paths are just competing with each other on cost, and\nthe path type that is supposedly disabled on the outer side of the\njoin ends up not really being very disabled at all. More precisely, if\ndisabling a plan type causes paths to be discarded completely before\nthe join paths are constructed, then they actually do get removed from\nconsideration. But if those paths make it into inner rel's path list,\neven way out towards the end, then paths derived from them can jump to\nthe front of the joinrel's path list.\n\nThe same kind of problem happens with Append or MergeAppend nodes. 
The\nregression tests expect that we'll avoid disabled plan types whenever\npossible even if we can't avoid them completely; for instance, the\nmatest0 table intentionally omits an index on one child table.\nDisabling sequential scans is expected to disable them for all of the\nother child tables even though for that particular child table there\nis no other option. But that behavior is hard to achieve if every path\nfor the parent rel is \"equally disabled\". You either want the path\nthat uses only the one required SeqScan to be not-disabled even though\none of its children is disabled ... or you want the disabled flag to\nbe more than a Boolean. And while there's probably more than one way\nto make it work, the easiest thing seems to be to just have a\ndisabled-counter in every node that gets initialized to the total\ndisabled-counter values of all of its children, and then you add 1 if\nthat node is itself doing something that is disabled, i.e. the exact\nopposite of what I said in the quote above.\n\nI haven't done enough work to know whether that would match the\ncurrent behavior, let alone whether it would have acceptable\nperformance, and I'm not at all convinced that's the right direction\nanyway. Actually, at the moment, I don't have a very good idea at all\nwhat the right direction is. 
I do have a feeling that this patch is\nprobably not going in the right direction, but I might be wrong about\nthat, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Apr 2024 16:40:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> One of the things I realized relatively early is that the patch does\n> nothing to propagate disable_cost upward through the plan tree.\n> ...\n> After straining my brain over various plan changes for a long time,\n> and hacking on the code somewhat, I realized that just propagating the\n> Boolean upward is insufficient to set things right. That's basically\n> because I was being dumb when I said this:\n>> I don't think we should care how MANY disabled nodes appear in a\n>> plan, particularly.\n\nVery interesting, thanks for the summary. So the fact that\ndisable_cost is additive across plan nodes is actually a pretty\nimportant property of the current setup. I think this is closely\nrelated to one argument you made against my upthread idea of using\nIEEE Infinity for disable_cost: that'd mask whether more than one\nof the sub-plans had been disabled.\n\n> ... And while there's probably more than one way\n> to make it work, the easiest thing seems to be to just have a\n> disabled-counter in every node that gets initialized to the total\n> disabled-counter values of all of its children, and then you add 1 if\n> that node is itself doing something that is disabled, i.e. the exact\n> opposite of what I said in the quote above.\n\nYeah, that seems like the next thing to try if anyone plans to pursue\nthis further. 
That'd essentially do what we're doing now except that\ndisable_cost is its own \"order of infinity\", entirely separate from\nnormal costs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Apr 2024 17:00:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Mon, Apr 1, 2024 at 5:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Very interesting, thanks for the summary. So the fact that\n> disable_cost is additive across plan nodes is actually a pretty\n> important property of the current setup. I think this is closely\n> related to one argument you made against my upthread idea of using\n> IEEE Infinity for disable_cost: that'd mask whether more than one\n> of the sub-plans had been disabled.\n\nYes, exactly. I just hadn't quite put the pieces together.\n\n> Yeah, that seems like the next thing to try if anyone plans to pursue\n> this further. That'd essentially do what we're doing now except that\n> disable_cost is its own \"order of infinity\", entirely separate from\n> normal costs.\n\nRight. I think that's actually what I had in mind in the last\nparagraph of http://postgr.es/m/CA+TgmoY+Ltw7B=1FSFSN4yHcu2roWrz-ijBovj-99LZU=9h1dA@mail.gmail.com\nbut that was a while ago and I'd lost track of why it actually\nmattered. But I also have questions about whether that's really the\nright approach.\n\nI think the approach of just not generating paths we don't want in the\nfirst place merits more consideration. We do that in some cases\nalready, but not in others, and I'm not clear why. Like, if\nindex-scans, index-only scans, sorts, nested loops, and hash joins are\ndisabled, something is going to have to give, because the only\nremaining join type is a merge join yet we've ruled out every possible\nway of getting the data into some order, but I'm not sure whether\nthere's some reason that we need exactly the behavior that we have\nright now rather than anything else. 
Maybe it would be OK to just\ninsist on at least one unparameterized, non-partial path at the\nbaserel level, and then if that ends up forcing us to ignore the\njoin-type restrictions higher up, so be it. Or maybe that's not OK and\nafter I try that out I'll end up writing another email about how I was\na bit clueless about all of this. I don't know. But I feel like it\nmerits more investigation, because I'm having trouble shaking the\ntheory that what we've got right now is pretty arbitrary.\n\nAnd also ... looking at the regression tests, and also thinking about\nthe kinds of problems that I think people run into in real\ndeployments, I can't help feeling like we've somehow got this whole\nthing backwards. enable_wunk imagines that you want to plan as normal\nexcept with one particular plan type excluded from consideration. And\nmaybe that makes sense if the point of the enable_wunk GUC is that the\nplanner feature might be buggy and you might therefore want to turn it\noff to protect yourself, or if the planner feature might be expensive\nand you might want to turn it off to save cycles. But surely that's\nnot the case with something like enable_seqscan or enable_indexscan.\nWhat I think we're mostly doing in the regression tests is shutting\noff every relevant type of plan except one. I theorize that what we\nactually want to do is tell the planner what we do want to happen,\nrather than what we don't want to happen, but we've got this weird set\nof GUCs that do the opposite of that and we're super-attached to them\nbecause they've existed forever. 
I don't really have a concrete\nproposal here, but I wonder if what we're actually talking about here\nis spending time and energy polishing a mechanism that nobody likes in\nthe first place.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Apr 2024 19:53:58 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Mon, Apr 1, 2024 at 7:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> What I think we're mostly doing in the regression tests is shutting\n> off every relevant type of plan except one. I theorize that what we\n> actually want to do is tell the planner what we do want to happen,\n> rather than what we don't want to happen, but we've got this weird set\n> of GUCs that do the opposite of that and we're super-attached to them\n> because they've existed forever.\n\n\nSo rather than listing all the things we don't want to happen, we need a\nway to force (nay, highly encourage) a particular solution. As our costing\nis based on positive numbers, what if we did something like this in\ncostsize.c?\n\n Cost disable_cost = 1.0e10;\n Cost promotion_cost = 1.0e10; // or higher or lower, depending on\nhow strongly we want to \"beat\" disable_cost's effects.\n...\n\n if (!enable_seqscan)\n startup_cost += disable_cost;\n else if (promote_seqscan)\n startup_cost -= promotion_cost; // or replace \"promote\" with\n\"encourage\"?\n\n\nCheers,\nGreg\n", "msg_date": "Tue, 2 Apr 2024 10:03:57 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Apr 2, 2024 at 10:04 AM Greg Sabino Mullane <htamfids@gmail.com> wrote:\n> So rather than listing all the things we don't want to happen, we need a way to force (nay, highly encourage) a particular solution. As our costing is based on positive numbers, what if we did something like this in costsize.c?\n>\n> Cost disable_cost = 1.0e10;\n> Cost promotion_cost = 1.0e10; // or higher or lower, depending on how strongly we want to \"beat\" disable_cost's effects.\n> ...\n>\n> if (!enable_seqscan)\n> startup_cost += disable_cost;\n> else if (promote_seqscan)\n> startup_cost -= promotion_cost; // or replace \"promote\" with \"encourage\"?\n\nI'm pretty sure negative costs are going to create a variety of\nunpleasant planning artifacts. The large positive costs do, too, which\nis where this whole discussion started. 
If I disable (or promote) some\nparticular plan, I want the rest of the plan tree to come out looking\nas much as possible like what would have happened if the same\nalternative had won organically on cost. I think the only reason we're\ndriving this off of costing today is that making add_path() more\ncomplicated is unappealing, mostly on performance grounds, and if you\nadd disabled-ness (or promoted-ness) as a separate axis of value then\nadd_path() has to know about that on top of everything else. I think\nthe goal here is to come up with a more principled alternative that\nisn't just based on whacking large numbers into the cost and hoping\nsomething good happens ... but it is a whole lot easier to be unhappy\nwith the status quo than it is to come up with something that's\nactually better.\n\nI am planning to spend some more time thinking about it, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Apr 2024 11:01:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Apr 2, 2024 at 10:04 AM Greg Sabino Mullane <htamfids@gmail.com> wrote:\n>> if (!enable_seqscan)\n>> startup_cost += disable_cost;\n>> else if (promote_seqscan)\n>> startup_cost -= promotion_cost; // or replace \"promote\" with \"encourage\"?\n\n> I'm pretty sure negative costs are going to create a variety of\n> unpleasant planning artifacts.\n\nIndeed. 
It might be okay to have negative values for disabled-ness\nif we treat disabled-ness as a \"separate order of infinity\", but\nI suspect that it'd behave poorly when there are both disabled and\npromoted sub-paths in a tree, for pretty much the same reasons you\nexplained just upthread.\n\n> I think the only reason we're\n> driving this off of costing today is that making add_path() more\n> complicated is unappealing, mostly on performance grounds, and if you\n> add disabled-ness (or promoted-ness) as a separate axis of value then\n> add_path() has to know about that on top of everything else.\n\nIt doesn't seem to me that it's a separate axis of value, just a\nhigher-order component of the cost metric. Nonetheless, adding even\na few instructions to add_path comparisons sounds expensive. Maybe\nit'd be fine, but we'd need to do some performance testing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Apr 2024 11:54:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Apr 2, 2024 at 11:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm pretty sure negative costs are going to create a variety of\n> > unpleasant planning artifacts.\n>\n> Indeed. It might be okay to have negative values for disabled-ness\n> if we treat disabled-ness as a \"separate order of infinity\", but\n> I suspect that it'd behave poorly when there are both disabled and\n> promoted sub-paths in a tree, for pretty much the same reasons you\n> explained just upthread.\n\nHmm, can you explain further? I think essentially you'd be maximizing\n#(promoted nodes)-#(disabled nodes), but I have no real idea whether\nthat behavior will be exactly what people want or extremely\nunintuitive or something in the middle. 
It seems like it should be\nfine if there's only promoting or only disabling or if we can respect\nboth the promoting and the disabling, assuming we even want to have\nboth, but I'm suspicious that it will be weird somehow in other cases.\nI can't say exactly in what way, though. Do you have more insight?\n\n> > I think the only reason we're\n> > driving this off of costing today is that making add_path() more\n> > complicated is unappealing, mostly on performance grounds, and if you\n> > add disabled-ness (or promoted-ness) as a separate axis of value then\n> > add_path() has to know about that on top of everything else.\n>\n> It doesn't seem to me that it's a separate axis of value, just a\n> higher-order component of the cost metric. Nonetheless, adding even\n> a few instructions to add_path comparisons sounds expensive. Maybe\n> it'd be fine, but we'd need to do some performance testing.\n\nHmm, yeah. I'm not sure how much difference there is between these\nthings in practice. I didn't run down everything that was happening,\nbut I think what I did was equivalent to making it a higher-order\ncomponent of the cost metric, and it seemed like an awful lot of paths\nwere surviving anyway, e.g. index scans survived\nenable_indexscan=false because they had a sort order, and I think\nsequential scans were surviving enable_seqscan=false too, perhaps\nbecause they had no startup cost. 
At any rate there's no question that\nadd_path() is hot.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Apr 2024 12:26:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Apr 2, 2024 at 11:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I suspect that it'd behave poorly when there are both disabled and\n>> promoted sub-paths in a tree, for pretty much the same reasons you\n>> explained just upthread.\n\n> Hmm, can you explain further? I think essentially you'd be maximizing\n> #(promoted nodes)-#(disabled nodes), but I have no real idea whether\n> that behavior will be exactly what people want or extremely\n> unintuitive or something in the middle. It seems like it should be\n> fine if there's only promoting or only disabling or if we can respect\n> both the promoting and the disabling, assuming we even want to have\n> both, but I'm suspicious that it will be weird somehow in other cases.\n> I can't say exactly in what way, though. Do you have more insight?\n\nNot really. But if you had, say, a join of a promoted path to a\ndisabled path, that would be treated as on-par with a join of two\nregular paths, which seems like it'd lead to odd choices. 
Maybe\n> it'd be fine, but my gut says it'd likely not act nicely. As you\n> say, it's a lot easier to believe that only-promoted or only-disabled\n> situations would behave sanely.\n\nMakes sense.\n\nI wanted to further explore the idea of just not generating plans of\ntypes that are currently disabled. I looked into doing this for\nenable_indexscan and enable_indexonlyscan. As a first step, I\ninvestigated how those settings work now, and was horrified. I don't\nknow whether I just wasn't paying attention back when the original\nindex-only scan work was done -- I remember discussing\nenable_indexonlyscan with you at the time -- or whether it got changed\nsubsequently. Anyway, the current behavior is:\n\n[A] enable_indexscan=false adds disable_cost to the cost of every\nIndex Scan path *and also* every Index-Only Scan path. So disabling\nindex-scans also in effect discourages the use of index-only scans,\nwhich would make sense if we didn't have a separate setting called\nenable_indexonlyscan, but we do. Given that, I think this is\ncompletely and utterly wrong.\n\n[b] enable_indexonlyscan=false causes index-only scan paths not to be\ngenerated at all, but instead, we generate index-scan paths to do the\nsame thing that we would not have generated otherwise. This is weird\nbecause it means that disabling one plan type causes us to consider\nadditional plans of another type, which seems like a thing that a user\nmight not expect. It's more defensible than [A], though, because you\ncould argue that we only omit the index scan path as an optimization,\non the theory that it will always lose to the index-only scan path,\nand thus if the index-only scan path is not generated, there's a point\nto generating the index scan path after all, so we should. 
However, it\nseems unlikely to me that someone reading the one line of\ndocumentation that we have about this parameter would be able to guess\nthat it works this way.\n\nHere's an example of how the current system behaves:\n\nrobert.haas=# explain select count(*) from pgbench_accounts;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2854.29..2854.30 rows=1 width=8)\n -> Index Only Scan using pgbench_accounts_pkey on pgbench_accounts\n (cost=0.29..2604.29 rows=100000 width=0)\n(2 rows)\n\nrobert.haas=# set enable_indexscan=false;\nSET\nrobert.haas=# explain select count(*) from pgbench_accounts;\n QUERY PLAN\n------------------------------------------------------------------------------\n Aggregate (cost=2890.00..2890.01 rows=1 width=8)\n -> Seq Scan on pgbench_accounts (cost=0.00..2640.00 rows=100000 width=0)\n(2 rows)\n\nrobert.haas=# set enable_seqscan=false;\nSET\nrobert.haas=# set enable_bitmapscan=false;\nSET\nrobert.haas=# explain select count(*) from pgbench_accounts;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=10000002854.29..10000002854.30 rows=1 width=8)\n -> Index Only Scan using pgbench_accounts_pkey on pgbench_accounts\n (cost=10000000000.29..10000002604.29 rows=100000 width=0)\n(2 rows)\n\nrobert.haas=# set enable_indexonlyscan=false;\nSET\nrobert.haas=# explain select count(*) from pgbench_accounts;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Aggregate (cost=10000002890.00..10000002890.01 rows=1 width=8)\n -> Seq Scan on pgbench_accounts\n(cost=10000000000.00..10000002640.00 rows=100000 width=0)\n(2 rows)\n\nThe first time we run the query, it picks an index-only scan because\nit's the cheapest. 
When index scans are disabled, the query now picks\na sequential scan, even though it wasn't using an index scan,\nbecause the index-only scan that it was using is perceived to have become\nvery expensive. When we then shut off sequential scans and bitmap\nscans, it switches back to an index-only scan, because setting\nenable_indexscan=false didn't completely disable index-only scans, but\njust made them expensive. But now everything looks expensive, so we go\nback to the same plan we had initially, except with the cost increased\nby a bazillion. Finally, when we disable index-only scans, that\nremoves that plan from the pool, so now we pick the second-cheapest\nplan overall, which in this case is a sequential scan.\n\nSo just to see what would happen, I wrote a patch to make\nenable_indexscan and enable_indexonlyscan do exactly what they say on\nthe tin: when you set one of them to false, paths of that type are not\ngenerated, and nothing else changes. I found that there are a\nsurprisingly large number of regression tests that rely on the current\nbehavior, so I took a crack at fixing them to achieve their goals (or\nwhat I believed their goals to be) in other ways. The resulting patch\nis attached for your (or anyone's) possible edification.\n\nJust to be clear, I have no immediate plans to press forward with\ntrying to get something committed here. It seems pretty clear to me\nthat we should fix [A] in some way, but maybe not in the way I did it\nhere. It's also pretty clear to me that the fact that enable_indexscan\nand enable_indexonlyscan work completely differently from each other\nis surprising at best, wrong at worst, but here again, what this patch\ndoes about that is not above reproach. 
I think it may make sense to\ndig through the behavior of some of the remaining enable_* GUCs before\nsettling on a final strategy here, but I thought that the finds above\nwere interesting enough and bizarre enough that it made sense to drop\nan email now and see what people think of all this before going\nfurther.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 Apr 2024 15:21:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, Apr 3, 2024 at 3:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> It's also pretty clear to me that the fact that enable_indexscan\n> and enable_indexonlyscan work completely differently from each other\n> is surprising at best, wrong at worst, but here again, what this patch\n> does about that is not above reproach.\n\n\nYes, that is wrong, surely there is a reason we have two vars. Thanks for\ndigging into this: if nothing else, the code will be better for this\ndiscussion, even if we do nothing for now with disable_cost.\n\nCheers,\nGreg\n", "msg_date": "Wed, 3 Apr 2024 16:03:44 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Thu, 4 Apr 2024 at 08:21, Robert Haas <robertmhaas@gmail.com> wrote:\n> I wanted to further explore the idea of just not generating plans of\n> types that are currently disabled. I looked into doing this for\n> enable_indexscan and enable_indexonlyscan. As a first step, I\n> investigated how those settings work now, and was horrified. I don't\n> know whether I just wasn't paying attention back when the original\n> index-only scan work was done -- I remember discussing\n> enable_indexonlyscan with you at the time -- or whether it got changed\n> subsequently. Anyway, the current behavior is:\n>\n> [A] enable_indexscan=false adds disable_cost to the cost of every\n> Index Scan path *and also* every Index-Only Scan path. So disabling\n> index-scans also in effect discourages the use of index-only scans,\n> which would make sense if we didn't have a separate setting called\n> enable_indexonlyscan, but we do. Given that, I think this is\n> completely and utterly wrong.\n>\n> [b] enable_indexonlyscan=false causes index-only scan paths not to be\n> generated at all, but instead, we generate index-scan paths to do the\n> same thing that we would not have generated otherwise.\n\nFWIW, I think changing this is a bad idea and I don't think the\nbehaviour that's in your patch is useful. 
With your patch, if I SET\nenable_indexonlyscan=false, any index that *can* support an IOS for my\nquery will now not be considered at all!\n\nWith your patch applied, I see:\n\n-- default enable_* GUC values.\npostgres=# explain select oid from pg_class order by oid;\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Index Only Scan using pg_class_oid_index on pg_class\n(cost=0.27..22.50 rows=415 width=4)\n(1 row)\n\n\npostgres=# set enable_indexonlyscan=0; -- no index scan?\nSET\npostgres=# explain select oid from pg_class order by oid;\n QUERY PLAN\n-----------------------------------------------------------------\n Sort (cost=36.20..37.23 rows=415 width=4)\n Sort Key: oid\n -> Seq Scan on pg_class (cost=0.00..18.15 rows=415 width=4)\n(3 rows)\n\npostgres=# set enable_seqscan=0; -- still no index scan!\nSET\npostgres=# explain select oid from pg_class order by oid;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Sort (cost=10000000036.20..10000000037.23 rows=415 width=4)\n Sort Key: oid\n -> Seq Scan on pg_class (cost=10000000000.00..10000000018.15\nrows=415 width=4)\n(3 rows)\n\npostgres=# explain select oid from pg_class order by oid,relname; --\nnow an index scan?!\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Incremental Sort (cost=0.43..79.50 rows=415 width=68)\n Sort Key: oid, relname\n Presorted Key: oid\n -> Index Scan using pg_class_oid_index on pg_class\n(cost=0.27..60.82 rows=415 width=68)\n(4 rows)\n\nI don't think this is good as pg_class_oid_index effectively won't be\nused as long as the particular query could use that index with an\nindex *only* scan. You can see above that as soon as I adjust the\nquery slightly so that IOS isn't possible, the index can be used\nagain. 
I think an Index Scan would have been a much better option for\nthe 2nd query than the seq scan and sort.\n\nI think if I do SET enable_indexonlyscan=0; the index should still be\nused with an Index Scan and it definitely shouldn't result in Index\nScan also being disabled if that index happens to contain all the\ncolumns required to support an IOS.\n\nFWIW, I'm fine with the current behaviour. It looks like we've\nassumed that, when possible, IOS are always superior to Index Scan, so\nthere's no point in generating an Index Scan path when we can generate\nan IOS path. I think this makes sense. For that not to be true,\nchecking the all visible flag would have to be more costly than\nvisiting the heap. Perhaps that could be true if the visibility map\npage had to come from disk and the heap page was cached and the disk\nwas slow, but I don't think that scenario is worthy of considering\nboth Index scan and IOS path types when IOS is possible. We've no way\nto accurately cost that anyway.\n\nThis all seems similar to enable_sort vs enable_incremental_sort. For\na while, we did consider both an incremental sort and a sort when an\nincremental sort was possible, but it seemed to me that an incremental\nsort would always be better when it was possible, so I changed that in\n4a29eabd1. I've not seen anyone complain. I made it so that when an\nincremental sort is possible but is disabled, we do a sort instead.\nThat seems fairly similar to how master handles\nenable_indexonlyscan=false.\n\nIn short, I don't find it strange that disabling one node type results\nin considering another type that we'd otherwise not consider in cases\nwhere we assume that the disabled node type is always superior and\nshould always be used when it is possible.\n\nI do agree that adding disable_cost to IOS when enable_indexscan=0 is\na bit weird. 
We don't penalise incremental sorts when sorts are\ndisabled, so aligning those might make sense.\n\nDavid\n\n\n", "msg_date": "Thu, 4 Apr 2024 10:15:59 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Thu, 4 Apr 2024 at 10:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> In short, I don't find it strange that disabling one node type results\n> in considering another type that we'd otherwise not consider in cases\n> where we assume that the disabled node type is always superior and\n> should always be used when it is possible.\n\nIn addition to what I said earlier, I think the current\nenable_indexonlyscan is implemented in a way that has the planner do\nwhat it did before IOS was added. I think that goal makes sense with\nany patch that make the planner try something new. We want to have\nsome method to get the previous behaviour for the cases where the\nplanner makes a dumb choice or to avoid some bug in the new feature.\n\nI think using that logic, the current scenario with enable_indexscan\nand enable_indexonlyscan makes complete sense. 
I mean, including\nenable_indexscan=0 adding disable_cost to IOS Paths.\n\nDavid\n\n\n", "msg_date": "Thu, 4 Apr 2024 16:09:03 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, Apr 3, 2024 at 11:09 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 4 Apr 2024 at 10:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> > In short, I don't find it strange that disabling one node type results\n> > in considering another type that we'd otherwise not consider in cases\n> > where we assume that the disabled node type is always superior and\n> > should always be used when it is possible.\n>\n> In addition to what I said earlier, I think the current\n> enable_indexonlyscan is implemented in a way that has the planner do\n> what it did before IOS was added. I think that goal makes sense with\n> any patch that make the planner try something new. We want to have\n> some method to get the previous behaviour for the cases where the\n> planner makes a dumb choice or to avoid some bug in the new feature.\n\nI see the logic of this, and I agree that the resulting behavior might\nbe more intuitive than what I posted before. I'll do some experiments.\n\n> I think using that logic, the current scenario with enable_indexscan\n> and enable_indexonlyscan makes complete sense. I mean, including\n> enable_indexscan=0 adding disable_cost to IOS Paths.\n\nThis, for me, is a bridge too far. I don't think there's a real\nargument that \"what the planner did before IOS was added\" was add\ndisable_cost to the cost of index-only scan paths. There was no such\npath type. Independently of that argument, I also think the behavior\nof a setting needs to be something that a user can understand. 
Right\nnow, the documentation says:\n\nEnables or disables the query planner's use of index-scan plan types.\nThe default is on.\nEnables or disables the query planner's use of index-only-scan plan\ntypes (see Section 11.9). The default is on.\n\nI do not think that a user can be expected to guess from these\ndescriptions that the first one also affects index-only scans, or that\nthe two GUCs disable their respective plan types in completely\ndifferent ways. Granted, the latter inconsistency affects a whole\nbunch of these settings, not just this one, but still.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 14:02:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Sat, Nov 2, 2019 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The idea that I've been thinking about is to not generate disabled\n> Paths in the first place, thus not only fixing the problem but saving\n> some cycles. While this seems easy enough for \"optional\" paths,\n> we have to reserve the ability to generate certain path types regardless,\n> if there's no other way to implement the query. This is a bit of a\n> stumbling block :-(. At the base relation level, we could do something\n> like generating seqscan last, and only if no other path has been\n> successfully generated.\n\nContinuing my investigation into this rather old thread, I did a\nrather primitive implementation of this idea, for baserels only, and\ndiscovered that it caused a small number of planner failures running\nthe regression tests. 
Here is a slightly simplified example:\n\nCREATE TABLE strtest (n text, t text);\nCREATE INDEX strtest_n_idx ON strtest (n);\nSET enable_seqscan=false;\nEXPLAIN SELECT * FROM strtest s1 INNER JOIN strtest s2 ON s1.n >= s2.n;\n\nWith the patch, I get:\n\nERROR: could not devise a query plan for the given query\n\nThe problem here is that it's perfectly possible to generate a valid\npath for s1 -- and likewise for s2, since it's the same underlying\nrelation -- while respecting the enable_seqscan=false constraint.\nHowever, all such paths are parameterized by the other of the two\nrelations, which means that if we do that, we can't plan the join,\nbecause we need an unparameterized path for at least one of the two\nsides in order to build a nested loop join, which is the only way to\nsatisfy the parameterization on the other side.\n\nNow, you could try to fix this by deciding that planning for a baserel\nhasn't really succeeded unless we got at least one *unparameterized*\npath for that baserel. I haven't tried that, but I presume that if you\ndo, it fixes the above example, because now there will be a last-ditch\nsequential scan on both sides and so this example will behave as\nexpected. But if you do that, then in other cases, that sequential\nscan is going to get picked even when it isn't strictly necessary to\ndo so, just because some plan that uses it looks better on cost.\nPresumably that problem can in turn be fixed by deciding that we also\nneed to keep disable_cost around (or the separate disable-counter idea\nthat we were discussing recently in another branch of this thread),\nbut that's arguably missing the point of this exercise.\n\nAnother idea is to remove the ERROR mentioned above from\nset_cheapest() and just allow planning to continue even if some\nrelations end up with no paths. (This would necessitate finding and\nfixing any code that could be confused by a pathless relation.) 
Then,\nif you get to the top of the plan tree and you have no paths there,\nredo the join search discarding the constraints (or maybe just some of\nthe constraints, e.g. allow sequential scans and nested loops, or\nsomething). Conceptually, I like this idea a lot, but I think there\nare a few problems. One is that I'm not quite sure how to find all the\ncode that would need to be adjusted to make it work, though the header\ncomment for standard_join_search() seems like it's got some helpful\ntips. A second is that it's another version of the disable_cost =\ninfinity problem: once you find that you can't generate a path while\nenforcing all of the restrictions, you just disregard the restrictions\ncompletely, instead of discarding them only to the extent necessary. I\nhave a feeling that's not going to be very appealing.\n\nNow, I suppose it might be that even if we can't remove disable_cost,\nsomething along these lines is still worth doing, just to save CPU\ncycles. You could for example try planning with only non-disabled\nstuff and then do it over again with everything if that doesn't work\nout, still keeping disable_cost around so that you avoid disabled\nnodes where you can. But I'm kind of hoping that I'm missing something\nand there's some approach that could both kill disable_cost and save\nsome cycles at the same time. If (any of) you have an idea, I'd love\nto hear it!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 May 2024 16:33:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Sat, 4 May 2024 at 08:34, Robert Haas <robertmhaas@gmail.com> wrote:\n> Another idea is to remove the ERROR mentioned above from\n> set_cheapest() and just allow planning to continue even if some\n> relations end up with no paths. (This would necessitate finding and\n> fixing any code that could be confused by a pathless relation.) 
Then,\n> if you get to the top of the plan tree and you have no paths there,\n> redo the join search discarding the constraints (or maybe just some of\n> the constraints, e.g. allow sequential scans and nested loops, or\n> something).\n\nI don't think you'd need to wait longer than where we do set_cheapest\nand find no paths to find out that there's going to be a problem.\n\nI don't think redoing planning is going to be easy or even useful. I\nmean, what do you change when you replan? You can't just turn\nenable_seqscan and enable_nestloop back on: if there's no index to provide\nsorted input and the plan requires some sort, then you still can't\nproduce a plan. Adding enable_sort to the list does not give me much\nconfidence we'll never fail to produce a plan either. It just seems\nimpossible to know which of the disabled ones caused the RelOptInfo to\nhave no paths. Also, you might end up enabling one that caused the\nplanner to do something different than it would do today. For\nexample, a Path that today incurs 2x disable_cost vs a Path that only\nreceives 1x disable_cost might do something different if you just went\nand enabled a bunch of enable* GUCs before replanning.\n\n> Now, I suppose it might be that even if we can't remove disable_cost,\n> something along these lines is still worth doing, just to save CPU\n> cycles. You could for example try planning with only non-disabled\n> stuff and then do it over again with everything if that doesn't work\n> out, still keeping disable_cost around so that you avoid disabled\n> nodes where you can. But I'm kind of hoping that I'm missing something\n> and there's some approach that could both kill disable_cost and save\n> some cycles at the same time. 
If (any of) you have an idea, I'd love\n> to hear it!\n\nI think the int Path.disabledness idea is worth coding up to try it.\nI imagine that a Path will incur the maximum of its subpaths'\ndisabledness values, then add_path() just needs to prefer lower-valued\ndisabledness Paths.\n\nThat doesn't get you the benefits of fewer CPU cycles, but where did\nthat come from as a motive to change this? There's no shortage of\nother ways to make the planner faster if that's an issue.\n\nDavid\n\n\n", "msg_date": "Sun, 5 May 2024 01:16:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I don't think you'd need to wait longer than where we do set_cheapest\n> and find no paths to find out that there's going to be a problem.\n\nAt a base relation, yes, but that doesn't work for joins: it may be\nthat a particular join cannot be formed, yet other join sequences\nwill work. We have that all the time from outer-join ordering\nrestrictions, never mind enable_xxxjoin flags. So I'm not sure\nthat we can usefully declare early failure for joins.\n\n> I think the int Path.disabledness idea is worth coding up to try it.\n> I imagine that a Path will incur the maximum of its subpaths'\n> disabledness values, then add_path() just needs to prefer lower-valued\n> disabledness Paths.\n\nI would think sum, not maximum, but that's a detail.\n\n> That doesn't get you the benefits of fewer CPU cycles, but where did\n> that come from as a motive to change this? There's no shortage of\n> other ways to make the planner faster if that's an issue.\n\nThe concern was to not *add* CPU cycles in order to make this area\nbetter. But I do tend to agree that we've exhausted all the other\noptions.\n\nBTW, I looked through costsize.c just now to see exactly what we are\nusing disable_cost for, and it seemed like a majority of the cases are\njust wrong. 
Where possible, we should implement a plan-type-disable\nflag by not generating the associated Path in the first place, not by\napplying disable_cost to it. But it looks like a lot of people have\nerroneously copied the wrong logic. I would say that only these plan\ntypes should use the disable_cost method:\n\n\tseqscan\n\tnestloop join\n\tsort\n\nas those are the only ones where we risk not being able to make a\nplan at all for lack of other alternatives.\n\nThere is also some weirdness around needing to force use of tidscan\nif we have WHERE CURRENT OF. But perhaps a different hack could be\nused for that.\n\nWe also have this for hashjoin:\n\n\t * If the bucket holding the inner MCV would exceed hash_mem, we don't\n\t * want to hash unless there is really no other alternative, so apply\n\t * disable_cost.\n\nI'm content to leave that be, if we can't remove disable_cost\nentirely.\n\nWhat I'm wondering at this point is whether we need to trouble with\nimplementing the separate-disabledness-count method, if we trim back\nthe number of places using disable_cost to the absolute minimum.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 May 2024 12:57:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Sun, 5 May 2024 at 04:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > That doesn't get you the benefits of fewer CPU cycles, but where did\n> > that come from as a motive to change this? There's no shortage of\n> > other ways to make the planner faster if that's an issue.\n>\n> The concern was to not *add* CPU cycles in order to make this area\n> better. But I do tend to agree that we've exhausted all the other\n> options.\n\nIt really looks to me that Robert was talking about not generating\npaths for disabled path types. 
He did write \"just to save CPU cycles\"\nin the paragraph I quoted.\n\nI think we should concern ourselves with adding overhead to add_path()\n*only* when we actually see a patch which slows it down in a way that\nwe can measure. I find it hard to imagine that adding a single\ncomparison for every Path is measurable. Each of these paths has been\npalloced and costed, both of which are significantly more expensive\nthan adding another comparison to compare_path_costs_fuzzily(). I'm\nonly willing for benchmarks on an actual patch to prove me wrong on\nthat. Nothing else. add_path() has become a rat's nest of conditions\nover the years and those seem to have made it without concerns about\nperformance.\n\n> BTW, I looked through costsize.c just now to see exactly what we are\n> using disable_cost for, and it seemed like a majority of the cases are\n> just wrong. Where possible, we should implement a plan-type-disable\n> flag by not generating the associated Path in the first place, not by\n> applying disable_cost to it. But it looks like a lot of people have\n> erroneously copied the wrong logic. I would say that only these plan\n> types should use the disable_cost method:\n>\n> seqscan\n> nestloop join\n> sort\n\nI think this oversimplifies the situation. I only spent 30 seconds\nlooking and I saw cases where this would cause issues. If\nenable_hashagg is false, we could fail to produce some plans where the\ntype is sortable but not hashable. There's also an issue with nested\nloops being unable to FULL OUTER JOIN. However, I do agree that there\nare some in there that are adding disable_cost that should be done by\njust not creating the Path. enable_gathermerge is one.\nenable_bitmapscan is probably another.\n\nI understand you only talked about the cases adding disable_cost in\ncostsize.c. But just as a reminder, there are other things we need to\nbe careful not to break. For example, enable_indexonlyscan=false\nshould defer to still making an index scan. 
Nobody who disables\nenable_indexonlyscan without disabling enable_indexscan wants queries\nthat are eligible to use IOS to use seq scan instead. They'd still\nwant Index Scan to be considered, otherwise they'd have disabled\nenable_indexscan.\n\nDavid\n\n\n", "msg_date": "Sun, 5 May 2024 12:07:30 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Sat, May 4, 2024 at 9:16 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I don't think you'd need to wait longer than where we do set_cheapest\n> and find no paths to find out that there's going to be a problem.\n\nI'm confused by this response, because I thought that the main point\nof my previous email was explaining why that's not true. I showed an\nexample where you do find paths at set_cheapest() time and yet are\nunable to complete planning.\n\n> I don't think redoing planning is going to be easy or even useful. I\n> mean what do you change when you replan? You can't just do\n> enable_seqscan and enable_nestloop as if there's no index to provide\n> sorted input and the plan requires some sort, then you still can't\n> produce a plan. Adding enable_sort to the list does not give me much\n> confidence we'll never fail to produce a plan either. It just seems\n> impossible to know which of the disabled ones caused the RelOptInfo to\n> have no paths. Also, you might end up enabling one that caused the\n> planner to do something different than it would do today. For\n> example, a Path that today incurs 2x disable_cost vs a Path that only\n> receives 1x disable_cost might do something different if you just went\n> and enabled a bunch of enable* GUCs before replanning.\n\nI agree that there are problems here, both in terms of implementation\ncomplexity and also in terms of what behavior you actually get, but I\ndo not think that a proposal which changes some current behavior\nshould be considered dead on arrival. 
Whatever new behavior we might\nwant to implement needs to make sense, and there need to be good\nreasons for making whatever changes are contemplated, but I don't\nthink we should take the position that it has to be identical to\ncurrent.\n\n> I think the int Path.disabledness idea is worth coding up to try it.\n> I imagine that a Path will incur the maximum of its subpath's\n> disabledness's then add_path() just needs to prefer lower-valued\n> disabledness Paths.\n\nIt definitely needs to be sum, not max. Otherwise you can't get the\nmatest example from the regression tests right, where one child lacks\nthe ability to comply with the GUC setting.\n\n> That doesn't get you the benefits of fewer CPU cycles, but where did\n> that come from as a motive to change this? There's no shortage of\n> other ways to make the planner faster if that's an issue.\n\nWell, I don't agree with that at all. If there are lots of ways to\nmake the planner faster, we should definitely do a bunch of that\nstuff, because \"will slow down the planner too much\" has been a\nleading cause of proposed planner patches being rejected for as long\nas I've been involved with the project. My belief was that we were\nrather short of good ideas in that area, actually. But even if it's\ntrue that we have lots of other ways to speed up the planner, that\ndoesn't mean that it wouldn't be good to do it here, too.\n\nStepping back a bit, my current view of this area is: disable_cost is\nhighly imperfect both as an idea and as implemented in PostgreSQL.\nAlthough I'm discovering that the current implementation gets more\nthings right than I had realized, it also sometimes gets things wrong.\nThe original poster gave an example of that, and there are others.\nFurthermore, the current implementation has some weird\ninconsistencies. Therefore, I would like something better. Better, to\nme, could mean any combination of (a) superior behavior, (b) superior\nperformance, and (c) simpler, more elegant code. 
In a perfect world,\nwe'd be able to come up with something that wins in all three of those\nareas, but I'm not seeing a way to achieve that, so I'm trying to\nfigure out what is achievable. And because we need to reach consensus\non whatever is to be done, I'm sharing raw research results rather\nthan just dropping a completed patch. I don't think it's at all easy\nto understand what the realistic possibilities are in this area;\ncertainly it isn't for me. At some point I'm hoping that there will be\na patch (or a bunch of patches) that we can all agree are an\nimprovement over now and the best we can reasonably do, but I don't\nyet know what the shape of those will be, because I'm still trying to\nunderstand (and document on-list) what all the problems are.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 May 2024 08:27:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Sat, May 4, 2024 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There is also some weirdness around needing to force use of tidscan\n> if we have WHERE CURRENT OF. But perhaps a different hack could be\n> used for that.\n\nYeah, figuring out what to do about this was the trickiest part of the\nexperimental patch that I wrote last week. The idea of the current\ncode is that cost_qual_eval_walker charges disable_cost for\nCurrentOfExpr, but cost_tidscan then subtracts disable_cost if\ntidquals contains a CurrentOfExpr, so that we effectively disable\neverything except TID scan paths and, I think, also any TID scan paths\nthat don't use the CurrentOfExpr as a qual. 
I'm not entirely sure\nwhether the last can happen, but I imagine that it might be possible\nif the cursor refers to a query that itself contains some other kind\nof TID qual.\n\nIt's not very clear that this mechanism is actually 100% reliable,\nbecause we know it's possible in general for the costs of two paths to\nbe different by more than disable_cost. Maybe that's not possible in\nthis specific context, though: I'm not sure.\n\nThe approach I took for my experimental patch was pretty grotty, and\nprobably not quite complete, but basically I defined the case where we\ncurrently subtract out disable_cost as a \"forced TID-scan\". I passed\naround a Boolean called forcedTidScan which gets set to true if we\ndiscover that some plan is a forced TID-scan path, and then we discard\nany other paths and then only add other forced TID-scan paths after\nthat point. There can be more than one, because of parameterization.\n\nBut I think that the right thing to do is probably to pull some of the\nlogic up out of create_tidscan_paths() and decide ONCE whether we're\nin a forced TID-scan situation or not. If we are, then\nset_plain_rel_pathlist() should arrange to create only forced TID-scan\npaths; otherwise, it should proceed as it does now.\n\nMaybe if I try to do that I'll find problems, but the current approach\nseems backwards to me, like going to a restaurant and ordering one of\neverything on the menu, then cancelling all of the orders except the\nstuff you actually want.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 May 2024 09:39:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Mon, May 6, 2024 at 9:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> It's not very clear that this mechanism is actually 100% reliable,\n\nIt isn't. Here's a test case. 
As a non-superuser, do this:\n\ncreate table foo (a int, b text, primary key (a));\ninsert into foo values (1, 'Apple');\nalter table foo enable row level security;\nalter table foo force row level security;\ncreate policy p1 on foo as permissive using (ctid in ('(0,1)', '(0,2)'));\nbegin;\ndeclare c cursor for select * from foo;\nfetch from c;\nexplain update foo set b = 'Manzana' where current of c;\nupdate foo set b = 'Manzana' where current of c;\n\nThe explain produces this output:\n\n Update on foo (cost=10000000000.00..10000000008.02 rows=0 width=0)\n -> Tid Scan on foo (cost=10000000000.00..10000000008.02 rows=1 width=38)\n TID Cond: (ctid = ANY ('{\"(0,1)\",\"(0,2)\"}'::tid[]))\n Filter: CURRENT OF c\n\nUnless I'm quite confused, the point of the code is to force\nCurrentOfExpr to be a TID Cond, and it normally succeeds in doing so,\nbecause WHERE CURRENT OF cursor_name has to be the one and only WHERE\ncondition for a normal UPDATE. I tried various cases involving views\nand CTEs and got nowhere. But then I wrote a patch to make the\nregression tests fail if a baserel's restrictinfo list contains a\nCurrentOfExpr and also some other qual, and a couple of row-level\nsecurity tests failed (and nothing else). Which then allowed me to\nconstruct the example above, where there are two possible TID quals\nand the logic in tidpath.c latches onto the wrong one. The actual\nUPDATE fails like this:\n\nERROR: WHERE CURRENT OF is not supported for this table type\n\n...because ExecEvalCurrentOfExpr supposes that the only way it can be\nreached is for an FDW without the necessary support, but actually in\nthis case it's planner error that gets us here.\n\nFortunately, there's no real reason for anyone to ever do something\nlike this, or at least I can't see one, so the fact that it doesn't\nwork probably doesn't really matter that much. 
And you can argue that\nthe only problem here is that the costing hack just didn't get updated\nfor RLS and now needs to be a bit more clever. But I think it'd be\nbetter to find a way of making it less hacky. With the way the code is\nstructured right now, the chances of anyone understanding that RLS\nmight have an impact on its correctness were just about nil, IMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 May 2024 13:51:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, May 6, 2024 at 9:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> It's not very clear that this mechanism is actually 100% reliable,\n\n> It isn't. Here's a test case.\n\nVery interesting.\n\n> ... Which then allowed me to\n> construct the example above, where there are two possible TID quals\n> and the logic in tidpath.c latches onto the wrong one.\n\nHmm. Without having traced through it, I'm betting that the\nCurrentOfExpr qual is rejected as a tidqual because it's not\nconsidered leakproof. It's not obvious to me why we couldn't consider\nit as leakproof, though. If we don't want to do that in general,\nthen we need some kind of hack in TidQualFromRestrictInfo to accept\nCurrentOfExpr quals anyway.\n\nIn general I think you're right that something less rickety than\nthe disable_cost hack would be a good idea to ensure the desired\nTidPath gets chosen, but this problem is not the fault of that.\nWe're not making the TidPath with the correct contents in the first\nplace.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 May 2024 14:44:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "I wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> ... 
Which then allowed me to\n>> construct the example above, where there are two possible TID quals\n>> and the logic in tidpath.c latches onto the wrong one.\n\n> Hmm. Without having traced through it, I'm betting that the\n> CurrentOfExpr qual is rejected as a tidqual because it's not\n> considered leakproof.\n\nNah, I'm wrong: we do treat it as leakproof, and the comment about\nthat in contain_leaked_vars_walker shows that the interaction with\nRLS quals *was* thought about. What wasn't thought about was the\npossibility of RLS quals that themselves could be usable as tidquals,\nwhich breaks this assumption in TidQualFromRestrictInfoList:\n\n * Stop as soon as we find any usable CTID condition. In theory we\n * could get CTID equality conditions from different AND'ed clauses,\n * in which case we could try to pick the most efficient one. In\n * practice, such usage seems very unlikely, so we don't bother; we\n * just exit as soon as we find the first candidate.\n\nThe executor doesn't seem to be prepared to cope with multiple AND'ed\nTID clauses (only OR'ed ones). 
So we need to fix this at least to the\nextent of looking for a CurrentOfExpr qual, and preferring that over\nanything else.\n\nI'm also now wondering about this assumption in the executor:\n\n /* CurrentOfExpr could never appear OR'd with something else */\n Assert(list_length(tidstate->tss_tidexprs) == 1 ||\n !tidstate->tss_isCurrentOf);\n\nIt still seems OK, because anything that might come in from RLS quals\nwould be AND'ed not OR'ed with the CurrentOfExpr.\n\n> In general I think you're right that something less rickety than\n> the disable_cost hack would be a good idea to ensure the desired\n> TidPath gets chosen, but this problem is not the fault of that.\n> We're not making the TidPath with the correct contents in the first\n> place.\n\nStill true.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 May 2024 15:26:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Mon, May 6, 2024 at 3:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Nah, I'm wrong: we do treat it as leakproof, and the comment about\n> that in contain_leaked_vars_walker shows that the interaction with\n> RLS quals *was* thought about. What wasn't thought about was the\n> possibility of RLS quals that themselves could be usable as tidquals,\n> which breaks this assumption in TidQualFromRestrictInfoList:\n>\n> * Stop as soon as we find any usable CTID condition. In theory we\n> * could get CTID equality conditions from different AND'ed clauses,\n> * in which case we could try to pick the most efficient one. In\n> * practice, such usage seems very unlikely, so we don't bother; we\n> * just exit as soon as we find the first candidate.\n\nRight. I had noticed this but didn't spell it out.\n\n> The executor doesn't seem to be prepared to cope with multiple AND'ed\n> TID clauses (only OR'ed ones). 
So we need to fix this at least to the\n> extent of looking for a CurrentOfExpr qual, and preferring that over\n> anything else.\n>\n> I'm also now wondering about this assumption in the executor:\n>\n> /* CurrentOfExpr could never appear OR'd with something else */\n> Assert(list_length(tidstate->tss_tidexprs) == 1 ||\n> !tidstate->tss_isCurrentOf);\n>\n> It still seems OK, because anything that might come in from RLS quals\n> would be AND'ed not OR'ed with the CurrentOfExpr.\n\nThis stuff I had not noticed.\n\n> > In general I think you're right that something less rickety than\n> > the disable_cost hack would be a good idea to ensure the desired\n> > TidPath gets chosen, but this problem is not the fault of that.\n> > We're not making the TidPath with the correct contents in the first\n> > place.\n>\n> Still true.\n\nI'll look into this, unless you want to do it.\n\nIncidentally, another thing I just noticed is that\nIsCurrentOfClause()'s test for (node->cvarno == rel->relid) is\npossibly dead code. At least, there are no examples in our test suite\nwhere it fails to hold. Which seems like it makes sense, because if it\ndidn't, then how did the clause end up in baserestrictinfo? Maybe this\nis worth keeping as defensive coding, or maybe it should be changed to\nan Assert or something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 May 2024 15:58:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'll look into this, unless you want to do it.\n\nI have a draft patch already. Need to add a test case.\n\n> Incidentally, another thing I just noticed is that\n> IsCurrentOfClause()'s test for (node->cvarno == rel->relid) is\n> possibly dead code. At least, there are no examples in our test suite\n> where it fails to hold. 
Which seems like it makes sense, because if it\n> didn't, then how did the clause end up in baserestrictinfo? Maybe this\n> is worth keeping as defensive coding, or maybe it should be changed to\n> an Assert or something.\n\nI wouldn't remove it, but maybe an Assert is good enough. The tests\non Vars' varno should be equally pointless, no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 May 2024 16:10:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Mon, May 6, 2024 at 8:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Stepping back a bit, my current view of this area is: disable_cost is\n> highly imperfect both as an idea and as implemented in PostgreSQL.\n> Although I'm discovering that the current implementation gets more\n> things right than I had realized, it also sometimes gets things wrong.\n> The original poster gave an example of that, and there are others.\n> Furthermore, the current implementation has some weird\n> inconsistencies. Therefore, I would like something better.\n\nFWIW I always found those weird inconsistencies to be annoying at\nbest, and confusing at worst. I speak as somebody that uses\ndisable_cost a lot.\n\nI certainly wouldn't ask anybody to make it a priority for that reason\nalone -- it's not *that* bad. 
I've given my opinion on this because\n> it's already under discussion.\n\nThanks, it's good to have other perspectives.\n\nHere are some patches for discussion.\n\n0001 gets rid of disable_cost as a mechanism for forcing a TID scan\nplan to be chosen when CurrentOfExpr is present. Instead, it arranges\nto generate only the valid path when that case occurs, and skip\neverything else. I think this is a good cleanup, and it doesn't seem\ntotally impossible that it actually prevents a failure in some extreme\ncase.\n\n0002 cleans up the behavior of enable_indexscan and\nenable_indexonlyscan. Currently, setting enable_indexscan=false adds\ndisable_cost to both the cost of index scans and the cost of\nindex-only scans. I think that's indefensible and, in fact, a bug,\nalthough I believe David Rowley disagrees. With this patch, we simply\ndon't generate index scans if enable_indexscan=false, and we don't\ngenerate index-only scans if enable_indexonlyscan=false, which seems a\nlot more consistent to me. However, I did revise one major thing from\nthe patch I posted before, per feedback from David Rowley and also per\nmy own observations: in this version, if enable_indexscan=true and\nenable_indexonlyscan=false, we'll generate index-scan paths for any\ncases where, with both set to true, we would have only generated\nindex-only scan paths. That change makes the behavior of this patch a\nlot more comprehensible and intuitive: the only regression test\nchanges are places where somebody expected that they could disable\nboth index scans and index-only scans by setting\nenable_indexscan=false.\n\n0003 and 0004 extend the approach of \"just don't generate the disabled\npath\" to bitmap scans and gather merge, respectively. I think these\nare more debatable, mostly because it's not clear how far we can\nreally take this approach. Neither breaks any test cases, and 0003 is\nclosely related to the work done in 0002, which seems like a point in\nits favor. 
0004 was simply the only other case where it was obvious to\nme that this kind of approach made sense. In my view, it makes most\nsense to use this kind of approach for planner behaviors that seem\nlike they're sort of optional: like if you don't use gather merge, you\ncan still use gather, and if you don't use index scans, you can still\nuse sequential scans. With all these patches applied, the remaining\ncases where we rely on disable_cost are:\n\nsequential scans\nsorts\nhash aggregation\nall 3 join types\nhash joins where a bucket holding the inner MCV would exceed hash_mem\n\nSequential scans are clearly a last-ditch method. I find it a bit hard\nto decide whether hashing or sorting is the default, especially given\nthe asymmetry between enable_sort - presumptively anywhere - and\nenable_hashagg - specific to aggregation. As for the join types, it's\ntempting to consider nested-loop the default type -- it's the only way\nto satisfy parameterizations, for instance -- but the fact that it's\nthe only method that can't do a full join undermines that position in\nmy book. But, I don't want to pretend like I have all the answers\nhere, either; I'm just sharing some thoughts.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 May 2024 16:19:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, May 7, 2024 at 4:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Here are some patches for discussion.\n\nWell, that didn't generate much discussion, but here I am trying\nagain. Here I've got patches 0001 and 0002 from my previous posting;\nI've dropped 0003 and 0004 from the previous set for now so as not to\ndistract from the main event, but they may still be a good idea.\nInstead I've got an 0003 and an 0004 that implement the \"count of\ndisabled nodes\" approach that we have discussed previously. 
This seems\nto work fine, unlike the approaches I tried earlier. I think this is\nthe right direction to go, but I'd like to know what concerns people\nmight have.\n\nThis doesn't completely remove disable_cost, because hash joins still\nadd it to the cost when it's impossible to fit the MCV value into\nwork_mem. I'm not sure what to do with that. Continuing to use\ndisable_cost in that one scenario seems OK to me. We could\nalternatively make that scenario bump disabled_nodes, but I don't\nreally want to confuse the planner not wanting to do something with\nthe user telling the planner not to do something, so I don't think\nthat's a good idea. Or we could rejigger things so that in that case\nwe don't generate the plan at all. I'm not sure why we don't do that\nalready, actually.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 12 Jun 2024 11:35:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Hi,\n\nOn 2024-06-12 11:35:48 -0400, Robert Haas wrote:\n> Subject: [PATCH v2 3/4] Treat the # of disabled nodes in a path as a separate\n> cost metric.\n> \n> Previously, when a path type was disabled by e.g. enable_seqscan=false,\n> we either avoided generating that path type in the first place, or\n> more commonly, we added a large constant, called disable_cost, to the\n> estimated startup cost of that path. This latter approach can distort\n> planning. For instance, an extremely expensive non-disabled path\n> could seem to be worse than a disabled path, especially if the full\n> cost of that path node need not be paid (e.g. 
due to a Limit).\n> Or, as in the regression test whose expected output changes with this\n> commit, the addition of disable_cost can make two paths that would\n> normally be distinguishable by cost seem to have fuzzily the same cost.\n> \n> To fix that, we now count the number of disabled path nodes and\n> consider that a high-order component of both the startup and total cost. Hence, the\n> path list is now sorted by disabled_nodes and then by total_cost,\n> instead of just by the latter, and likewise for the partial path list.\n> It is important that this number is a count and not simply a Boolean;\n> else, as soon as we're unable to respect disabled path types in all\n> portions of the path, we stop trying to avoid them where we can.\n\n\n> \tif (criterion == STARTUP_COST)\n> \t{\n> \t\tif (path1->startup_cost < path2->startup_cost)\n> @@ -118,6 +127,15 @@ compare_fractional_path_costs(Path *path1, Path *path2,\n> \tCost\t\tcost1,\n> \t\t\t\tcost2;\n> \n> +\t/* Number of disabled nodes, if different, trumps all else. */\n> +\tif (unlikely(path1->disabled_nodes != path2->disabled_nodes))\n> +\t{\n> +\t\tif (path1->disabled_nodes < path2->disabled_nodes)\n> +\t\t\treturn -1;\n> +\t\telse\n> +\t\t\treturn +1;\n> +\t}\n\nI suspect it's going to be ok, because the branch is going to be very\npredictable in normal workloads, but I still worry a bit about making\ncompare_path_costs_fuzzily() more expensive. For more join-heavy queries it\ncan really show up and there are plenty of ORM-generated join-heavy query\nworkloads.\n\nIf costs were 32-bit integers, I'd have suggested just stashing the disabled\ncounts in the upper 32 bits of a 64-bit integer. But ...\n\n<can't resist trying if I see overhead>\n\n\nIn an extreme case I can see a tiny bit of overhead, but not enough to be\nworth worrying about. 
Mostly because we're so profligate in doing\nbms_overlap() that cost comparisons don't end up mattering as much - I seem to\nrecall that being different in the not distant past though.\n\n\nAside: I'm somewhat confused by add_paths_to_joinrel()'s handling of\nmergejoins_allowed. If mergejoins are disabled we end up reaching\nmatch_unsorted_outer() in more cases than with mergejoins enabled. E.g. we\nonly set mergejoin_enabled for right joins inside select_mergejoin_clauses(),\nbut we don't call select_mergejoin_clauses() if !enable_mergejoin and jointype\n!= FULL. I, what?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:11:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, Jun 12, 2024 at 2:11 PM Andres Freund <andres@anarazel.de> wrote:\n> <can't resist trying if I see overhead>\n>\n> In an extreme case i can see a tiny bit of overhead, but not enough to be\n> worth worrying about. Mostly because we're so profligate in doing\n> bms_overlap() that cost comparisons don't end up mattering as much - I seem to\n> recall that being different in the not distant past though.\n\nThere are very few things I love more than when you can't resist\ntrying to break my patches and yet fail to find a problem. Granted the\nlatter part only happens once a century or so, but I'll take it.\n\n> Aside: I'm somewhat confused by add_paths_to_joinrel()'s handling of\n> mergejoins_allowed. If mergejoins are disabled we end up reaching\n> match_unsorted_outer() in more cases than with mergejoins enabled. E.g. we\n> only set mergejoin_enabled for right joins inside select_mergejoin_clauses(),\n> but we don't call select_mergejoin_clauses() if !enable_mergejoin and jointype\n> != FULL. I, what?\n\nI agree this logic is extremely confusing, but \"we only set\nmergejoin_enabled for right joins inside select_mergejoin_clauses()\"\ndoesn't seem to be true. 
It starts out true, and always stays true\nexcept for right, right-anti, and full joins, where\nselect_mergejoin_clauses() can set it to false. Since the call to\nmatch_unsorted_outer() is gated by mergejoin_enabled, you might think\nthat we'd skip considering nested loops on the strength of not being\nable to do a merge join, but comment \"2.\" in add_paths_to_joinrel\nexplains that the join types for which mergejoin_enabled can end up\nfalse aren't supported by nested loops anyway. Still, this logic is\nreally tortured.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:33:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Hi,\n\nOn 2024-06-12 14:33:31 -0400, Robert Haas wrote:\n> On Wed, Jun 12, 2024 at 2:11 PM Andres Freund <andres@anarazel.de> wrote:\n> > <can't resist trying if I see overhead>\n> >\n> > In an extreme case i can see a tiny bit of overhead, but not enough to be\n> > worth worrying about. Mostly because we're so profligate in doing\n> > bms_overlap() that cost comparisons don't end up mattering as much - I seem to\n> > recall that being different in the not distant past though.\n> \n> There are very few things I love more than when you can't resist\n> trying to break my patches and yet fail to find a problem. Granted the\n> latter part only happens once a century or so, but I'll take it.\n\n:)\n\n\nToo high cost in path cost comparison is what made me look at the PG code for\nthe first time, IIRC :)\n\n\n\n> > Aside: I'm somewhat confused by add_paths_to_joinrel()'s handling of\n> > mergejoins_allowed. If mergejoins are disabled we end up reaching\n> > match_unsorted_outer() in more cases than with mergejoins enabled. E.g. we\n> > only set mergejoin_enabled for right joins inside select_mergejoin_clauses(),\n> > but we don't call select_mergejoin_clauses() if !enable_mergejoin and jointype\n> > != FULL. 
I, what?\n> \n> I agree this logic is extremely confusing, but \"we only set\n> mergejoin_enabled for right joins inside select_mergejoin_clauses()\"\n> doesn't seem to be true.\n\nSorry, should have been more precise. With \"set\" I didn't mean set to true,\nbut that it's only modified within select_mergejoin_clauses().\n\n\n> It starts out true, and always stays true except for right, right-anti, and\n> full joins, where select_mergejoin_clauses() can set it to false. Since the\n> call to match_unsorted_outer() is gated by mergejoin_enabled, you might\n> think that we'd skip considering nested loops on the strength of not being\n> able to do a merge join, but comment \"2.\" in add_paths_to_joinrel explains\n> that the join types for which mergejoin_enabled can end up false aren't\n> supported by nested loops anyway. Still, this logic is really tortured.\n\nAgree that that's the logic - but doesn't that mean we'll consider nestloops\nfor e.g. right joins iff enable_mergejoin=false?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Jun 2024 11:48:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, Jun 12, 2024 at 2:48 PM Andres Freund <andres@anarazel.de> wrote:\n> Sorry, should have been more precise. With \"set\" I didn't mean set to true,\n> but that it's only modified within select_mergejoin_clauses().\n\nOh. \"set\" has more than one relevant meaning here.\n\n> > It starts out true, and always stays true except for right, right-anti, and\n> > full joins, where select_mergejoin_clauses() can set it to false. 
Since the\n> > call to match_unsorted_outer() is gated by mergejoin_enabled, you might\n> > think that we'd skip considering nested loops on the strength of not being\n> > able to do a merge join, but comment \"2.\" in add_paths_to_joinrel explains\n> > that the join types for which mergejoin_enabled can end up false aren't\n> > supported by nested loops anyway. Still, this logic is really tortured.\n>\n> Agree that that's the logic - but doesn't that mean we'll consider nestloops\n> for e.g. right joins iff enable_mergejoin=false?\n\nNo, because that function has its own internal guards. See nestjoinOK.\n\nBut don't misunderstand me: I'm not defending the status quo. The\nwhole thing seems like a Rube Goldberg machine to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 15:11:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, Jun 12, 2024 at 11:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Well, that didn't generate much discussion, but here I am trying\n> again. Here I've got patches 0001 and 0002 from my previous posting;\n> I've dropped 0003 and 0004 from the previous set for now so as not to\n> distract from the main event, but they may still be a good idea.\n> Instead I've got an 0003 and an 0004 that implement the \"count of\n> disabled nodes\" approach that we have discussed previously. This seems\n> to work fine, unlike the approaches I tried earlier. I think this is\n> the right direction to go, but I'd like to know what concerns people\n> might have.\n\nHere is a rebased patch set, where I also fixed pgindent damage and a\ncouple of small oversights in 0004.\n\nI am hoping to get these committed some time in July. 
So if somebody\nthinks that's too soon or thinks it shouldn't happen at all, please\ndon't wait too long to let me know about that.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 28 Jun 2024 11:46:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 28/06/2024 18:46, Robert Haas wrote:\n> On Wed, Jun 12, 2024 at 11:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Well, that didn't generate much discussion, but here I am trying\n>> again. Here I've got patches 0001 and 0002 from my previous posting;\n>> I've dropped 0003 and 0004 from the previous set for now so as not to\n>> distract from the main event, but they may still be a good idea.\n>> Instead I've got an 0003 and an 0004 that implement the \"count of\n>> disabled nodes\" approach that we have discussed previously. This seems\n>> to work fine, unlike the approaches I tried earlier. I think this is\n>> the right direction to go, but I'd like to know what concerns people\n>> might have.\n> \n> Here is a rebased patch set, where I also fixed pgindent damage and a\n> couple of small oversights in 0004.\n> \n> I am hoping to get these committed some time in July. So if somebody\n> thinks that's too soon or thinks it shouldn't happen at all, please\n> don't wait too long to let me know about that.\n\nv3-0001-Remove-grotty-use-of-disable_cost-for-TID-scan-pl.patch:\n\n+1, this seems ready for commit\n\nv3-0002-Rationalize-behavior-of-enable_indexscan-and-enab.patch:\n\nI fear this will break people's applications, if they are currently \nforcing a sequential scan with \"set enable_indexscan=off\". Now they will \nneed to do \"set enable_indexscan=off; set enable_indexonlyscan=off\" for \nthe same effect. 
Maybe it's acceptable, disabling sequential scans to \nforce an index scan is much more common than the other way round.\n\nv3-0003-Treat-number-of-disabled-nodes-in-a-path-as-a-sep.patch:\n\n> @@ -1318,6 +1342,12 @@ cost_tidscan(Path *path, PlannerInfo *root,\n> \tstartup_cost += path->pathtarget->cost.startup;\n> \trun_cost += path->pathtarget->cost.per_tuple * path->rows;\n> \n> +\t/*\n> +\t * There are assertions above verifying that we only reach this function\n> +\t * either when enable_tidscan=true or when the TID scan is the only legal\n> +\t * path, so it's safe to set disabled_nodes to zero here.\n> +\t */\n> +\tpath->disabled_nodes = 0;\n> \tpath->startup_cost = startup_cost;\n> \tpath->total_cost = startup_cost + run_cost;\n> }\n\nSo if you have enable_tidscan=off, and have a query with \"WHERE CURRENT \nOF foo\" that is planned with a TID scan, we set disable_nodes = 0? That \nsounds wrong, shouldn't disable_nodes be 1 in that case? It probably \ncannot affect the rest of the plan, given that \"WHERE CURRENT OF\" is \nonly valid in an UPDATE or DELETE, but still. At least it deserves a \nbetter explanation in the comment.\n\n> diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c\n> index 6b64c4a362..20236e8c4d 100644\n> --- a/src/backend/optimizer/plan/createplan.c\n> +++ b/src/backend/optimizer/plan/createplan.c\n> @@ -25,6 +25,7 @@\n> #include \"nodes/extensible.h\"\n> #include \"nodes/makefuncs.h\"\n> #include \"nodes/nodeFuncs.h\"\n> +#include \"nodes/print.h\"\n> #include \"optimizer/clauses.h\"\n> #include \"optimizer/cost.h\"\n> #include \"optimizer/optimizer.h\"\n\nleft over from debugging?\n\n> @@ -68,6 +68,15 @@ static bool pathlist_is_reparameterizable_by_child(List *pathlist,\n> int\n> compare_path_costs(Path *path1, Path *path2, CostSelector criterion)\n> {\n> +\t/* Number of disabled nodes, if different, trumps all else. 
*/\n> +\tif (unlikely(path1->disabled_nodes != path2->disabled_nodes))\n> +\t{\n> +\t\tif (path1->disabled_nodes < path2->disabled_nodes)\n> +\t\t\treturn -1;\n> +\t\telse\n> +\t\t\treturn +1;\n> +\t}\n> +\n> \tif (criterion == STARTUP_COST)\n> \t{\n> \t\tif (path1->startup_cost < path2->startup_cost)\n\nIs \"unlikely()\" really appropriate here (and elsewhere in the patch)? If \nyou run with enable_seqscan=off but have no indexes, you could take that \npath pretty often.\n\nIf this function needs optimizing, I'd suggest splitting it into two \nfunctions, one for comparing the startup cost and another for the total \ncost. Almost all callers pass a constant for that argument, so they \nmight as well call the correct function directly and avoid the branch \nfor that.\n\n> @@ -658,6 +704,20 @@ add_path_precheck(RelOptInfo *parent_rel,\n> \t\tPath\t *old_path = (Path *) lfirst(p1);\n> \t\tPathKeysComparison keyscmp;\n> \n> +\t\t/*\n> +\t\t * Since the pathlist is sorted by disabled_nodes and then by\n> +\t\t * total_cost, we can stop looking once we reach a path with more\n> +\t\t * disabled nodes, or the same number of disabled nodes plus a\n> +\t\t * total_cost larger than the new path's.\n> +\t\t */\n> +\t\tif (unlikely(old_path->disabled_nodes != disabled_nodes))\n> +\t\t{\n> +\t\t\tif (disabled_nodes < old_path->disabled_nodes)\n> +\t\t\t\tbreak;\n> +\t\t}\n> +\t\telse if (total_cost <= old_path->total_cost * STD_FUZZ_FACTOR)\n> +\t\t\tbreak;\n> +\n> \t\t/*\n> \t\t * We are looking for an old_path with the same parameterization (and\n> \t\t * by assumption the same rowcount) that dominates the new path on\n> @@ -666,39 +726,27 @@ add_path_precheck(RelOptInfo *parent_rel,\n> \t\t *\n> \t\t * Cost comparisons here should match compare_path_costs_fuzzily.\n> \t\t */\n> -\t\tif (total_cost > old_path->total_cost * STD_FUZZ_FACTOR)\n> +\t\t/* new path can win on startup cost only if consider_startup */\n> +\t\tif (startup_cost > old_path->startup_cost * 
STD_FUZZ_FACTOR ||\n> +\t\t\t!consider_startup)\n> \t\t{\n\nThe \"Cost comparisons here should match compare_path_costs_fuzzily\" \ncomment also applies to the check on total_cost that you moved up. Maybe \nmove up the comment to the beginning of the loop.\n\nv3-0004-Show-number-of-disabled-nodes-in-EXPLAIN-ANALYZE-.patch:\n\nIt's surprising that the \"Disable Nodes\" is printed even with the COSTS \nOFF option. It's handy for our regression tests, it's good to print them \nthere, but it feels wrong.\n\nCould we cram it into the \"cost=... rows=...\" part? And perhaps a marker \nthat a node was disabled would be more user friendly than showing the \ncumulative count? Something like:\n\npostgres=# set enable_material=off;\nSET\npostgres=# set enable_seqscan=off;\nSET\npostgres=# set enable_bitmapscan=off;\nSET\npostgres=# explain select * from foo, bar;\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Nested Loop (cost=0.15..155632.40 rows=6502500 width=8)\n -> Index Only Scan using foo_i_idx on foo (cost=0.15..82.41 \nrows=2550 width=4)\n -> Seq Scan on bar (cost=0.00..35.50 (disabled) rows=2550 width=4)\n(5 rows)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 17:57:27 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Thanks for the review!\n\nOn Tue, Jul 2, 2024 at 10:57 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> v3-0001-Remove-grotty-use-of-disable_cost-for-TID-scan-pl.patch:\n>\n> +1, this seems ready for commit\n\nCool.\n\n> v3-0002-Rationalize-behavior-of-enable_indexscan-and-enab.patch:\n>\n> I fear this will break people's applications, if they are currently\n> forcing a sequential scan with \"set enable_indexscan=off\". Now they will\n> need to do \"set enable_indexscan=off; set enable_indexonlyscan=off\" for\n> the same effect. 
Maybe it's acceptable, disabling sequential scans to\n> force an index scan is much more common than the other way round.\n\nWell, I think it's pretty important that the GUC does what the name\nand documentation say it does. One could of course argue that we ought\nnot to have two different GUCs -- or perhaps even that we ought not to\nhave two different plan nodes -- and I think those arguments might be\nquite defensible. One could also argue for another interface, like a\nGUC enable_indexscan and a value that is a comma-separated list\nconsisting of plain, bitmap, and index-only, or\nnone/0/false/any/1/true -- and that might also be quite defensible.\nBut I don't think one can have a GUC called enable_indexscan and\nanother GUC called enable_indexonlyscan and argue that it's OK for the\nformer one to affect both kinds of scans. That's extremely confusing\nand, well, just plain wrong. I think this is a bug, and I'm not going\nto back-patch the fix precisely because of the considerations you\nnote, but I really don't think we can leave it like this. The current\nbehavior is so nonsensical that the code is essentially unmaintainable,\nor at least I think it is.\n\n> v3-0003-Treat-number-of-disabled-nodes-in-a-path-as-a-sep.patch:\n>\n> > @@ -1318,6 +1342,12 @@ cost_tidscan(Path *path, PlannerInfo *root,\n> > startup_cost += path->pathtarget->cost.startup;\n> > run_cost += path->pathtarget->cost.per_tuple * path->rows;\n> >\n> > + /*\n> > + * There are assertions above verifying that we only reach this function\n> > + * either when enable_tidscan=true or when the TID scan is the only legal\n> > + * path, so it's safe to set disabled_nodes to zero here.\n> > + */\n> > + path->disabled_nodes = 0;\n> > path->startup_cost = startup_cost;\n> > path->total_cost = startup_cost + run_cost;\n> > }\n>\n> So if you have enable_tidscan=off, and have a query with \"WHERE CURRENT\n> OF foo\" that is planned with a TID scan, we set disable_nodes = 0? 
That\n> sounds wrong, shouldn't disable_nodes be 1 in that case? It probably\n> cannot affect the rest of the plan, given that \"WHERE CURRENT OF\" is\n> only valid in an UPDATE or DELETE, but still. At least it deserves a\n> better explanation in the comment.\n\nSo, right now, when the planner disregards enable_WHATEVER because it\nthinks it's the only way to implement something, it doesn't add\ndisable_cost. So, I made the patch not increment disabled_nodes in\nthat case. Maybe we want to rethink that choice at some point, but it\ndoesn't seem like a good idea to do it right now. I've found while\nworking on this stuff that it's super-easy to have seemingly innocuous\nchanges disturb regression test results, and I don't really want to\nhave a bunch of extra regression test changes that are due to\nrethinking things other than disable_cost -> disabled_nodes. So for\nnow I'd like to increment disabled_nodes in just the cases where we\ncurrently add disable_cost.\n\n> left over from debugging?\n\nYeah, will fix.\n\n> Is \"unlikely()\" really appropriate here (and elsewhere in the patch)? If\n> you run with enable_seqscan=off but have no indexes, you could take that\n> path pretty often.\n\nThat's true, but I think it's right to assume that's the uncommon\ncase. If we speed up planning for people who disabled sequential scans\nand slow it down for people running with a normal planner\nconfiguration, no one will thank us.\n\n> If this function needs optimizing, I'd suggest splitting it into two\n> functions, one for comparing the startup cost and another for the total\n> cost. Almost all callers pass a constant for that argument, so they\n> might as well call the correct function directly and avoid the branch\n> for that.\n\nThat's not a bad idea but seems like a separate patch.\n\n> The \"Cost comparisons here should match compare_path_costs_fuzzily\"\n> comment also applies to the check on total_cost that you moved up. 
Maybe\n> move up the comment to the beginning of the loop.\n\nWill have a look.\n\n> v3-0004-Show-number-of-disabled-nodes-in-EXPLAIN-ANALYZE-.patch:\n>\n> It's surprising that the \"Disable Nodes\" is printed even with the COSTS\n> OFF option. It's handy for our regression tests, it's good to print them\n> there, but it feels wrong.\n\nI'm open to doing what people think is best here. Although we're\nregarding them as part of the cost for purposes of how to compare\npaths, they're not unpredictable in the way that costs are, so I think\nthe current handling is defensible and, as you say, it's useful for\nthe regression tests. However, I'm not going to fight tooth and nail\nif people really want it the other way.\n\n> Could we cram it into the \"cost=... rows=...\" part? And perhaps a marker\n> that a node was disabled would be more user friendly than showing the\n> cumulative count? Something like:\n\nThe problem is that we'd have to derive that. What we actually know is\nthe disable count; to figure out whether the node itself was disabled,\nwe'd have to subtract the value for the underlying nodes back out.\nThat seems like it might be buggy or confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jul 2024 13:24:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 2, 2024 at 10:57 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> I fear this will break people's applications, if they are currently\n>> forcing a sequential scan with \"set enable_indexscan=off\". Now they will\n>> need to do \"set enable_indexscan=off; set enable_indexonlyscan=off\" for\n>> the same effect. 
Maybe it's acceptable, disabling sequential scans to\n>> force an index scan is much more common than the other way round.\n\n> But I don't think one can have a GUC called enable_indexscan and\n> another GUC called enable_indexonlyscan and argue that it's OK for the\n> former one to affect both kinds of scans. That's extremely confusing\n> and, well, just plain wrong.\n\nFWIW, I disagree completely. I think it's entirely natural to\nconsider bitmap index scans to be a subset of index scans, so that\nenable_indexscan should affect both. I admit that the current set\nof GUCs doesn't let you force a bitmap scan over a plain one, but\nI can't recall many people complaining about that. I don't follow\nthe argument that this definition is somehow unmaintainable, either.\n\n>> Could we cram it into the \"cost=... rows=...\" part? And perhaps a marker\n>> that a node was disabled would be more user friendly than showing the\n>> cumulative count? Something like:\n\n> The problem is that we'd have to derive that.\n\nThe other problem is it'd break an awful lot of client code that knows\nthe format of those lines. (Sure, by now all such code should have\nbeen rewritten to look at JSON or other more machine-friendly output\nformats ... but since we haven't even done that in our own regression\ntests, we should know better than to assume other people have done it.)\n\nI'm not really convinced that we need to show anything about this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 13:40:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Jul 2, 2024 at 1:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I disagree completely. I think it's entirely natural to\n> consider bitmap index scans to be a subset of index scans, so that\n> enable_indexscan should affect both. 
I admit that the current set\n> of GUCs doesn't let you force a bitmap scan over a plain one, but\n> I can't recall many people complaining about that. I don't follow\n> the argument that this definition is somehow unmaintainable, either.\n\nWell... but that's not what the GUC does either. Not now, and not with\nthe patch.\n\nWhat happens right now is:\n\n- If you set enable_indexscan=false, then disable_cost is added to the\ncost of index scan paths and the cost of index-only scan paths.\n\n- If you set enable_indexonlyscan=false, then index-only scan paths\nare not generated at all.\n\nBitmap scans are controlled by enable_bitmapscan.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jul 2024 13:54:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> What happens right now is:\n\n> - If you set enable_indexscan=false, then disable_cost is added to the\n> cost of index scan paths and the cost of index-only scan paths.\n\n> - If you set enable_indexonlyscan=false, then index-only scan paths\n> are not generated at all.\n\nHm. The first part of that seems pretty weird to me --- why don't\nwe simply not generate the paths at all? There is no case AFAIR\nwhere that would prevent us from generating a valid plan.\n\n(I do seem to recall that index-only paths are built on top of regular\nindex paths, so that there might be implementation issues with trying\nto build the former and not the latter. 
But you've probably looked\nat that far more recently than I.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 14:37:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Jul 2, 2024 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > What happens right now is:\n>\n> > - If you set enable_indexscan=false, then disable_cost is added to the\n> > cost of index scan paths and the cost of index-only scan paths.\n>\n> > - If you set enable_indexonlyscan=false, then index-only scan paths\n> > are not generated at all.\n>\n> Hm. The first part of that seems pretty weird to me --- why don't\n> we simply not generate the paths at all? There is no case AFAIR\n> where that would prevent us from generating a valid plan.\n\nWell, yeah.\n\nWhat the patch does is: if you set either enable_indexscan=false or\nenable_indexonlyscan=false, then the corresponding path type is not\ngenerated, and the other is unaffected. To me, that seems like the\nlogical way to clean this up.\n\nOne could argue for other things, of course. And maybe those other\nthings are fine, if they're properly justified and documented.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:28:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> What the patch does is: if you set either enable_indexscan=false or\n> enable_indexonlyscan=false, then the corresponding path type is not\n> generated, and the other is unaffected. To me, that seems like the\n> logical way to clean this up.\n\n> One could argue for other things, of course. And maybe those other\n> things are fine, if they're properly justified and documented.\n\n[ shrug... 
] This isn't a hill that I'm prepared to die on.\nBut I see no good reason to change the very long-standing\nbehaviors of these GUCs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 15:36:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Jul 2, 2024 at 3:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > One could argue for other things, of course. And maybe those other\n> > things are fine, if they're properly justified and documented.\n>\n> [ shrug... ] This isn't a hill that I'm prepared to die on.\n> But I see no good reason to change the very long-standing\n> behaviors of these GUCs.\n\nWell, I don't really know where to go from here. I mean, I think that\nthree committers (David, Heikki, yourself) have expressed some\nconcerns about changing the behavior. So maybe we shouldn't. But I\ndon't understand how it's reasonable to have two very similarly named\nGUCs behave (1) inconsistently with each other and (2) in a way that\ncannot be guessed from the documentation.\n\nI feel like we're just clinging to legacy behavior on the theory that\nsomebody, somewhere might be relying on it in some way, which they\ncertainly might be. But that doesn't seem like a great reason, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:54:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, I don't really know where to go from here. I mean, I think that\n> three committers (David, Heikki, yourself) have expressed some\n> concerns about changing the behavior. So maybe we shouldn't. 
But I\n> don't understand how it's reasonable to have two very similarly named\n> GUCs behave (1) inconsistently with each other and (2) in a way that\n> cannot be guessed from the documentation.\n\nIf the documentation isn't adequate, that's certainly an improvable\nsituation. It doesn't seem hard:\n\n- Enables or disables the query planner's use of index-scan plan\n- types. The default is <literal>on</literal>.\n+ Enables or disables the query planner's use of index-scan plan\n+ types (including index-only scans).\n+ The default is <literal>on</literal>.\n\nMore to the point, if we do change the longstanding meaning of this\nGUC, that will *also* require documentation work IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 16:43:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 02/07/2024 22:54, Robert Haas wrote:\n> On Tue, Jul 2, 2024 at 3:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> One could argue for other things, of course. And maybe those other\n>>> things are fine, if they're properly justified and documented.\n>>\n>> [ shrug... ] This isn't a hill that I'm prepared to die on.\n>> But I see no good reason to change the very long-standing\n>> behaviors of these GUCs.\n> \n> Well, I don't really know where to go from here. I mean, I think that\n> three committers (David, Heikki, yourself) have expressed some\n> concerns about changing the behavior. So maybe we shouldn't. But I\n> don't understand how it's reasonable to have two very similarly named\n> GUCs behave (1) inconsistently with each other and (2) in a way that\n> cannot be guessed from the documentation.\n> \n> I feel like we're just clinging to legacy behavior on the theory that\n> somebody, somewhere might be relying on it in some way, which they\n> certainly might be. But that doesn't seem like a great reason, either.\n\nI agree the status quo is weird too. 
I'd be OK to break \nbackwards-compatibility if we can make it better.\n\nTom mentioned enable_bitmapscan, and it reminded me that the current \nbehavior with that is actually a bit annoying. I go through this pattern \nvery often when I'm investigating query plans:\n\n1. Hmm, let's see what this query plan looks like:\n\npostgres=# explain analyze select * from foo where i=10;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------\n Index Scan using foo_i_idx on foo (cost=0.29..8.31 rows=1 width=36) \n(actual time=0.079..0.090 rows=2 loops=1)\n Index Cond: (i = 10)\n Planning Time: 2.220 ms\n Execution Time: 0.337 ms\n(4 rows)\n\n2. Ok, and how long would it take with a seq scan? Let's see:\n\npostgres=# set enable_indexscan=off;\nSET\npostgres=# explain analyze select * from foo where i=10;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on foo (cost=4.30..8.31 rows=1 width=36) (actual \ntime=0.102..0.113 rows=2 loops=1)\n Recheck Cond: (i = 10)\n Heap Blocks: exact=2\n -> Bitmap Index Scan on foo_i_idx (cost=0.00..4.30 rows=1 width=0) \n(actual time=0.067..0.068 rows=2 loops=1)\n Index Cond: (i = 10)\n Planning Time: 0.211 ms\n Execution Time: 0.215 ms\n(7 rows)\n\n3. Oh right, bitmap scan, I forgot about that one. Let's disable that too:\n\npostgres=# set enable_bitmapscan=off;\nSET\npostgres=# explain analyze select * from foo where i=10;\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..1862.00 rows=1 width=36) (actual \ntime=0.042..39.226 rows=2 loops=1)\n Filter: (i = 10)\n Rows Removed by Filter: 109998\n Planning Time: 0.118 ms\n Execution Time: 39.272 ms\n(5 rows)\n\nI would be somewhat annoyed if we add another step to that, to also \ndisable index-only scans separately. 
It would be nice if \nenable_indexscan=off would also disable bitmap scans, that would \neliminate one step from the above. Almost always when I want to disable \nindex scans, I really want to disable the use of the index altogether. \nThe problem then of course is, how do you force a bitmap scan without \nallowing other index scans, when you want to test them both?\n\nIt almost feels like we should have yet another GUC to disable index \nscans, index-only scans and bitmap index scans. \"enable_indexes=off\" or \nsomething.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 3 Jul 2024 00:39:43 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> 3. Oh right, bitmap scan, I forgot about that one. Let's disable that too:\n\nYeah, I've hit that too, although more often (for me) it's the first\nchoice of plan. In any case, it usually takes more than one change\nto get to a seqscan.\n\n> It almost feels like we should have yet another GUC to disable index \n> scans, index-only scans and bitmap index scans. \"enable_indexes=off\" or \n> something.\n\nThere's something to be said for that idea. Breaking compatibility is\na little easier to stomach if there's a clear convenience win, and\nthis'd offer that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 17:49:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Tue, Jul 2, 2024 at 5:39 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I would be somewhat annoyed if we add another step to that, to also\n> disable index-only scans separately. It would be nice if\n> enable_indexscan=off would also disable bitmap scans, that would\n> eliminate one step from the above. 
Almost always when I want to disable\n> index scans, I really want to disable the use of the index altogether.\n> The problem then of course is, how do you force a bitmap scan without\n> allowing other index scans, when you want to test them both?\n>\n> It almost feels like we should have yet another GUC to disable index\n> scans, index-only scans and bitmap index scans. \"enable_indexes=off\" or\n> something.\n\nThis is an interesting idea, and it seems like it could be convenient.\nHowever, the fact that it's so non-orthogonal is definitely not great.\nOne problem I've had with going through regression tests that rely on\nthe enable_* GUCs is that it's often not quite clear what values all\nof those GUCs have at a certain point in the test file, because the\nstatements that set them may be quite a bit higher up in the file and\nsome changes may also have been rolled back. I've found recently that\nthe addition of EXPLAIN (SETTINGS) helps with this quite a bit,\nbecause you can adjust the .sql file to use that option and then see\nwhat shows up in the output file. Still, it's annoying, and the same\nissue could occur in any other situation where you're using these\nGUCs. It's just more confusing when there are multiple ways of turning\nsomething off.\n\nWould we consider merging enable_indexscan, enable_indexonlyscan, and\nenable_bitmapscan into something like:\n\nenable_indexes = on | off | { plain | indexonly | bitmap } [, ...]\n\nI feel like that would solve the usability concern that you raise here\nwhile also (1) preserving orthogonality and (2) reducing the number of\nGUCs rather than first increasing it. 
When I first joined the project\nthere were a decent number of enable_* GUCs, but there's way more now.\nSome of them are a little random (which is a conversation for another\nday) but just cutting down on the number seems like it might not be\nsuch a bad idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 09:01:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, 3 Jul 2024 at 09:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > 3. Oh right, bitmap scan, I forgot about that one. Let's disable that too:\n>\n> Yeah, I've hit that too, although more often (for me) it's the first\n> choice of plan. In any case, it usually takes more than one change\n> to get to a seqscan.\n\nI commonly hit this too.\n\nI think the current behaviour is born out of the fact that we don't\nproduce both an Index Scan and an Index Only Scan for the same index.\nWe'll just make the IndexPath an index only scan, if possible based\non:\n\nindex_only_scan = (scantype != ST_BITMAPSCAN &&\n check_index_only(rel, index));\n\nThe same isn't true for Bitmap Index Scans. We'll create both\nIndexPaths and BitmapHeapPaths and let them battle it out in\nadd_path().\n\nI suspect this is why it's been coded that enable_indexscan also\ndisables Index Only Scans. Now, of course, it could work another way,\nbut I also still think that doing so is changing well-established\nbehaviour that I don't recall anyone ever complaining about besides\nRobert. Robert's complaint seems to have originated from something he\nnoticed while hacking on code rather than actually using the database\nfor something. I think the argument for changing it should have less\nweight due to that.\n\nI understand that we do have inconsistencies around this stuff. 
For\nexample, enable_sort has no influence on Incremental Sorts like\nenable_indexscan has over Index Only Scan. That might come from the\nfact that we used to, up until a couple of releases ago, produce both\nsort path types and let them compete in add_path(). That's no longer\nthe case, we now just do incremental sort when we can, just like we do\nIndex Only Scans when we can. Despite those inconsistencies, I\nwouldn't vote for changing either of them to align with the other. It\njust feels too long-established behaviour to be messing with.\n\nI feel it might be best to move this patch to the back of the series\nor just drop it for now as it seems to be holding up the other stuff\nfrom moving forward, and that stuff looks useful and worth changing.\n\nDavid\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:29:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "OK, here's a new patch version. I earlier committed the refactoring to\navoid using disable_cost to force WHERE CURRENT OF to be implemented\nby a TID scan. In this version, I've dropped everything related to\nreworking enable_indexscan or any other enable_* GUC. Hence, this\nversion of the patch set just focuses on adding the count of disabled\nnodes and removing the use of disable_cost. In addition to dropping\nthe controversial patches, I've also found and squashed a few bugs in\nthis version.\n\nBehavior: With the patch, whenever an enable_* GUC would cause\ndisable_cost to be added, disabled_nodes is incremented instead. There\nis one remaining use of disable_cost which is not triggered by an\nenable_* GUC but by the desire to avoid plans that we think will\noverflow work_mem. I welcome thoughts on what to do about that case;\nfor now, I do nothing. As before, 0001 adds the disabled_nodes field\nto paths and 0002 adds it to plans. 
I think we could plausibly commit\nonly 0001, both patches separately, or both patches squashed.\n\nNotes:\n\n- I favor committing both patches. Tom stated that he didn't think\nthat we needed to show anything related to disabled nodes, and that\ncould be true. However, today, you can tell which nodes are disabled\nas long as you print out the costs; if we don't propagate disabled\nnodes into the plan and print them out, that will no longer be\npossible. I found working on the patches that it was really hard to\ndebug the patch set without this, so my guess is that we'll find not\nhaving it pretty annoying, but we can also just commit 0001 for\nstarters and see how long it takes for the lack of 0002 to become\nannoying. If the answer is \"infinite time,\" that's cool; if it isn't,\nwe can reconsider committing 0002.\n\n- If we do commit 0002, I think it's a good idea to have the number of\ndisabled nodes displayed even with COSTS OFF, because it's stable, and\nit's pretty useful to be able to see this in the regression output. I\nhave found while working on this that I often need to adjust the .sql\nfiles to say EXPLAIN (COSTS ON) instead of EXPLAIN (COSTS OFF) in\norder to understand what's happening. Right now, there's no real\nalternative because costs aren't stable, but disabled-node counts\nshould be stable, so I feel this would be a step forward. Apart from\nthat, I also think it's good for features to have regression test\ncoverage, and since we use COSTS OFF everywhere or at least nearly\neverywhere in the regression test, if we don't print out the disabled\nnode counts when COSTS OFF is used, then we don't cover that case in\nour tests. Bummer.\n\nRegression test changes in 0001:\n\n- btree_index.sql executes a query \"select proname from pg_proc where\nproname ilike 'ri%foo' order by 1\" with everything but bitmap scans\ndisabled. Currently, that produces an index-only scan; with the patch,\nit produces a sort over a sequential scan. 
That's a little odd,\nbecause the test seems to be aimed at demonstrating that we can use a\nbitmap scan, and it doesn't, because we apparently can't. But, why\ndoes the patch change the plan?\nAt least on my machine, the index-only scan is significantly more\ncostly than the sequential scan. I think what's happening here is that\nwhen you add disable_cost to the cost of both paths, they compare\nfuzzily the same; without that, the cheaper one wins.\n\n- select_parallel.out executes a query with sequential scans disabled\nbut tenk2 must nevertheless be sequential-scanned. With the patch,\nthat changes to a parallel sequential scan. I think the explanation\nhere is the same as in the preceding case.\n\n- horizons.spec currently sets enable_seqscan=false,\nenable_indexscan=false, and enable_bitmapscan=false. I suspect that\nAndres thought that this would force the use of an index-only scan,\nsince nothing sets enable_indexonlyscan=false. But as discussed\nupthread, that is not true. Instead everything is disabled. For the\nsame reasons as in the previous two examples, this caused an\nassortment of plan changes which in turn caused the test to fail to\ntest what it was intended to test. So I removed enable_indexscan=false\nfrom the spec file, and now it gets index-only scans everywhere again,\nas desired.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 31 Jul 2024 12:23:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Thu, 1 Aug 2024 at 04:23, Robert Haas <robertmhaas@gmail.com> wrote:\n> OK, here's a new patch version.\n\nI think we're going down the right path here.\n\nI've reviewed both patches, here's what I noted down during my review:\n\n0. I've not seen any mention so far about postgres_fdw's\nuse_remote_estimate. Maybe changing the costs is fixing an issue that\nexisted before. 
I'm just not 100% sure on that.\n\nConsider:\n\nCREATE EXTENSION postgres_fdw;\n\nDO $d$\n BEGIN\n EXECUTE $$CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw\n OPTIONS (use_remote_estimate 'true',\n dbname '$$||current_database()||$$',\n port '$$||current_setting('port')||$$'\n )$$;\n END;\n$d$;\nCREATE USER MAPPING FOR CURRENT_USER SERVER loopback;\n\ncreate table t (a int);\ncreate foreign table ft (a int) server loopback OPTIONS (table_name 't');\n\nalter system set enable_seqscan=0;\nselect pg_Reload_conf();\nset enable_seqscan=1;\nexplain select * from ft;\n\npatched:\n Foreign Scan on ft (cost=100.00..671.00 rows=2550 width=4)\n\nmaster:\n Foreign Scan on ft (cost=10000000100.00..10000000671.00 rows=2550 width=4)\n\nI kinda think that might be fixing an issue that I don't recall being\nreported before. I think we shouldn't really care that much about what\nnodes are disabled on the remote server and not having disabled_cost\napplied to that gives us that.\n\n1. The final sentence of the function header comment needs to be\nupdated in estimate_path_cost_size().\n\n2. Does cost_tidscan() need to update the header comment to say\ntidquals must not be empty?\n\n3. final_cost_nestloop() seems to initially use the disabled_nodes\nfrom initial_cost_nestloop() but then it goes off and calculates it\nagain itself. One of these seems redundant. The \"We could include\ndisable_cost in the preliminary estimate\" comment explains why it was\noriginally left to final_cost_nestloop(), so maybe worth sticking to\nthat? I don't quite know the full implications, but it does not seem\nworth risking a behaviour change here.\n\n4. I wonder if it's worth doing a quick refactor of the code in\ninitial_cost_mergejoin() to get rid of the duplicate code in the \"if\n(outersortkeys)\" and \"if (innersortkeys)\" branches. It seems ok to do\nouter_path = &sort_path. Likewise for inner_path.\n\n5. final_cost_hashjoin() does the same thing as #3\n\n6. 
createplan.c adds #include \"nodes/print.h\" but doesn't seem to add\nany code that might use anything in there.\n\n7. create_lockrows_path() needs to propagate disabled_nodes.\n\ncreate table a (a int);\nset enable_seqscan=0;\n\nexplain select * from a for update limit 1;\n\n Limit (cost=0.00..0.02 rows=1 width=10)\n -> LockRows (cost=0.00..61.00 rows=2550 width=10)\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=10)\n Disabled Nodes: 1\n(4 rows)\n\n\nexplain select * from a limit 1;\n\n Limit (cost=0.00..0.01 rows=1 width=4)\n Disabled Nodes: 1\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=4)\n Disabled Nodes: 1\n(4 rows)\n\n8. There's something weird with CTEs too.\n\ncreate table b(a int);\nset enable_sort=0;\n\nPatched:\n\nexplain with cte as materialized (select * from b order by a) select *\nfrom cte order by a desc;\n\n Sort (cost=381.44..387.82 rows=2550 width=4)\n Disabled Nodes: 1\n Sort Key: cte.a DESC\n CTE cte\n -> Sort (cost=179.78..186.16 rows=2550 width=4)\n Disabled Nodes: 1\n Sort Key: b.a\n -> Seq Scan on b (cost=0.00..35.50 rows=2550 width=4)\n -> CTE Scan on cte (cost=0.00..51.00 rows=2550 width=4)\n(9 rows)\n\nmaster:\n\nexplain with cte as materialized (select * from a order by a) select *\nfrom cte order by a desc;\n\n Sort (cost=20000000381.44..20000000387.82 rows=2550 width=4)\n Sort Key: cte.a DESC\n CTE cte\n -> Sort (cost=10000000179.78..10000000186.16 rows=2550 width=4)\n Sort Key: a.a\n -> Seq Scan on a (cost=0.00..35.50 rows=2550 width=4)\n -> CTE Scan on cte (cost=0.00..51.00 rows=2550 width=4)\n(7 rows)\n\nI'd expect the final sort to have disabled_nodes == 2 since\ndisabled_cost has been added twice in master.\n\n9. 
create_set_projection_path() needs to propagate disabled_nodes too:\n\nexplain select b from (select a,generate_series(1,2) as b from b) a limit 1;\n\n Limit (cost=0.00..0.03 rows=1 width=4)\n -> Subquery Scan on a (cost=0.00..131.12 rows=5100 width=4)\n -> ProjectSet (cost=0.00..80.12 rows=5100 width=8)\n -> Seq Scan on b (cost=0.00..35.50 rows=2550 width=0)\n Disabled Nodes: 1\n\n10. create_setop_path() needs to propagate disabled_nodes.\n\nexplain select * from b except select * from b limit 1;\n\n Limit (cost=0.00..0.80 rows=1 width=8)\n -> HashSetOp Except (cost=0.00..160.25 rows=200 width=8)\n -> Append (cost=0.00..147.50 rows=5100 width=8)\n Disabled Nodes: 2\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..61.00\nrows=2550 width=8)\n Disabled Nodes: 1\n -> Seq Scan on b (cost=0.00..35.50 rows=2550 width=4)\n Disabled Nodes: 1\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..61.00\nrows=2550 width=8)\n Disabled Nodes: 1\n -> Seq Scan on b b_1 (cost=0.00..35.50 rows=2550 width=4)\n Disabled Nodes: 1\n(12 rows)\n\n11. create_modifytable_path() needs to propagate disabled_nodes.\n\nexplain with cte as (update b set a = a+1 returning *) select * from\ncte limit 1;\n\n Limit (cost=41.88..41.90 rows=1 width=4)\n CTE cte\n -> Update on b (cost=0.00..41.88 rows=2550 width=10)\n -> Seq Scan on b (cost=0.00..41.88 rows=2550 width=10)\n Disabled Nodes: 1\n -> CTE Scan on cte (cost=0.00..51.00 rows=2550 width=4)\n(6 rows)\n\n12. For the 0002 patch, I do agree that having this visible in EXPLAIN\nis a must. I'd much rather see: Disabled: true/false. And just\ndisplay this when the disabled_nodes is greater than the sum of the\nsubpaths. 
That might be much more complex to implement, but it's\ngoing to make it much easier to track down the disabled nodes in very\nlarge plans.\n\nDavid\n\n\n", "msg_date": "Thu, 1 Aug 2024 14:01:07 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, Jul 31, 2024 at 10:01 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've reviewed both patches, here's what I noted down during my review:\n\nThanks.\n\n> 0. I've not seen any mention so far about postgres_fdw's\n> use_remote_estimate. Maybe changing the costs is fixing an issue that\n> existed before. I'm just not 100% sure on that.\n>\n> patched:\n> Foreign Scan on ft (cost=100.00..671.00 rows=2550 width=4)\n>\n> master:\n> Foreign Scan on ft (cost=10000000100.00..10000000671.00 rows=2550 width=4)\n>\n> I kinda think that might be fixing an issue that I don't recall being\n> reported before. I think we shouldn't really care that much about what\n> nodes are disabled on the remote server and not having disabled_cost\n> applied to that gives us that.\n\nHmm, I think it's subjective which behavior is better. If somebody\nthought the new behavior was worse, they might want the remote side's\ncount of disabled nodes to be propagated to the local side, but I'm\ndisinclined to go there. My guess is that it doesn't matter much\neither way what we do here, so I'd rather not add more code.\n\n> 1. The final sentence of the function header comment needs to be\n> updated in estimate_path_cost_size().\n\nFixed.\n\n> 2. Does cost_tidscan() need to update the header comment to say\n> tidquals must not be empty?\n\nIMHO, no. The assertions I added to that function were intended as\ndocumentation of what that function was already assuming about the\nbehavior of its caller. I had to trace through the logic in tidpath.c\nfor quite a while to understand why cost_tidscan() was not completely\nbroken. 
To spare the next person the trouble of working that out, I\nadded assertions. Now we could additionally add commentary in English\nthat restates what the assertions already say, but I feel like having\nthe assertions is good enough. If somebody ever whacks around\ntidpath.c such that these assertions start failing, I think it will be\nfairly clear to them that they either need to revert their changes in\ntidpath.c or upgrade the logic in this function to cope.\n\n> 3. final_cost_nestloop() seems to initially use the disabled_nodes\n> from initial_cost_nestloop() but then it goes off and calculates it\n> again itself. One of these seems redundant.\n\nOops. Fixed.\n\n> The \"We could include\n> disable_cost in the preliminary estimate\" comment explains why it was\n> originally left to final_cost_nestloop(), so maybe worth sticking to\n> that? I don't quite know the full implications, but it does not seem\n> worth risking a behaviour change here.\n\nI don't really see how there could be a behavior change here, unless\nthere's a bug. Dealing with the enable_* flags in initial_cost_XXX\nrather than final_cost_XXX could be better or worse from a performance\nstandpoint and it could make for cleaner or less clean code, but the\nuser-facing behavior should be identical unless there are bugs.\n\nThe reason why I changed this is because of the logic in\nadd_path_precheck(): it exits early as soon as it sees a path whose\ntotal cost is greater than the cost of the proposed new path. Since\nthe patch's aim is to treat disabled_node as a high-order component of\nthe cost, we need to make the same decision by comparing the count of\ndisabled_nodes first and then if that is equal, we need to compare the\ntotal_cost. We can't do that if we don't have the count of\ndisabled_nodes for the proposed new path.\n\nI think this may be a bit hard to understand, so let me give a\nconcrete example. 
Suppose we're planning some join where one side can\nonly be planned with a sequential scan and sequential scans are\ndisabled. We have ten paths in the path list and they have costs of\n1e10+100, 1e10+200, ..., 1e10+1000. Now add_path_precheck() is asked\nto consider a new path where there is a disabled node on BOTH sides of\nthe join -- the one side has the disabled sequential scan, but now the\nother side also has something disabled, so the cost is let's say\n2e10+79. add_path_precheck() can see at once that this path is a\nloser: it can't possibly dominate any path that already exists,\nbecause it costs more than any of them. But when you take disable_cost\nout, things look quite different. Now you have a proposed path with a\ntotal_cost of 79 and a path list with costs of 100, ..., 1000. If\nyou're not allowed to know anything about disabled_nodes, the new path\nlooks like it might be valuable. You might decide to construct it and\ntry inserting into the pathlist, which will end up being useless, and\neven if you don't, you're going to compare its pathkeys and\nparameterization to each of the 10 existing paths before giving up.\nBummer.\n\nSo, to avoid getting much stupider than it is currently,\nadd_path_precheck() needs a preliminary estimate of the number of\ndisabled nodes just like it needs a preliminary estimate of the total\ncost. And to avoid regressions, that estimate needs to be pretty good.\nA naive estimate would be to just add up the number of disabled_nodes\non the inner and outer paths, but that would be a regression in the\nmerge-join case, because initial_cost_mergejoin() calls cost_sort()\nfor the inner and outer sides and that will add disable_cost if sorts\nare disabled. If you didn't take the effect of cost_sort() into\naccount, you might think that your number of disabled_nodes was going\nto be substantially lower than it really would be, leading to wasted\nwork as described in the last paragraph. 
Plus, since\ninitial_cost_mergejoin() is incurring the overhead of calling\ncost_sort() anyway to get the total cost numbers, it would be\nsilly not to save the count of disabled nodes: if we didn't, we'd have to\nredo the cost_sort() call in final_cost_mergejoin(), which would be\nexpensive.\n\nIf we wanted to make our estimate of the # of disabled nodes exactly\ncomparable to what we now do with disable_cost, we would postpone if\n(!enable_WHATEVERjoin) ++disabled_nodes to the final_cost_XXX\nfunctions and do all of the other accounting related to disabled nodes\nat the initial_cost_XXX phase. But I do not like that approach.\nPostponing one trivial portion of the disabled_nodes calculation to a\nlater time won't save any significant number of CPU cycles, but it\nmight confuse people reading the code. You then have to know that the\ndisabled_nodes count that gets passed to final_cost_XXX is not yet the\nfinal count, but that you may still need to add 1 for the join itself\n(but not for the implicit sorts that the join requires, which have\nalready been accounted for). That's the kind of odd definition that\nbreeds bugs. Besides, it's not as if moving that tiny bit of logic to\nthe initial_cost_XXX functions has no upside: it could allow\nadd_path_precheck() to exit earlier, thus saving cycles.\n\n(For the record, the explanation above took about 3 hours to write, so\nI hope it's managed to be both correct and convincing. This stuff is\nreally complicated.)\n\n> 4. I wonder if it's worth doing a quick refactor of the code in\n> initial_cost_mergejoin() to get rid of the duplicate code in the \"if\n> (outersortkeys)\" and \"if (innersortkeys)\" branches. It seems ok to do\n> outer_path = &sort_path. Likewise for inner_path.\n\nI don't think that's better.\n\n> 5. final_cost_hashjoin() does the same thing as #3\n\nArgh. Fixed.\n\n> 6. 
createplan.c adds #include \"nodes/print.h\" but doesn't seem to add\n> any code that might use anything in there.\n\nFixed.\n\n> 8. There's something weird with CTEs too.\n>\n> I'd expect the final sort to have disabled_nodes == 2 since\n> disabled_cost has been added twice in master.\n\nRight now, disabled node counts don't propagate through SubPlans (see\nSS_process_ctes). Maybe that needs to be changed, but aside from\nlooking weird, does it do any harm?\n\n> 7. create_lockrows_path() needs to propagate disabled_nodes.\n> 9. create_set_projection_path() needs to propagate disabled_nodes too:\n> 10. create_setop_path() needs to propagate disabled_nodes.\n> 11. create_modifytable_path() needs to propagate disabled_nodes.\n\nI changed all of these, but I think these examples only establish that\nthose nodes DO NOT propagate disabled_nodes, not that they need to. If\nwe're past the point of making any choices based on costs, then\nmaintaining disabled_nodes or not doing so won't affect correctness.\nThat's not to say these aren't good to tidy up, and some of them may\nwell be bugs, but I don't think your test cases prove that. What\nprimarily matters is whether the enable_BLAH GUCs get respected; the\nexact contents of the EXPLAIN output are somewhat more arguable.\n\n> 12. For the 0002 patch, I do agree that having this visible in EXPLAIN\n> is a must. I'd much rather see: Disabled: true/false. And just\n> display this when the disabled_nodes is greater than the sum of the\n> subpaths. 
That might be much more complex to implement, but it's\n> going to make it much easier to track down the disabled nodes in very\n> large plans.\n\nI think it's going to be very unpleasant if we have the planner add\nthings up and then try to have EXPLAIN subtract them back out again.\nOne problem with that is that all of the test cases where you just\nshowed disabled_nodes not propagating upward wouldn't actually show\nanything any more, because disabled_nodes would not have been greater\nin the parent than in the child. So those are oversights in the code\nthat are easy to spot now but would become hard to spot with this\nimplementation. Another problem is that the EXPLAIN code itself could\ncontain bugs, or slightly more broadly, get out of sync with the logic\nthat decides what to add up. It won't be obvious what's happening:\nsome node that is actually disabled just won't appear to be, or the\nother way around, and it will be hard to understand what happened,\nbecause you won't be able to see the raw counts of disabled nodes that\nwould allow you to deduce where the error actually is.\n\nOne idea that occurs to me is to store TWO counts in each path node\nand each plan node: the count of self-exclusive disabled nodes, and\nthe count of self-include disabled nodes. Then explain can just test\nif they are different. If the answer is 1, the node is disabled; if 0,\nit's enabled; if anything else, there's a bug (and it could print the\ndelta, or each value separately, to help localize such bugs). The\nproblem with that is that it eats up more space in\nperformance-critical data structures, but perhaps that's OK: I don't\nknow.\n\nAnother thought is that right now you just see the disable_cost values\nadded up with the rest of the cost. So maybe propagating upward is not\nreally such a bad behavior; it's what we have now.\n\nThis point probably needs more thought and discussion, but I'm out of\ntime to work on this for today, and out of mental energy too. 
So for\nnow here's v5 as I have it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Aug 2024 14:03:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, 2 Aug 2024 at 06:03, Robert Haas <robertmhaas@gmail.com> wrote:\n> I think this may be a bit hard to understand, so let me give a\n> concrete example. Suppose we're planning some join where one side can\n> only be planned with a sequential scan and sequential scans are\n> disabled. We have ten paths in the path list and they have costs of\n> 1e10+100, 1e10+200, ..., 1e10+1000. Now add_path_precheck() is asked\n> to consider a new path where there is a disabled node on BOTH sides of\n> the join -- the one side has the disabled sequential scan, but now the\n> other side also has something disabled, so the cost is let's say\n> 2e10+79. add_path_precheck() can see at once that this path is a\n> loser: it can't possibly dominate any path that already exists,\n> because it costs more than any of them. But when you take disable_cost\n> out, things look quite different. Now you have a proposed path with a\n> total_cost of 79 and a path list with costs of 100, ..., 1000. If\n> you're not allowed to know anything about disabled_nodes, the new path\n> looks like it might be valuable. You might decide to construct it and\n> try inserting into the pathlist, which will end up being useless, and\n> even if you don't, you're going to compare its pathkeys and\n> parameterization to each of the 10 existing paths before giving up.\n> Bummer.\n\nOK, so it sounds like you'd like to optimise this code so that the\nplanner does a little less work when node types are disabled. 
The\nexisting comment does mention explicitly that we don't want to do\nthat:\n\n/*\n* We could include disable_cost in the preliminary estimate, but that\n* would amount to optimizing for the case where the join method is\n* disabled, which doesn't seem like the way to bet.\n*/\n\nAs far as I understand it from reading the comments in that file, I\nsee no offer of guarantees that the initial cost will be cheaper than\nthe final cost. So what you're proposing could end up rejecting paths\nbased on initial cost where the final cost might end up being the\ncheapest path. Imagine you're considering a Nested Loop and a Hash\nJoin, both of which are disabled. Merge Join is unavailable as the\njoin column types are not sortable. If the hash join costs 99 and the\ninitial nested loop costs 110, but the final nested loop ends up\ncosting 90, then the nested loop could be rejected before we even get\nto perform the final cost for it. The current code will run\nfinal_cost_nestloop() and find that 90 is cheaper than 99, whereas\nwhat you want to do is stop bothering with nested loop when we see the\ninitial cost come out at 110.\n\nPerhaps it's actually fine if the initial costs are always less than\nthe final costs as, if that's the case, we won't ever reject any paths\nbased on the initial cost that we wouldn't anyway based on the final\ncost. However, since there do not seem to be any comments mentioning\nthis guarantee and if you're just doing this to squeeze more\nperformance out of the planner, it seems risky to do for that reason\nalone.\n\nI'd say if you want to do this, you should be justifying it on its own\nmerit with some performance numbers and some evidence that we don't\nproduce inferior plans as a result. But per what I quoted above,\nyou're not doing that, you're doing this as a performance\noptimisation.\n\nI'm not planning on pushing this any further. I've just tried to\nhighlight that there's the possibility of a behavioural change. 
You're\nclaiming there isn't one. I claim there is.\n\nDavid\n\n\n", "msg_date": "Fri, 2 Aug 2024 15:34:16 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Thu, Aug 1, 2024 at 11:34 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm not planning on pushing this any further. I've just tried to\n> highlight that there's the possibility of a behavioural change. You're\n> claiming there isn't one. I claim there is.\n\nI don't know what to tell you. The original version of the patch\ndidn't change this stuff, and the result did not work. So I looked\ninto the problem and fixed it. I may have done that wrongly, or there\nmay be debatable points, but it seems like your argument is\nessentially that I shouldn't have done any of this and I should just\ntake it all back out, and I know that doesn't work because it's the\nfirst thing I tried.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Aug 2024 08:17:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Sat, 3 Aug 2024 at 00:17, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 1, 2024 at 11:34 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'm not planning on pushing this any further. I've just tried to\n> > highlight that there's the possibility of a behavioural change. You're\n> > claiming there isn't one. I claim there is.\n>\n> I don't know what to tell you. The original version of the patch\n> didn't change this stuff, and the result did not work. So I looked\n> into the problem and fixed it. 
I may have done that wrongly, or there\n> may be debatable points, but it seems like your argument is\n> essentially that I shouldn't have done any of this and I should just\n> take it all back out, and I know that doesn't work because it's the\n> first thing I tried.\n\nI've just read what you wrote again and I now realise something I didn't before.\n\nI now think neither of us got it right. I now think what you'd need to\ndo to be aligned to the current behaviour is have\ninitial_cost_nestloop() add the disabled_nodes for the join's subnodes\n*only* and have final_cost_nestloop() add the additional\ndisabled_nodes if enable_nestloop = off. That way you maintain the\nexisting behaviour of not optimising for disabled node types and don't\nrisk plan changes if the final cost comes out cheaper than the initial\ncost.\n\nDavid\n\n\n", "msg_date": "Sat, 3 Aug 2024 01:13:36 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 2, 2024 at 9:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I now think neither of us got it right. I now think what you'd need to\n> do to be aligned to the current behaviour is have\n> initial_cost_nestloop() add the disabled_nodes for the join's subnodes\n> *only* and have final_cost_nestloop() add the additional\n> disabled_nodes if enable_nestloop = off. That way you maintain the\n> existing behaviour of not optimising for disabled node types and don't\n> risk plan changes if the final cost comes out cheaper than the initial\n> cost.\n\nAll three initial_cost_XXX functions have a comment that says \"This\nmust quickly produce lower-bound estimates of the path's startup and\ntotal costs,\" i.e. the final cost should never be cheaper. 
I'm pretty\nsure that it was the design intention here that no path ever gets\nrejected at the initial cost stage that would have been accepted at\nthe final cost stage.\n\n(You can also see, as a matter of implementation, that they extract\nthe startup_cost and run_cost from the workspace and then add to those\nvalues.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Aug 2024 12:04:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 2, 2024 at 9:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> ... That way you maintain the\n>> existing behaviour of not optimising for disabled node types and don't\n>> risk plan changes if the final cost comes out cheaper than the initial\n>> cost.\n\n> All three initial_cost_XXX functions have a comment that says \"This\n> must quickly produce lower-bound estimates of the path's startup and\n> total costs,\" i.e. the final cost should never be cheaper. I'm pretty\n> sure that it was the design intention here that no path ever gets\n> rejected at the initial cost stage that would have been accepted at\n> the final cost stage.\n\nThat absolutely is the expectation, and we'd better be careful not\nto break it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Aug 2024 12:51:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 2, 2024 at 12:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That absolutely is the expectation, and we'd better be careful not\n> to break it.\n\nI have every intention of not breaking it. 
:-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Aug 2024 12:53:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 2, 2024 at 12:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Aug 2, 2024 at 12:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > That absolutely is the expectation, and we'd better be careful not\n> > to break it.\n>\n> I have every intention of not breaking it. :-)\n\nI went ahead and committed these patches. I know there's some debate\nover whether we want to show the # of disabled nodes and if so whether\nit should be controlled by COSTS, and I suspect I haven't completely\nallayed David's concerns about the initial_cost_XXX functions although\nI think that I did the right thing. But, I don't have the impression\nthat anyone is desperately opposed to the basic concept, so I think it\nmakes sense to put these into the tree and see what happens. We have\nquite a bit of time left in this release cycle to uncover bugs, hear\nfrom users or other developers, etc. about what problems there may be\nwith this. If we end up deciding to reverse course or need to fix a\nbunch of stuff, so be it, but let's see what the feedback is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Aug 2024 10:29:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, 31 Jul 2024 at 18:23, Robert Haas <robertmhaas@gmail.com> wrote:\n> - If we do commit 0002, I think it's a good idea to have the number of\n> disabled nodes displayed even with COSTS OFF, because it's stable, and\n> it's pretty useful to be able to see this in the regression output. 
I\n> have found while working on this that I often need to adjust the .sql\n> files to say EXPLAIN (COSTS ON) instead of EXPLAIN (COSTS OFF) in\n> order to understand what's happening. Right now, there's no real\n> alternative because costs aren't stable, but disabled-node counts\n> should be stable, so I feel this would be a step forward. Apart from\n> that, I also think it's good for features to have regression test\n> coverage, and since we use COSTS OFF everywhere or at least nearly\n> everywhere in the regression test, if we don't print out the disabled\n> node counts when COSTS OFF is used, then we don't cover that case in\n> our tests. Bummer.\n\nAre the disabled node counts still expected to be stable even with\nGEQO? If not, maybe we should have a way to turn them off after all.\nAlthough I agree that always disabling them when COSTS OFF is set is\nprobably also undesirable. How about a new option, e.g. EXPLAIN\n(DISABLED OFF)\n\n\n", "msg_date": "Thu, 22 Aug 2024 14:07:32 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Thu, Aug 22, 2024 at 8:07 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> Are the disabled node counts still expected to be stable even with\n> GEQO? If not, maybe we should have a way to turn them off after all.\n> Although I agree that always disabling them when COSTS OFF is set is\n> probably also undesirable. How about a new option, e.g. EXPLAIN\n> (DISABLED OFF)\n\nHmm, I hadn't thought about that. There are no GEQO-specific changes\nin this patch, which AFAIK is OK, because I think GEQO just relies on\nthe core planning machinery to decide everything about the cost of\npaths, and is really only experimenting with different join orders. So\nI think if it picks the same join order, it should get the same count\nof disabled nodes everywhere. 
If it doesn't pick the same order,\nyou'll get a different plan entirely.\n\nI don't think I quite want to jump into inventing a new EXPLAIN option\nright this minute. I'm not against the idea, but I don't want to jump\ninto engineering solutions before I understand what the problems are,\nso I think we should give this a little time. I'll be a bit surprised\nif this doesn't elicit a few strong reactions, but I want to see what\npeople are actually sad (or, potentially, happy) about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Aug 2024 08:43:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 8/21/24 10:29 AM, Robert Haas wrote:\r\n\r\n> I went ahead and committed these patches. I know there's some debate\r\n> over whether we want to show the # of disabled nodes and if so whether\r\n> it should be controlled by COSTS, and I suspect I haven't completely\r\n> allayed David's concerns about the initial_cost_XXX functions although\r\n> I think that I did the right thing. But, I don't have the impression\r\n> that anyone is desperately opposed to the basic concept, so I think it\r\n> makes sense to put these into the tree and see what happens. We have\r\n> quite a bit of time left in this release cycle to uncover bugs, hear\r\n> from users or other developers, etc. about what problems there may be\r\n> with this. If we end up deciding to reverse course or need to fix a\r\n> bunch of stuff, so be it, but let's see what the feedback is.\r\n\r\nWe hit an issue with pgvector[0] where a regular `SELECT count(*) FROM \r\ntable`[1] is attempting to scan the index on the vector column when \r\n`enable_seqscan` is disabled. Credit to Andrew Kane (CC'd) for flagging it.\r\n\r\nI was able to trace this back to e2225346. 
Here is a reproducer:\r\n\r\nSetup\r\n=====\r\n\r\nCREATE EXTENSION vector;\r\n\r\nCREATE OR REPLACE FUNCTION public.generate_random_normalized_vector(dim \r\ninteger)\r\nRETURNS vector\r\nLANGUAGE SQL\r\nAS $$\r\n SELECT public.l2_normalize(array_agg(random()::real)::vector)\r\n FROM generate_series(1, $1);\r\n$$;\r\n\r\nCREATE TABLE test (id int, embedding vector(128));\r\nINSERT INTO test\r\n SELECT n, public.generate_random_normalized_vector(128)\r\n FROM generate_series(1,5) n;\r\n\r\nCREATE INDEX ON test USING hnsw (embedding vector_cosine_ops);\r\n\r\nTest\r\n====\r\n\r\nSET enable_seqscan TO off;\r\nEXPLAIN ANALYZE\r\nSELECT count(*) FROM test;\r\n\r\nBefore e2225346:\r\n----------------\r\n\r\nAggregate (cost=10000041965.00..10000041965.01 rows=1 width=8) (actual \r\ntime=189.864..189.864 rows\r\n=1 loops=1)\r\n -> Seq Scan on test (cost=10000000000.00..10000040715.00 rows=5 \r\nwidth=0) (actual time=0.01\r\n8..168.294 rows=5 loops=1)\r\n(4 rows)\r\n\r\nWith e2225346:\r\n-------------\r\nERROR: cannot scan hnsw index without order\r\n\r\nSome things to note with the ivfflat/hnsw index AMs[3] in pgvector are \r\nthat they're used for \"ORDER BY\" scans exclusively. They currently don't \r\nsupport index only scans (noting as I tried reproducing the issue with \r\nGIST and couldn't do so because of that), but we wouldn't want to do a \r\nfull table \"count(*)\" on a IVFFlat/HNSW index anyway as it'd be more \r\nexpensive than just a full table scan.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[0] https://github.com/pgvector/pgvector\r\n[1] https://github.com/pgvector/pgvector/actions/runs/10519052945\r\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e2225346\r\n[3] https://github.com/pgvector/pgvector/blob/master/src/hnsw.c#L192", "msg_date": "Fri, 23 Aug 2024 11:16:57 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 23, 2024 at 11:17 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> We hit an issue with pgvector[0] where a regular `SELECT count(*) FROM\n> table`[1] is attempting to scan the index on the vector column when\n> `enable_seqscan` is disabled. Credit to Andrew Kane (CC'd) for flagging it.\n>\n> I was able to trace this back to e2225346. Here is a reproducer:\n\nIf I change EXPLAIN ANALYZE in this test to just EXPLAIN, I get this:\n\n Aggregate (cost=179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00..179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00\nrows=1 width=8)\n -> Index Only Scan using test_embedding_idx on test\n(cost=179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00..179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00\nrows=5 width=0)\n\nIt took me a moment to wrap my head around this: the cost estimate 
is\n312 decimal digits long. Apparently hnswcostestimate() just returns\nDBL_MAX when there are no scan keys because it really, really doesn't\nwant to do that. Before e2225346, that kept this plan from being\ngenerated because it was (much) larger than disable_cost. But now it\ndoesn't, because 1 disabled node makes a path more expensive than any\npossible non-disabled path. Since that was the whole point of the\npatch, I don't feel too bad about it.\n\nI find it a little weird that hnsw thinks itself able to return all\nthe tuples in an order the user chooses, but unable to return all of\nthe tuples in an arbitrary order. In core, we have precedent for index\ntypes that can't return individual tuples at all (gin, brin) but not\none that is able to return tuples in concept but has a panic attack if\nyou don't know how you want them sorted. I don't quite see why you\ncouldn't just treat that case the same as ORDER BY\nthe_first_column_of_the_index, or any other arbitrary rule that you\nwant to make up. Sure, it might be more expensive than a sequential\nscan, but the user said they didn't want a sequential scan. I'm not\nquite sure why pgvector thinks it gets to decide that it knows better\nthan the user, or the rest of the optimizer. I don't even think I\nreally believe it would always be worse: I've seen cases where a table\nwas badly bloated and mostly empty but its indexes were not bloated,\nand in that case an index scan can be a HUGE winner even though it\nwould normally be a lot worse than a sequential scan.\n\nIf you don't want to fix hnsw to work the way the core optimizer\nthinks it should, or if there's some reason it can't be done,\nalternatives might include (1) having the cost estimate function hack\nthe count of disabled nodes and (2) adding some kind of core support\nfor an index cost estimator refusing a path entirely. 
I haven't tested\n(1) so I don't know for sure that there are no issues, but I think we\nhave to do all of our cost estimating before we can think about adding\nthe path so I feel like there's a decent chance it would do what you\nwant.\n\nAlso, while I did take the initiative to download pgvector and compile\nit and hook up a debugger and figure out what was going on here, I'm\nnot really too sure that's my job. I do think I have a responsibility\nto help maintainers of out-of-core extensions who have problems as a\nresult of my commits, but I also think it's fair to hope that those\nmaintainers will try to minimize the amount of time that I need to\nspend trying to read code that I did not write and do not maintain.\nFortunately, this wasn't hard to figure out, but in a way that's kind\nof the point. That DBL_MAX hack was put there by somebody who must've\nunderstood that they were trying to use a very large cost to disable a\ncertain path shape completely, and it seems to me that if that person\nhad studied this case and the commit message for e2225346, they would\nhave likely understood what had happened pretty quickly. Do you think\nthat's an unfair feeling on my part?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 13:11:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 23, 2024 at 11:17 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> We hit an issue with pgvector[0] where a regular `SELECT count(*) FROM\n>> table`[1] is attempting to scan the index on the vector column when\n>> `enable_seqscan` is disabled. Credit to Andrew Kane (CC'd) for flagging it.\n\n> It took me a moment to wrap my head around this: the cost estimate is\n> 312 decimal digits long. 
Apparently hnswcostestimate() just returns\n> DBL_MAX when there are no scan keys because it really, really doesn't\n> want to do that. Before e2225346, that kept this plan from being\n> generated because it was (much) larger than disable_cost. But now it\n> doesn't, because 1 disabled node makes a path more expensive than any\n> possible non-disabled path. Since that was the whole point of the\n> patch, I don't feel too bad about it.\n\nYeah, I don't think it's necessary for v18 to be bug-compatible with\nthis hack.\n\n> If you don't want to fix hnsw to work the way the core optimizer\n> thinks it should, or if there's some reason it can't be done,\n> alternatives might include (1) having the cost estimate function hack\n> the count of disabled nodes and (2) adding some kind of core support\n> for an index cost estimator refusing a path entirely. I haven't tested\n> (1) so I don't know for sure that there are no issues, but I think we\n> have to do all of our cost estimating before we can think about adding\n> the path so I feel like there's a decent chance it would do what you\n> want.\n\nIt looks like amcostestimate could change the path's disabled_nodes\ncount, since that's set up before invoking amcostestimate. I guess\nit could be set to INT_MAX to have a comparable solution to before.\n\nI agree with you that it is not great that hnsw is refusing this case\nrather than finding a way to make it work, so I'm not excited about\nputting in support for refusing it in a less klugy way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Aug 2024 13:26:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 23, 2024 at 1:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It looks like amcostestimate could change the path's disabled_nodes\n> count, since that's set up before invoking amcostestimate. 
I guess\n> it could be set to INT_MAX to have a comparable solution to before.\n\nIt's probably better to add a more modest value, to avoid overflow.\nYou could add a million or so and be far away from overflow while\npresumably still being more disabled than any other path.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 13:37:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 23, 2024 at 1:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It looks like amcostestimate could change the path's disabled_nodes\n>> count, since that's set up before invoking amcostestimate. I guess\n>> it could be set to INT_MAX to have a comparable solution to before.\n\n> It's probably better to add a more modest value, to avoid overflow.\n> You could add a million or so and be far away from overflow while\n> presumably still being more disabled than any other path.\n\nBut that'd only matter if the path survived its first add_path\ntournament, which it shouldn't. If it does then you're at risk\nof the same run-time failure reported here.\n\n(Having said that, you're likely right that \"a million or so\"\nwould be a safer choice, since it doesn't require the assumption\nthat the path fails instantly.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Aug 2024 13:42:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 23/08/2024 20:11, Robert Haas wrote:\n> I find it a little weird that hnsw thinks itself able to return all\n> the tuples in an order the user chooses, but unable to return all of\n> the tuples in an arbitrary order.\n\nHNSW is weird in many ways:\n\n- There is no inherent sort order. It cannot do \"ORDER BY column\", only \nkNN-sort like \"ORDER BY column <-> value\".\n\n- It's approximate. 
It's not guaranteed to return the same set of rows \nas a sequential scan + sort.\n\n- The number of results it returns is limited by the hnsw.ef_search GUC, \ndefault 100.\n\n- It collects all the results (up to hnsw.ef_search) in memory, and only \nthen returns them. So if you tried to use it with a large number of \nresults, it can simply run out of memory.\n\nArguably all of those are bugs in HNSW, but it is what it is. The \nalgorithm is inherently approximate. Despite that, it's useful in practice.\n\n> In core, we have precedent for index\n> types that can't return individual tuples at all (gin, brin) but not\n> one that is able to return tuples in concept but has a panic attack if\n> you don't know how you want them sorted.\n\nWell, we do also have gin_fuzzy_search_limit. Two wrongs don't make it \nright, though; I'd love to get rid of that hack too somehow.\n\n> I don't quite see why you\n> couldn't just treat that case the same as ORDER BY\n> the_first_column_of_the_index, or any other arbitrary rule that you\n> want to make up. Sure, it might be more expensive than a sequential\n> scan, but the user said they didn't want a sequential scan. I'm not\n> quite sure why pgvector thinks it gets to decide that it knows better\n> than the user, or the rest of the optimizer. I don't even think I\n> really believe it would always be worse: I've seen cases where a table\n> was badly bloated and mostly empty but its indexes were not bloated,\n> and in that case an index scan can be a HUGE winner even though it\n> would normally be a lot worse than a sequential scan.\n\nSure, you could make it work. 
It could construct a vector out of thin \nair to compare with, when there's no scan key, or implement a completely \ndifferent codepath that traverses the full graph in no particular order.\n\n> If you don't want to fix hnsw to work the way the core optimizer\n> thinks it should, or if there's some reason it can't be done,\n> alternatives might include (1) having the cost estimate function hack\n> the count of disabled nodes and (2) adding some kind of core support\n> for an index cost estimator refusing a path entirely. I haven't tested\n> (1) so I don't know for sure that there are no issues, but I think we\n> have to do all of our cost estimating before we can think about adding\n> the path so I feel like there's a decent chance it would do what you\n> want.\n\nIt would seem useful for an index AM to be able to say \"nope, I can't do \nthis\". I don't remember how exactly this stuff works, but I'm surprised \nit doesn't already exist.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 21:18:32 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 8/23/24 1:11 PM, Robert Haas wrote:\r\n> On Fri, Aug 23, 2024 at 11:17 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> We hit an issue with pgvector[0] where a regular `SELECT count(*) FROM\r\n>> table`[1] is attempting to scan the index on the vector column when\r\n>> `enable_seqscan` is disabled. Credit to Andrew Kane (CC'd) for flagging it.\r\n>>\r\n>> I was able to trace this back to e2225346. 
Here is a reproducer:\r\n> \r\n> If I change EXPLAIN ANALYZE in this test to just EXPLAIN, I get this:\r\n> \r\n> Aggregate (cost=179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00..179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00\r\n> rows=1 width=8)\r\n> -> Index Only Scan using test_embedding_idx on test\r\n> (cost=179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00..179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00\r\n> rows=5 width=0)\r\n> \r\n> It took me a moment to wrap my head around this: the cost estimate is\r\n> 312 decimal digits long. Apparently hnswcostestimate() just returns\r\n> DBL_MAX when there are no scan keys because it really, really doesn't\r\n> want to do that. Before e2225346, that kept this plan from being\r\n> generated because it was (much) larger than disable_cost. But now it\r\n> doesn't, because 1 disabled node makes a path more expensive than any\r\n> possible non-disabled path. 
Since that was the whole point of the\r\n> patch, I don't feel too bad about it.\r\n> \r\n> I find it a little weird that hnsw thinks itself able to return all\r\n> the tuples in an order the user chooses, but unable to return all of\r\n> the tuples in an arbitrary order. \r\n\r\nFor HNSW, \"order\" is approximated - even when it's returning \"in the \r\norder the user chooses,\" the scan is making the best guess at what the \r\ncorrect order is based on the index structure. At the traditional \"leaf\" \r\nlevel of an index, you're actually traversing a graph-based neighborhood \r\nof values. And maybe we could say \"Hey, if you get the equivalent of a \r\ncount(*), just do the count at the bottom layer (Layer 0)\" but I think \r\nthis would be very expensive.\r\n\r\n> In core, we have precedent for index\r\n> types that can't return individual tuples at all (gin, brin) but not\r\n> one that is able to return tuples in concept but has a panic attack if\r\n> you don't know how you want them sorted. I don't quite see why you\r\n> couldn't just treat that case the same as ORDER BY\r\n> the_first_column_of_the_index, or any other arbitrary rule that you\r\n> want to make up. Sure, it might be more expensive than a sequential\r\n> scan, but the user said they didn't want a sequential scan. I'm not\r\n> quite sure why pgvector thinks it gets to decide that it knows better\r\n> than the user, or the rest of the optimizer. I don't even think I\r\n> really believe it would always be worse: I've seen cases where a table\r\n> was badly bloated and mostly empty but its indexes were not bloated,\r\n> and in that case an index scan can be a HUGE winner even though it\r\n> would normally be a lot worse than a sequential scan.\r\n\r\nThe challenge here is that HNSW is used specifically for approximating \r\nordering; it's not used to directly filter results in the traditional \r\nsense (e.g. via. a WHERE clause). It's a bit different than the others \r\nmentioned in that regard. 
However, maybe there are other options to \r\nconsider here based on this work.\r\n\r\n> If you don't want to fix hnsw to work the way the core optimizer\r\n> thinks it should, or if there's some reason it can't be done,\r\n> alternatives might include (1) having the cost estimate function hack\r\n> the count of disabled nodes and (2) adding some kind of core support\r\n> for an index cost estimator refusing a path entirely. I haven't tested\r\n> (1) so I don't know for sure that there are no issues, but I think we\r\n> have to do all of our cost estimating before we can think about adding\r\n> the path so I feel like there's a decent chance it would do what you\r\n> want.\r\n\r\nThanks for the options.\r\n\r\n> Also, while I did take the initiative to download pgvector and compile\r\n> it and hook up a debugger and figure out what was going on here, I'm\r\n> not really too sure that's my job. I do think I have a responsibility\r\n> to help maintainers of out-of-core extensions who have problems as a\r\n> result of my commits, but I also think it's fair to hope that those\r\n> maintainers will try to minimize the amount of time that I need to\r\n> spend trying to read code that I did not write and do not maintain.\r\n> Fortunately, this wasn't hard to figure out, but in a way that's kind\r\n> of the point. That DBL_MAX hack was put there by somebody who must've\r\n> understood that they were trying to use a very large cost to disable a\r\n> certain path shape completely, and it seems to me that if that person\r\n> had studied this case and the commit message for e2225346, they would\r\n> have likely understood what had happened pretty quickly. Do you think\r\n> that's an unfair feeling on my part?\r\n\r\nI don't think extension maintainers necessarily have the same level of \r\nPostgreSQL internals as you or many of the other people who frequent \r\n-hackers, so I think it's fair for them to ask questions or raise issues \r\nwith patches they don't understand. 
I was able to glean from the commit \r\nmessage that this was the commit that likely changed the behavior in \r\npgvector, but I can't immediately glean looking through the code as to \r\nwhy. (And using your logic, should an extension maintainer understand \r\nthe optimizer code when PostgreSQL is providing an interface to the \r\nextension maintainer to encapsulate its interactions)?\r\n\r\nYou can always push back and say \"Well, maybe try this, or try that\" - \r\nwhich would be a mentoring approach that could push it back on the \r\nextension maintainer, which is valid, but I don't see why an extension \r\nmaintainer can't raise an issue or ask a question here.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Fri, 23 Aug 2024 14:19:56 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 23, 2024 at 2:20 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> I don't think extension maintainers necessarily have the same level of\n> PostgreSQL internals as you or many of the other people who frequent\n> -hackers, so I think it's fair for them to ask questions or raise issues\n> with patches they don't understand. I was able to glean from the commit\n> message that this was the commit that likely changed the behavior in\n> pgvector, but I can't immediately glean looking through the code as to\n> why. (And using your logic, should an extension maintainer understand\n> the optimizer code when PostgreSQL is providing an interface to the\n> extension maintainer to encapsulate its interactions)?\n>\n> You can always push back and say \"Well, maybe try this, or try that\" -\n> which would be a mentoring approach that could push it back on the\n> extension maintainer, which is valid, but I don't see why an extension\n> maintainer can't raise an issue or ask a question here.\n\nI'm certainly not saying that extension maintainers can't raise issues\nor ask questions here. 
I just feel that the problem could have been\nanalyzed a bit more before posting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 14:29:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 23, 2024 at 2:18 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> It would seem useful for an index AM to be able to say \"nope, I can't do\n> this\". I don't remember how exactly this stuff works, but I'm surprised\n> it doesn't already exist.\n\nYeah, I think so, too. While this particular problem is due to a\nproblem with an out-of-core AM that may be doing some slightly\nquestionable things, there's not really any reason why we couldn't\nhave similar problems in core for some other reason. For example, we\ncould change amcostestimate's signature so that an extension can\nreturn true or false, with false meaning that the path can't be\nsupported. We could then change cost_index so that it can also return\ntrue or false, and then change create_index_path so it has the option\nto return NULL. Callers of create_index_path could then be adjusted\nnot to call add_path when NULL is returned.\n\nThere might be a more elegant way to do it with more refactoring, but\nthe above seems good enough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 14:36:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 23, 2024 at 2:18 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> It would seem useful for an index AM to be able to say \"nope, I can't do\n>> this\". I don't remember how exactly this stuff works, but I'm surprised\n>> it doesn't already exist.\n\n> Yeah, I think so, too. 
While this particular problem is due to a\n> problem with an out-of-core AM that may be doing some slightly\n> questionable things, there's not really any reason why we couldn't\n> have similar problems in core for some other reason. For example, we\n> could change amcostestimate's signature so that an extension can\n> return true or false, with false meaning that the path can't be\n> supported. We could then change cost_index so that it can also return\n> true or false, and then change create_index_path so it has the option\n> to return NULL. Callers of create_index_path could then be adjusted\n> not to call add_path when NULL is returned.\n\nIf we're going to do this, I'd prefer a solution that doesn't force\nAPI changes onto the vast majority of index AMs that don't have a\nproblem here.\n\nOne way could be to formalize the hack we were just discussing:\n\"To refuse a proposed path, amcostestimate can set the path's\ndisabled_nodes value to anything larger than 1\". I suspect that\nthat would actually be sufficient, since the path would then lose\nto the seqscan path in add_path even if that were disabled; but\nwe could put in a hack to prevent it from getting add_path'd at all.\n\nAnother way could be to bless what hnsw is already doing:\n\"To refuse a proposed path, amcostestimate can return an\nindexTotalCost of DBL_MAX\" (or maybe insisting on +Inf would\nbe better). 
That would still require changes comparable to\nwhat you specify above, but only in the core-code call path\nnot in every AM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Aug 2024 14:48:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Aug 23, 2024 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we're going to do this, I'd prefer a solution that doesn't force\n> API changes onto the vast majority of index AMs that don't have a\n> problem here.\n\nThat's a fair concern.\n\n> One way could be to formalize the hack we were just discussing:\n> \"To refuse a proposed path, amcostestimate can set the path's\n> disabled_nodes value to anything larger than 1\". I suspect that\n> that would actually be sufficient, since the path would then lose\n> to the seqscan path in add_path even if that were disabled; but\n> we could put in a hack to prevent it from getting add_path'd at all.\n>\n> Another way could be to bless what hnsw is already doing:\n> \"To refuse a proposed path, amcostestimate can return an\n> indexTotalCost of DBL_MAX\" (or maybe insisting on +Inf would\n> be better). That would still require changes comparable to\n> what you specify above, but only in the core-code call path\n> not in every AM.\n\nIf just setting disabled_nodes to a value larger than one works, I'd\nbe inclined to not do anything here at all, except possibly document\nthat you can do that. Otherwise, we should probably change the code\nsomehow.\n\nI find both of your proposed solutions above to be pretty inelegant,\nand I think if this problem occurred with a core AM, I'd push for an\nAPI break rather than accept the ugliness. 
\"This path is not valid\nbecause the AM cannot support it\", \"this path is crazy expensive\", and\n\"the user told us not to do it this way\" are three different things,\nand signalling two or more of them in the same way muddies the water\nin a way that I don't like. API breaks aren't free, though, so I\ncertainly understand why you're not very keen to introduce one where\nit can reasonably be avoided.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 15:05:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 23/08/2024 22:05, Robert Haas wrote:\n> On Fri, Aug 23, 2024 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we're going to do this, I'd prefer a solution that doesn't force\n>> API changes onto the vast majority of index AMs that don't have a\n>> problem here.\n> \n> That's a fair concern.\n\nYeah, although I don't think it's too bad. There are not that many \nout-of-tree index AM implementations to begin with, and we do change \nthings often enough that any interesting AM implementation will likely \nneed a few #ifdef PG_VERSION blocks for each PostgreSQL major version \nanyway. pgvector certainly does.\n\n>> One way could be to formalize the hack we were just discussing:\n>> \"To refuse a proposed path, amcostestimate can set the path's\n>> disabled_nodes value to anything larger than 1\". I suspect that\n>> that would actually be sufficient, since the path would then lose\n>> to the seqscan path in add_path even if that were disabled; but\n>> we could put in a hack to prevent it from getting add_path'd at all.\n>>\n>> Another way could be to bless what hnsw is already doing:\n>> \"To refuse a proposed path, amcostestimate can return an\n>> indexTotalCost of DBL_MAX\" (or maybe insisting on +Inf would\n>> be better). 
That would still require changes comparable to\n>> what you specify above, but only in the core-code call path\n>> not in every AM.\n> \n> If just setting disabled_nodes to a value larger than one works, I'd\n> be inclined to not do anything here at all, except possibly document\n> that you can do that. Otherwise, we should probably change the code\n> somehow.\n\nModifying the passed-in Path feels hacky. amcostestimate currently \nreturns all the estimates in *output parameters, it doesn't modify the \nPath at all.\n\n> I find both of your proposed solutions above to be pretty inelegant,\n> and I think if this problem occurred with a core AM, I'd push for an\n> API break rather than accept the ugliness. \"This path is not valid\n> because the AM cannot support it\", \"this path is crazy expensive\", and\n> \"the user told us not to do it this way\" are three different things,\n> and signalling two or more of them in the same way muddies the water\n> in a way that I don't like. API breaks aren't free, though, so I\n> certainly understand why you're not very keen to introduce one where\n> it can reasonably be avoided.\n\nThe +Inf approach seems fine to me. Or perhaps NaN. Your proposal would \ncertainly be the cleanest interface if we don't mind incurring churn to \nAM implementations.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 22:12:55 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I find both of your proposed solutions above to be pretty inelegant,\n\nThey are that. If we were working in a green field I'd not propose\nsuch things ... but we aren't. 
I believe there are now a fair number\nof out-of-core index AMs, so I'd rather not break all of them if we\ndon't have to.\n\n> and I think if this problem occurred with a core AM, I'd push for an\n> API break rather than accept the ugliness. \"This path is not valid\n> because the AM cannot support it\", \"this path is crazy expensive\", and\n> \"the user told us not to do it this way\" are three different things,\n> and signalling two or more of them in the same way muddies the water\n> in a way that I don't like.\n\nI think it's not that bad, because we can limit the knowledge of this\nhack to the amcostestimate interface, which doesn't really deal in\n\"the user told us not to do it this way\" at all. That argues against\nmy first proposal though (having amcostestimate touch disabled_nodes\ndirectly). I now think that a reasonable compromise is to say that\nsetting indexTotalCost to +Inf signals that \"the AM cannot support\nit\". That's not conflated too much with the other case, since even a\ncrazy-expensive cost estimate surely ought to be finite. We can have\ncost_index untangle that case into a separate failure return so that\nthe within-the-core-optimizer APIs remain clean.\n\nWhile that would require hnsw to make a small code change (return\n+Inf not DBL_MAX), that coding should work in back branches too,\nso they don't even need a version check.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Aug 2024 15:32:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 8/23/24 2:29 PM, Robert Haas wrote:\r\n> On Fri, Aug 23, 2024 at 2:20 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> I don't think extension maintainers necessarily have the same level of\r\n>> PostgreSQL internals as you or many of the other people who frequent\r\n>> -hackers, so I think it's fair for them to ask questions or raise issues\r\n>> with patches they don't understand. 
I was able to glean from the commit\r\n>> message that this was the commit that likely changed the behavior in\r\n>> pgvector, but I can't immediately glean looking through the code as to\r\n>> why. (And using your logic, should an extension maintainer understand\r\n>> the optimizer code when PostgreSQL is providing an interface to the\r\n>> extension maintainer to encapsulate its interactions)?\r\n>>\r\n>> You can always push back and say \"Well, maybe try this, or try that\" -\r\n>> which would be a mentoring approach that could push it back on the\r\n>> extension maintainer, which is valid, but I don't see why an extension\r\n>> maintainer can't raise an issue or ask a question here.\r\n> \r\n> I'm certainly not saying that extension maintainers can't raise issues\r\n> or ask questions here. I just feel that the problem could have been\r\n> analyzed a bit more before posting.\r\n\r\nThis assumes that the person posting the problem has the requisite \r\nexpertise to determine what the issue is. Frankly, I was happy I was \r\nable to at least trace the issue down to the particular commit and \r\nbrought what appeared to be a reliable reproducer, in absence of knowing \r\nif 1/ this was actually an issue with PG or pgvector, 2/ does it \r\nactually require a fix, or 3/ what the problem could actually be, given \r\na lack of understanding of the full inner working of the optimizer.\r\n\r\nBased on the above, I'm not sure what bar this needed to clear to begin \r\na discussion on the mailing list (which further downthread, seems to be \r\nraising some interesting points).\r\n\r\nJonathan", "msg_date": "Fri, 23 Aug 2024 17:29:10 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 8/23/24 3:32 PM, Tom Lane wrote:\r\n> Robert Haas <robertmhaas@gmail.com> writes:\r\n>> I find both of your proposed solutions above to be pretty inelegant,\r\n> \r\n> They are that. 
If we were working in a green field I'd not propose\r\n> such things ... but we aren't. I believe there are now a fair number\r\n> of out-of-core index AMs, so I'd rather not break all of them if we\r\n> don't have to.\r\n\r\nFor distribution of index AMs in the wild, it's certainly > 1 now, and \r\nincreasing. They're not the easiest extension types to build out, so \r\nit's not as widely distributed as some of the other APIs, but there are \r\na bunch out there, as well as language-specific libs (e.g. pgrx for \r\nRust) that offer wrappers around them.\r\n\r\n>> and I think if this problem occurred with a core AM, I'd push for an\r\n>> API break rather than accept the ugliness. \"This path is not valid\r\n>> because the AM cannot support it\", \"this path is crazy expensive\", and\r\n>> \"the user told us not to do it this way\" are three different things,\r\n>> and signalling two or more of them in the same way muddies the water\r\n>> in a way that I don't like.\r\n> \r\n> I think it's not that bad, because we can limit the knowledge of this\r\n> hack to the amcostestimate interface, which doesn't really deal in\r\n> \"the user told us not to do it this way\" at all. That argues against\r\n> my first proposal though (having amcostestimate touch disabled_nodes\r\n> directly). I now think that a reasonable compromise is to say that\r\n> setting indexTotalCost to +Inf signals that \"the AM cannot support\r\n> it\". That's not conflated too much with the other case, since even a\r\n> crazy-expensive cost estimate surely ought to be finite. 
We can have\r\n> cost_index untangle that case into a separate failure return so that\r\n> the within-the-core-optimizer APIs remain clean.\r\n> \r\n> While that would require hnsw to make a small code change (return\r\n> +Inf not DBL_MAX), that coding should work in back branches too,\r\n> so they don't even need a version check.\r\n\r\n+1 for this approach (I'll do a quick test in my pgvector workspace just \r\nto ensure it gets the same results in the older version).\r\n\r\nJonathan", "msg_date": "Fri, 23 Aug 2024 17:33:12 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On 8/23/24 5:33 PM, Jonathan S. Katz wrote:\r\n> On 8/23/24 3:32 PM, Tom Lane wrote:\r\n>> Robert Haas <robertmhaas@gmail.com> writes:\r\n\r\n>> I think it's not that bad, because we can limit the knowledge of this\r\n>> hack to the amcostestimate interface, which doesn't really deal in\r\n>> \"the user told us not to do it this way\" at all.  That argues against\r\n>> my first proposal though (having amcostestimate touch disabled_nodes\r\n>> directly).  I now think that a reasonable compromise is to say that\r\n>> setting indexTotalCost to +Inf signals that \"the AM cannot support\r\n>> it\".  That's not conflated too much with the other case, since even a\r\n>> crazy-expensive cost estimate surely ought to be finite.  
We can have\r\n>> cost_index untangle that case into a separate failure return so that\r\n>> the within-the-core-optimizer APIs remain clean.\r\n>>\r\n>> While that would require hnsw to make a small code change (return\r\n>> +Inf not DBL_MAX), that coding should work in back branches too,\r\n>> so they don't even need a version check.\r\n> \r\n> +1 for this approach (I'll do a quick test in my pgvector workspace just \r\n> to ensure it gets the same results in the older version).\r\n\r\n...and I confirmed the +inf approach on PG16 +pgvector does still give \r\nthe same expected result.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Fri, 23 Aug 2024 17:44:37 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Hello Robert,\n\n21.08.2024 17:29, Robert Haas wrote:\n> I went ahead and committed these patches. ...\n\nPlease take a look at the following code:\nstatic void\nlabel_sort_with_costsize(PlannerInfo *root, Sort *plan, double limit_tuples)\n{\n...\n     cost_sort(&sort_path, root, NIL,\n               lefttree->total_cost,\n               plan->plan.disabled_nodes,\n               lefttree->plan_rows,\n               lefttree->plan_width,\n               0.0,\n               work_mem,\n               limit_tuples);\n\nGiven the cost_sort() declaration:\nvoid\ncost_sort(Path *path, PlannerInfo *root,\n           List *pathkeys, int input_disabled_nodes,\n           Cost input_cost, double tuples, int width,\n           Cost comparison_cost, int sort_mem,\n           double limit_tuples)\n\nAren't the input_disabled_nodes and input_cost arguments swapped in the\nabove call?\n\n(I've discovered this with UBSan, which complained\ncreateplan.c:5457:6: runtime error: 4.40465e+09 is outside the range of representable values of type 'int'\nwhile executing a query with a large estimated cost.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 6 Sep 2024 12:00:00 +0300", 
"msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Sep 6, 2024 at 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> static void\n> label_sort_with_costsize(PlannerInfo *root, Sort *plan, double limit_tuples)\n> {\n> ...\n> cost_sort(&sort_path, root, NIL,\n> lefttree->total_cost,\n> plan->plan.disabled_nodes,\n> lefttree->plan_rows,\n> lefttree->plan_width,\n> 0.0,\n> work_mem,\n> limit_tuples);\n>\n> Given the cost_sort() declaration:\n> void\n> cost_sort(Path *path, PlannerInfo *root,\n> List *pathkeys, int input_disabled_nodes,\n> Cost input_cost, double tuples, int width,\n> Cost comparison_cost, int sort_mem,\n> double limit_tuples)\n>\n> Aren't the input_disabled_nodes and input_cost arguments swapped in the\n> above call?\n\nNice catch! I checked other callers to cost_sort, and they are all\ngood.\n\n(I'm a little surprised that this does not cause any plan diffs in the\nregression tests.)\n\nThanks\nRichard\n\n\n", "msg_date": "Fri, 6 Sep 2024 17:27:22 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Sep 6, 2024 at 5:27 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Fri, Sep 6, 2024 at 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > static void\n> > label_sort_with_costsize(PlannerInfo *root, Sort *plan, double limit_tuples)\n\n> (I'm a little surprised that this does not cause any plan diffs in the\n> regression tests.)\n\nAh I see. 
label_sort_with_costsize is only used to label the Sort\nnode nicely for EXPLAIN, and usually we do not display the cost\nnumbers in regression tests.\n\nThanks\nRichard\n\n\n", "msg_date": "Fri, 6 Sep 2024 17:51:05 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "Hello Richard,\n\n06.09.2024 12:51, Richard Guo wrote:\n> Ah I see. label_sort_with_costsize is only used to label the Sort\n> node nicely for EXPLAIN, and usually we do not display the cost\n> numbers in regression tests.\n\nIn fact, I see the error with the following (EXPLAIN-less) query:\ncreate table t (x int);\n\nselect * from t natural inner join\n(select * from (values(1)) v(x)\n   union all\n  select 1 from t t1 full join t t2 using (x),\n                t t3 full join t t4 using (x)\n);\n\n2024-09-06 10:01:48.034 UTC [696535:5] psql LOG:  statement: select * from t natural inner join\n     (select * from (values(1)) v(x)\n       union all\n      select 1 from t t1 full join t t2 using (x),\n                    t t3 full join t t4 using (x)\n     );\ncreateplan.c:5457:6: runtime error: 4.99254e+09 is outside the range of representable values of type 'int'\nSUMMARY: UndefinedBehaviorSanitizer: undefined-behavior createplan.c:5457:6 in\n\n(An UBSan-enabled build --with-blocksize=32 is required for this query to\ntrigger the failure.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 6 Sep 2024 13:10:10 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, Sep 6, 2024 at 5:27 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Fri, Sep 6, 2024 at 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > static void\n> > label_sort_with_costsize(PlannerInfo *root, Sort *plan, double limit_tuples)\n> > {\n> > ...\n> > cost_sort(&sort_path, root, NIL,\n> > lefttree->total_cost,\n> > plan->plan.disabled_nodes,\n> 
> lefttree->plan_rows,\n> > lefttree->plan_width,\n> > 0.0,\n> > work_mem,\n> > limit_tuples);\n> >\n> > Given the cost_sort() declaration:\n> > void\n> > cost_sort(Path *path, PlannerInfo *root,\n> > List *pathkeys, int input_disabled_nodes,\n> > Cost input_cost, double tuples, int width,\n> > Cost comparison_cost, int sort_mem,\n> > double limit_tuples)\n> >\n> > Aren't the input_disabled_nodes and input_cost arguments swapped in the\n> > above call?\n>\n> Nice catch! I checked other callers to cost_sort, and they are all\n> good.\n\nFixed.\n\nThanks\nRichard\n\n\n", "msg_date": "Mon, 9 Sep 2024 12:09:26 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Mon, Sep 9, 2024 at 12:09 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> Fixed.\n\nThanks to Alexander for the very good catch and to Richard for pushing the fix.\n\n(I started to respond to this last week but didn't quite get to it\nbefore I ran out of time/energy.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 9 Sep 2024 11:28:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Wed, 2024-08-21 at 10:29 -0400, Robert Haas wrote:\n> I went ahead and committed these patches. I know there's some debate\n> over whether we want to show the # of disabled nodes and if so whether\n> it should be controlled by COSTS, and I suspect I haven't completely\n> allayed David's concerns about the initial_cost_XXX functions although\n> I think that I did the right thing. But, I don't have the impression\n> that anyone is desperately opposed to the basic concept, so I think it\n> makes sense to put these into the tree and see what happens. We have\n> quite a bit of time left in this release cycle to uncover bugs, hear\n> from users or other developers, etc. about what problems there may be\n> with this. 
If we end up deciding to reverse course or need to fix a\n> bunch of stuff, so be it, but let's see what the feedback is.\n\nI am somewhat unhappy about the \"Disabled Nodes\" in EXPLAIN.\n\nFirst, the commit message confused me: it claims that the information\nis displayed with EXPLAIN ANALYZE, but it's shown with every EXPLAIN.\n\nBut that's not important. My complaints are:\n\n1. The \"disabled nodes\" are always displayed.\n I'd be happier if it were only shown for COSTS ON, but I think it\n would be best if they were only shown with VERBOSE ON.\n\n After all, the messages are pretty verbose...\n\n2. The \"disabled nodes\" are not only shown at the nodes where nodes\n were actually disabled, but also at every nodes above these nodes.\n\n This would be fine:\n\n Sort\n -> Nested Loop Join\n -> Hash Join\n -> Index Scan\n Disabled Nodes: 1\n -> Hash\n -> Index Scan\n Disabled Nodes: 1\n -> Index Scan\n Disabled Nodes: 1\n\n This is annoying:\n\n Sort\n Disabled Nodes: 3\n -> Nested Loop Join\n Disabled Nodes: 3\n -> Hash Join\n Disabled Nodes: 2\n -> Index Scan\n Disabled Nodes: 1\n -> Hash\n -> Index Scan\n Disabled Nodes: 1\n -> Index Scan\n Disabled Nodes: 1\n\nI have no idea how #2 could be implemented, but it would be nice to have.\nPlease, please, can we show the \"disabled nodes\" only with VERBOSE?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 27 Sep 2024 10:42:32 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Fri, 27 Sept 2024 at 20:42, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> 2. 
The \"disabled nodes\" are not only shown at the nodes where nodes\n> were actually disabled, but also at every nodes above these nodes.\n\nI'm also not a fan either and I'd like to see this output improved.\n\nIt seems like it's easy enough to implement some logic to detect when\na given node is disabled just by checking if the disabled_nodes count\nis higher than the sum of the disabled_nodes field of the node's\nchildren. If there are no children (a scan node) and disabled_nodes >\n0 then it must be disabled. There's even a nice fast path where we\ndon't need to check the children if disabled_nodes == 0.\n\nHere's a POC grade patch of how I'd rather see it looking.\n\nI opted to have a boolean field as I didn't see any need for an\ninteger count. I also changed things around so we always display the\nboolean property in non-text EXPLAIN. Normally, we don't mind being\nmore verbose there.\n\nI also fixed a bug in make_sort() where disabled_nodes isn't being set\nproperly. I'll do an independent patch for that if this goes nowhere.\n\nDavid", "msg_date": "Sat, 28 Sep 2024 00:04:20 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" }, { "msg_contents": "On Sat, 2024-09-28 at 00:04 +1200, David Rowley wrote:\n> On Fri, 27 Sept 2024 at 20:42, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > 2. The \"disabled nodes\" are not only shown at the nodes where nodes\n> > were actually disabled, but also at every nodes above these nodes.\n> \n> I'm also not a fan either and I'd like to see this output improved.\n> \n> It seems like it's easy enough to implement some logic to detect when\n> a given node is disabled just by checking if the disabled_nodes count\n> is higher than the sum of the disabled_nodes field of the node's\n> children. If there are no children (a scan node) and disabled_nodes >\n0 then it must be disabled. 
There's even a nice fast path where we\n> don't need to check the children if disabled_nodes == 0.\n> \n> Here's a POC grade patch of how I'd rather see it looking.\n> \n> I opted to have a boolean field as I didn't see any need for an\n> integer count. I also changed things around so we always display the\n> boolean property in non-text EXPLAIN. Normally, we don't mind being\n> more verbose there.\n> \n> I also fixed a bug in make_sort() where disabled_nodes isn't being set\n> properly. I'll do an independent patch for that if this goes nowhere.\n\nThanks, and the patch looks good.\n\nWhy did you change \"Disabled\" from an integer to a boolean?\nIf you see a join where two plans were disabled, that's useful information.\n\nI would still prefer to see the disabled nodes only in VERBOSE explain,\nbut I'm satisfied if the disabled nodes don't show up all over the place.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 30 Sep 2024 19:17:38 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: On disable_cost" } ]
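[Editorial note] The bottom-up detection rule David Rowley describes above (a node is itself disabled when its disabled_nodes counter exceeds the sum of its children's counters, with counter == 0 as a fast path) can be sketched as follows. This is a hypothetical, simplified illustration only, not PostgreSQL or pgvector source: the ToyPlan struct and toy_node_is_disabled() are invented stand-ins for the real Plan node and the EXPLAIN-time check.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical, simplified stand-in for a plan node: just the
 * disabled_nodes subtree counter and up to two children.
 */
typedef struct ToyPlan
{
    int disabled_nodes;          /* # of disabled nodes in this subtree */
    const struct ToyPlan *left;
    const struct ToyPlan *right;
} ToyPlan;

/*
 * A node is itself disabled when its subtree counter is larger than the
 * sum of its children's counters; a childless (scan) node is disabled
 * when the counter is nonzero.  disabled_nodes == 0 is the fast path:
 * nothing at or below this node is disabled.
 */
static int
toy_node_is_disabled(const ToyPlan *plan)
{
    int child_sum = 0;

    if (plan->disabled_nodes == 0)
        return 0;
    if (plan->left != NULL)
        child_sum += plan->left->disabled_nodes;
    if (plan->right != NULL)
        child_sum += plan->right->disabled_nodes;
    return plan->disabled_nodes > child_sum;
}
```

With this rule, a join sitting above one disabled scan carries disabled_nodes = 1 but is not itself flagged, which is the per-node (rather than cumulative) labeling the discussion above asks EXPLAIN to show.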
[ { "msg_contents": "This patch moves the parse analysis component of ExecuteQuery() and\nEvaluateParams() into a new transformExecuteStmt() that is called from\ntransformStmt(). This makes EXECUTE behave more like other utility\ncommands.\n\nEffects are that error messages can have position information (see \nregression test case), and it allows using external parameters in the \narguments of the EXECUTE command.\n\nI had previously inquired about this in [0] and some vague concerns were \nraised. I haven't dug very deep on this, but I figure with an actual \npatch it might be easier to review and figure out if there are any problems.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/ed2767e5-c506-048d-8ddf-280ecbc9e1b7%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 1 Nov 2019 08:07:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Refactor parse analysis of EXECUTE command" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> This patch moves the parse analysis component of ExecuteQuery() and\n> EvaluateParams() into a new transformExecuteStmt() that is called from\n> transformStmt().\n\nUhmm ... no actual patch attached?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Nov 2019 11:00:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "On 2019-11-02 16:00, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> This patch moves the parse analysis component of ExecuteQuery() and\n>> EvaluateParams() into a new transformExecuteStmt() that is called from\n>> transformStmt().\n> \n> Uhmm ... 
no actual patch attached?\n\nOops, here it is.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 4 Nov 2019 08:53:18 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "Hello.\n\nAt Mon, 4 Nov 2019 08:53:18 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2019-11-02 16:00, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >> This patch moves the parse analysis component of ExecuteQuery() and\n> >> EvaluateParams() into a new transformExecuteStmt() that is called from\n> >> transformStmt().\n> > Uhmm ... no actual patch attached?\n> \n> Oops, here it is.\n\nThe patch just moves the first half of EvaluateParams that is\nirrelevant to executor state to before portal parameters are set. I\nlooked with a suspicion that the extended protocol or SPI might be\naffected, but AFAICS they don't seem to be.\n\nI dug into the repository and found that transformExecuteStmt existed at\nthe time of implementing PREPARE-EXECUTE statements (28e82066a1) and\nwas removed by commit b9527e9840, which is related to\nplan-invalidation.\n\ngit show -s --format=%B b9527e984092e838790b543b014c0c2720ea4f11\n> In service of this, rearrange utility-statement processing so that parse\n> analysis does not assume table schemas can't change before execution for\n> utility statements (necessary because we don't attempt to re-acquire locks\n> for utility statements when reusing a stored plan). 
This requires some\n\nIsn't this related to the current structure?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 05 Nov 2019 19:27:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "út 5. 11. 2019 v 11:28 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> Hello.\n>\n> At Mon, 4 Nov 2019 08:53:18 +0100, Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote in\n> > On 2019-11-02 16:00, Tom Lane wrote:\n> > > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > >> This patch moves the parse analysis component of ExecuteQuery() and\n> > >> EvaluateParams() into a new transformExecuteStmt() that is called from\n> > >> transformStmt().\n> > > Uhmm ... no actual patch attached?\n> >\n> > Oops, here it is.\n>\n> The patch just moves the first half of EvaluateParams that is\n> irrelevant to executor state to before portal parameters are set. I\n> looked with a suspect that extended protocol or SPI are affected but\n> AFAICS it doesn't seem to.\n>\n> I dug into repository and found that transformExecuteStmt existed at\n> the time of implementing PREPARE-EXECUTE statements(28e82066a1) and\n> removed by the commit b9527e9840 which is related to\n> plan-invalidation.\n>\n> git show -s --format=%B b9527e984092e838790b543b014c0c2720ea4f11\n> > In service of this, rearrange utility-statement processing so that parse\n> > analysis does not assume table schemas can't change before execution for\n> > utility statements (necessary because we don't attempt to re-acquire\n> locks\n> > for utility statements when reusing a stored plan). 
This requires some\n>\n> Isn't this related to the current structure?\n>\n\nI think so it should be ok, because the transformation is still in same\nstatement - if I understand well.\n\nSo visibility of system catalogue or access to plan cache should not be\nchanged.\n\nRegards\n\nPavel\n\n\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>\n>\n\nút 5. 11. 2019 v 11:28 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com> napsal:Hello.\n\nAt Mon, 4 Nov 2019 08:53:18 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2019-11-02 16:00, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >> This patch moves the parse analysis component of ExecuteQuery() and\n> >> EvaluateParams() into a new transformExecuteStmt() that is called from\n> >> transformStmt().\n> > Uhmm ... no actual patch attached?\n> \n> Oops, here it is.\n\nThe patch just moves the first half of EvaluateParams that is\nirrelevant to executor state to before portal parameters are set. I\nlooked with a suspect that extended protocol or SPI are affected but\nAFAICS it doesn't seem to.\n\nI dug into repository and found that transformExecuteStmt existed at\nthe time of implementing PREPARE-EXECUTE statements(28e82066a1) and\nremoved by the commit b9527e9840 which is related to\nplan-invalidation.\n\ngit show -s --format=%B b9527e984092e838790b543b014c0c2720ea4f11\n> In service of this, rearrange utility-statement processing so that parse\n> analysis does not assume table schemas can't change before execution for\n> utility statements (necessary because we don't attempt to re-acquire locks\n> for utility statements when reusing a stored plan).  This requires some\n\nIsn't this related to the current structure?I think so it should be ok, because the transformation is still in same statement - if I understand well.  
So visibility of system catalogue or access to plan cache should not be changed.RegardsPavel\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 8 Nov 2019 08:13:55 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "po 4. 11. 2019 v 8:53 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2019-11-02 16:00, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >> This patch moves the parse analysis component of ExecuteQuery() and\n> >> EvaluateParams() into a new transformExecuteStmt() that is called from\n> >> transformStmt().\n> >\n> > Uhmm ... no actual patch attached?\n>\n> Oops, here it is.\n>\n\nI checked this patch, and I think so it's correct and wanted. It introduce\ntransform stage for EXECUTE command, and move there the argument\ntransformation.\n\nThis has sensible change - the code is much more correct now.\n\nThe patching, compilation was without any problems, make check-world too.\n\nI was little bit confused about regress tests - the patch did some code\nrefactoring and I expect so main target is same behave before and after\npatching. But the regress tests shows new feature that is just side effect\n(nice) of patch. More, the example is little bit strange - nobody will use\nprepared statements and execution in SQL function. It should be better\ncommented.\n\nI'll mark this patch as ready for commiters.\n\nRegards\n\nPavel\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\npo 4. 11. 
2019 v 8:53 odesílatel Peter Eisentraut <peter.eisentraut@2ndquadrant.com> napsal:On 2019-11-02 16:00, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> This patch moves the parse analysis component of ExecuteQuery() and\n>> EvaluateParams() into a new transformExecuteStmt() that is called from\n>> transformStmt().\n> \n> Uhmm ... no actual patch attached?\n\nOops, here it is.I checked this patch, and I think so it's correct and wanted. It introduce transform stage for EXECUTE command, and move there the argument transformation.This has sensible change - the code is much more correct now. The patching, compilation was without any problems, make check-world too. I was little bit confused about regress tests - the patch did some code refactoring and I expect so main target is same behave before and after patching. But the regress tests shows new feature that is just side effect (nice) of patch. More, the example is little bit strange - nobody will use prepared statements and execution in SQL function. 
It should be better commented.I'll mark this patch as ready for commiters.RegardsPavel\n\n-- \nPeter Eisentraut              http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 8 Nov 2019 08:38:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "On 2019-11-08 08:13, Pavel Stehule wrote:\n> I dug into repository and found that transformExecuteStmt existed at\n> the time of implementing PREPARE-EXECUTE statements(28e82066a1) and\n> removed by the commit b9527e9840 which is related to\n> plan-invalidation.\n> \n> git show -s --format=%B b9527e984092e838790b543b014c0c2720ea4f11\n> > In service of this, rearrange utility-statement processing so\n> that parse\n> > analysis does not assume table schemas can't change before\n> execution for\n> > utility statements (necessary because we don't attempt to\n> re-acquire locks\n> > for utility statements when reusing a stored plan).  This\n> requires some\n> \n> Isn't this related to the current structure?\n> \n> I think so it should be ok, because the transformation is still in same \n> statement - if I understand well.\n> \n> So visibility of system catalogue or access to plan cache should not be \n> changed.\n\nI think what that patch was addressing is, if you use a protocol-level \nprepare+execute with commands like CREATE INDEX, CREATE VIEW, or COPY \nand you change the table schema between the prepare and execute, things \nwould break, for the reasons explained in the commit message. 
So any \nparse analysis in utility statements that accesses table schemas needs \nto be done in the execute phase, not in the prepare phase, as one might \nthink.\n\nParse analysis of EXECUTE does not access any tables, so if I understood \nthis correctly, this concern doesn't apply here.\n\nInterestingly, the above commit also removed the prepare-time \ntransformation of ExplainStmt, but it was later put back and now has the \ncomment \"We used to postpone that until execution, but it's really \nnecessary to do it during the normal parse analysis phase to ensure that \nside effects of parser hooks happen at the expected time.\" So there \nappears to be a generally uneasy situation still about how to do this \ncorrectly.\n\nPerhaps something could be done about the issue \"because we don't \nattempt to re-acquire locks for utility statements when reusing a stored \nplan\"?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 Nov 2019 08:54:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "pá 8. 11. 2019 v 8:54 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2019-11-08 08:13, Pavel Stehule wrote:\n> > I dug into repository and found that transformExecuteStmt existed at\n> > the time of implementing PREPARE-EXECUTE statements(28e82066a1) and\n> > removed by the commit b9527e9840 which is related to\n> > plan-invalidation.\n> >\n> > git show -s --format=%B b9527e984092e838790b543b014c0c2720ea4f11\n> > > In service of this, rearrange utility-statement processing so\n> > that parse\n> > > analysis does not assume table schemas can't change before\n> > execution for\n> > > utility statements (necessary because we don't attempt to\n> > re-acquire locks\n> > > for utility statements when reusing a stored plan). 
This\n> > requires some\n> >\n> > Isn't this related to the current structure?\n> >\n> > I think so it should be ok, because the transformation is still in same\n> > statement - if I understand well.\n> >\n> > So visibility of system catalogue or access to plan cache should not be\n> > changed.\n>\n> I think what that patch was addressing is, if you use a protocol-level\n> prepare+execute with commands like CREATE INDEX, CREATE VIEW, or COPY\n> and you change the table schema between the prepare and execute, things\n> would break, for the reasons explained in the commit message. So any\n> parse analysis in utility statements that accesses table schemas needs\n> to be done in the execute phase, not in the prepare phase, as one might\n> think.\n>\n> Parse analysis of EXECUTE does not access any tables, so if I understood\n> this correctly, this concern doesn't apply here.\n>\n\nit should not be true - the subquery can be a expression.\n\nMinimally on SQL level is not possible do prepare on execute. So execute\nshould be evaluate as one step.\n\n\n\n> Interestingly, the above commit also removed the prepare-time\n> transformation of ExplainStmt, but it was later put back and now has the\n> comment \"We used to postpone that until execution, but it's really\n> necessary to do it during the normal parse analysis phase to ensure that\n> side effects of parser hooks happen at the expected time.\" So there\n> appears to be a generally uneasy situation still about how to do this\n> correctly.\n>\n> Perhaps something could be done about the issue \"because we don't\n> attempt to re-acquire locks for utility statements when reusing a stored\n> plan\"?\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\npá 8. 11. 
2019 v 8:54 odesílatel Peter Eisentraut <peter.eisentraut@2ndquadrant.com> napsal:On 2019-11-08 08:13, Pavel Stehule wrote:\n>     I dug into repository and found that transformExecuteStmt existed at\n>     the time of implementing PREPARE-EXECUTE statements(28e82066a1) and\n>     removed by the commit b9527e9840 which is related to\n>     plan-invalidation.\n> \n>     git show -s --format=%B b9527e984092e838790b543b014c0c2720ea4f11\n>      > In service of this, rearrange utility-statement processing so\n>     that parse\n>      > analysis does not assume table schemas can't change before\n>     execution for\n>      > utility statements (necessary because we don't attempt to\n>     re-acquire locks\n>      > for utility statements when reusing a stored plan).  This\n>     requires some\n> \n>     Isn't this related to the current structure?\n> \n> I think so it should be ok, because the transformation is still in same \n> statement - if I understand well.\n> \n> So visibility of system catalogue or access to plan cache should not be \n> changed.\n\nI think what that patch was addressing is, if you use a protocol-level \nprepare+execute with commands like CREATE INDEX, CREATE VIEW, or COPY \nand you change the table schema between the prepare and execute, things \nwould break, for the reasons explained in the commit message.  So any \nparse analysis in utility statements that accesses table schemas needs \nto be done in the execute phase, not in the prepare phase, as one might \nthink.\n\nParse analysis of EXECUTE does not access any tables, so if I understood \nthis correctly, this concern doesn't apply here.it should not be true - the subquery can be a expression.Minimally on SQL level is not possible do prepare on execute. 
So execute should be evaluate as one step.\n\nInterestingly, the above commit also removed the prepare-time \ntransformation of ExplainStmt, but it was later put back and now has the \ncomment \"We used to postpone that until execution, but it's really \nnecessary to do it during the normal parse analysis phase to ensure that \nside effects of parser hooks happen at the expected time.\"  So there \nappears to be a generally uneasy situation still about how to do this \ncorrectly.\n\nPerhaps something could be done about the issue \"because we don't \nattempt to re-acquire locks for utility statements when reusing a stored \nplan\"?\n\n-- \nPeter Eisentraut              http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 8 Nov 2019 09:03:18 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "On 2019-11-08 09:03, Pavel Stehule wrote:\n> Parse analysis of EXECUTE does not access any tables, so if I\n> understood\n> this correctly, this concern doesn't apply here.\n> \n> \n> it should not be true - the subquery can be a expression.\n\nArguments of EXECUTE cannot be subqueries.\n\n> Minimally on SQL level is not possible do prepare on execute. So execute \n> should be evaluate as one step.\n\nWell, that's kind of the question that is being discussed in this thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 8 Nov 2019 13:34:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "pá 8. 11. 
2019 v 13:34 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2019-11-08 09:03, Pavel Stehule wrote:\n> > Parse analysis of EXECUTE does not access any tables, so if I\n> > understood\n> > this correctly, this concern doesn't apply here.\n> >\n> >\n> > it should not be true - the subquery can be a expression.\n>\n> Arguments of EXECUTE cannot be subqueries.\n>\nok\n\n>\n> > Minimally on SQL level is not possible do prepare on execute. So execute\n> > should be evaluate as one step.\n>\n> Well, that's kind of the question that is being discussed in this thread.\n>\n\nI say it not cleanly - I think so this change should be safe, because\nparsing, transforming, and execution must be in one statement.\n\nRegards\n\nPavel\n\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\npá 8. 11. 2019 v 13:34 odesílatel Peter Eisentraut <peter.eisentraut@2ndquadrant.com> napsal:On 2019-11-08 09:03, Pavel Stehule wrote:\n>     Parse analysis of EXECUTE does not access any tables, so if I\n>     understood\n>     this correctly, this concern doesn't apply here.\n> \n> \n> it should not be true - the subquery can be a expression.\n\nArguments of EXECUTE cannot be subqueries.ok \n\n> Minimally on SQL level is not possible do prepare on execute. 
So execute \n> should be evaluate as one step.\n\nWell, that's kind of the question that is being discussed in this thread.I say it not cleanly - I think so this change should be safe, because parsing, transforming, and execution must be in one statement.RegardsPavel\n\n-- \nPeter Eisentraut              http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 8 Nov 2019 16:20:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-11-08 09:03, Pavel Stehule wrote:\n>> Minimally on SQL level is not possible do prepare on execute. So execute \n>> should be evaluate as one step.\n\n> Well, that's kind of the question that is being discussed in this thread.\n\nYeah. Having now taken a quick look at this patch, it makes me pretty\nqueasy. In particular, it doesn't appear to add any support for\ninvalidation of cached EXECUTE commands when their parameter expressions\nchange. You dismissed that as irrelevant because no table schemas would\nbe involved, but there's also the possibility of replacements of user\ndefined functions. I'm not sure how easy it is to create a situation\nwhere an EXECUTE statement is in plancache, but it's probably possible\n(maybe using some other PL than plpgsql). In that case, we really would\nneed the EXECUTE's transformed expressions to get invalidated if the\nuser drops or replaces a function they use.\n\nIn view of the ALTER TABLE bugs I'm struggling with over in [1], I feel\nlike this patch is probably going in the wrong direction. We should\ngenerally be striving to do all transformation of utility commands as\nlate as possible. 
As long as a plancached utility statement contains\nnothing beyond raw-parser output, it never needs invalidation.\n\nYou pointed to an old comment of mine about EXPLAIN that seems to argue\nin the other direction, but digging in the commit log, I see that it\ncame from commit 08f8d478, whose log entry is perhaps more informative\nthan the comment:\n\n Do parse analysis of an EXPLAIN's contained statement during the normal\n parse analysis phase, rather than at execution time. This makes parameter\n handling work the same as it does in ordinary plannable queries, and in\n particular fixes the incompatibility that Pavel pointed out with plpgsql's\n new handling of variable references. plancache.c gets a little bit\n grottier, but the alternatives seem worse.\n\nSo what this really is all about is still the same old issue of how we\nhandle external parameter references in utility statements. Maybe we\nought to focus on a redesign addressing that specific problem, rather\nthan nibbling around the edges. It seems like the core of the issue\nis that we have mechanisms for PLs to capture parameter references\nduring parse analysis, and those hooks aren't managed in a way that\nlets them be invoked if we do parse analysis during utility statement\nexecution. But we *need* to be able to do that. ALTER TABLE already\ndoes do that, yet we need to postpone its analysis to even later than\nit's doing it now.\n\nAnother issue in all this is that for many utility statements, you\ndon't actually want injections of PL parameter references, for instance\nit'd make little sense to allow \"alter table ... add check (f1 > p1)\"\nif p1 is a local variable in the function doing the ALTER. 
It's\nprobably time to have some explicit recognition and management of such\ncases, rather than just dodging them by not invoking the hooks.\n\ntl;dr: I think that we need to embrace parse analysis during utility\nstatement execution as a fully supported thing, not a stepchild.\nTrying to make it go away is the wrong approach.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/10365.1558909428@sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 08 Nov 2019 11:21:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "After digesting the discussion, let's reshuffle this a bit.\n\nI have committed the change that adds the error location in one place. \nThat worked independently.\n\nAttached is a new patch that refactors things a bit to pass the \nParseState into functions such as PrepareQuery() and ExecuteQuery() \ninstead of passing the query string and query environment as a separate \narguments. We had already done that for most utility commands; there \nwere just some left that happened to be involved in the current thread's \ndiscussion anyway.\n\nThat's a nice cosmetic improvement in any case, but I think that it \nwould also help with the issue of passing parameters into some utility \ncommands later on. 
I will look into that some other time.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 29 Nov 2019 11:39:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThis patch replaced query string by parse state on few places. It increase code consistency.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 02 Jan 2020 13:26:04 +0000", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" }, { "msg_contents": "On 2020-01-02 14:26, Pavel Stehule wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> This patch replaced query string by parse state on few places. It increase code consistency.\n> \n> The new status of this patch is: Ready for Committer\n\ncommitted, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 4 Jan 2020 13:43:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Refactor parse analysis of EXECUTE command" } ]
[ { "msg_contents": "Hi. Sorry for my English. I want to once again open the topic of 64 bit transaction id. I did not manage to find in the archive of the option that I want to discuss, so I write. If I searched poorly, then please forgive me. The idea is not very original and probably has already been considered, again I repeat - I did not find it. Therefore, please do not scold me severely. In discussions of 64-bit transaction id, I did not find mention of an algorithm for storing them, as it was done, for example, in MS SQL Server. What if instead of 2 fields (xmin and xmax) with a total length of 64 bits - use 1 field (let's call it xid) with a length of 64 bits in tuple header? In this field store the xid of the transaction that created the version. In this case, the new transaction in order to understand whether the read version is suitable for it or not, will have to read the next version as well. Those. The downside of such decision is of course an increase in I / O. Transactions will have to read the +1 version. On the plus side, the title of the tuple remains the same length. Regards, Eremin Pavel.", "msg_date": "Fri, 01 Nov 2019 12:05:12 +0300", "msg_from": "=?utf-8?B?0J/QsNCy0LXQuyDQldGA0ZHQvNC40L0=?= <shnoor111gmail@yandex.ru>", "msg_from_op": true, "msg_subject": "64 bit transaction id" }, { "msg_contents": "Hi\n\npá 1. 11. 2019 v 10:11 odesílatel Павел Ерёмин <shnoor111gmail@yandex.ru>\nnapsal:\n\n> Hi.\n> sorry for my English.\n> I want to once again open the topic of 64 bit transaction id. I did not\n> manage to find in the archive of the option that I want to discuss, so I\n> write. If I searched poorly, then please forgive me.\n> The idea is not very original and probably has already been considered,\n> again I repeat - I did not find it. 
Therefore, please do not scold me\n> severely.\n> In discussions of 64-bit transaction id, I did not find mention of an\n> algorithm for storing them, as it was done, for example, in MS SQL Server.\n> What if instead of 2 fields (xmin and xmax) with a total length of 64 bits\n> - use 1 field (let's call it xid) with a length of 64 bits in tuple header?\n> In this field store the xid of the transaction that created the version. In\n> this case, the new transaction in order to understand whether the read\n> version is suitable for it or not, will have to read the next version as\n> well. Those. The downside of such decision is of course an increase in I /\n> O. Transactions will have to read the +1 version. On the plus side, the\n> title of the tuple remains the same length.\n>\n\nis 32 bit tid really problem? Why you need to know state of last 2^31\ntransactions? Is not problem in too low usage (or maybe too high overhead)\nof VACUUM FREEZE.\n\nI am not sure if increasing this range can has much more fatal problems\n(maybe later)\n\nPavel\n\n\n\n>\n> regards, Eremin Pavel.\n>\n\nHipá 1. 11. 2019 v 10:11 odesílatel Павел Ерёмин <shnoor111gmail@yandex.ru> napsal:Hi.sorry for my English.I want to once again open the topic of 64 bit transaction id. I did not manage to find in the archive of the option that I want to discuss, so I write. If I searched poorly, then please forgive me.The idea is not very original and probably has already been considered, again I repeat - I did not find it. Therefore, please do not scold me severely.In discussions of 64-bit transaction id, I did not find mention of an algorithm for storing them, as it was done, for example, in MS SQL Server.What if instead of 2 fields (xmin and xmax) with a total length of 64 bits - use 1 field (let's call it xid) with a length of 64 bits in tuple header? In this field store the xid of the transaction that created the version. 
In this case, the new transaction in order to understand whether the read version is suitable for it or not, will have to read the next version as well. Those. The downside of such  decision is of course an increase in I / O. Transactions will have to read the +1 version. On the plus side, the title of the tuple remains the same length.is 32 bit tid really problem? Why you need to know state of last 2^31 transactions? Is not problem in too low usage (or maybe too high overhead) of VACUUM FREEZE.I am not sure if increasing this range can has much more fatal problems (maybe later)Pavel  regards, Eremin Pavel.", "msg_date": "Fri, 1 Nov 2019 10:25:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Fri, Nov 01, 2019 at 10:25:17AM +0100, Pavel Stehule wrote:\n>Hi\n>\n>pá 1. 11. 2019 v 10:11 odesílatel Павел Ерёмин <shnoor111gmail@yandex.ru>\n>napsal:\n>\n>> Hi.\n>> sorry for my English.\n>> I want to once again open the topic of 64 bit transaction id. I did not\n>> manage to find in the archive of the option that I want to discuss, so I\n>> write. If I searched poorly, then please forgive me.\n>> The idea is not very original and probably has already been considered,\n>> again I repeat - I did not find it. Therefore, please do not scold me\n>> severely.\n>> In discussions of 64-bit transaction id, I did not find mention of an\n>> algorithm for storing them, as it was done, for example, in MS SQL Server.\n>> What if instead of 2 fields (xmin and xmax) with a total length of 64 bits\n>> - use 1 field (let's call it xid) with a length of 64 bits in tuple header?\n>> In this field store the xid of the transaction that created the version. In\n>> this case, the new transaction in order to understand whether the read\n>> version is suitable for it or not, will have to read the next version as\n>> well. Those. 
The downside of such decision is of course an increase in I /\n>> O. Transactions will have to read the +1 version. On the plus side, the\n>> title of the tuple remains the same length.\n>>\n>\n>is 32 bit tid really problem? Why you need to know state of last 2^31\n>transactions? Is not problem in too low usage (or maybe too high overhead)\n>of VACUUM FREEZE.\n>\n\nIt certainly can be an issue for large and busy systems, that may need\nanti-wraparoud vacuum every couple of days. If that requires rewriting a\ncouple of TB of data, it's not particularly nice. That's why 64-bit XIDs\nwere discussed repeatedly in the past, and it's likely to get even more\npressing as the systems get larger.\n\n>I am not sure if increasing this range can has much more fatal problems\n>(maybe later)\n>\n\nWell, not fatal, but naive approaches can increase per-tuple overhead.\nAnd we already have plenty of that, hence there were proposals to use\npage epochs and so on.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 1 Nov 2019 18:10:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Fri, Nov 01, 2019 at 12:05:12PM +0300, Павел Ерёмин wrote:\n> Hi.\n> sorry for my English.\n> I want to once again open the topic of 64 bit transaction id. I did not\n> manage to find in the archive of the option that I want to discuss, so I\n> write. If I searched poorly, then please forgive me.\n> The idea is not very original and probably has already been considered,\n> again I repeat - I did not find it. 
Therefore, please do not scold me\n> severely.\n> In discussions of 64-bit transaction id, I did not find mention of an\n> algorithm for storing them, as it was done, for example, in MS SQL Server.\n> What if instead of 2 fields (xmin and xmax) with a total length of 64 bits\n> - use 1 field (let's call it xid) with a length of 64 bits in tuple\n> header? In this field store the xid of the transaction that created the\n> version. In this case, the new transaction in order to understand whether\n> the read version is suitable for it or not, will have to read the next\n> version as well. Those. The downside of such  decision is of course an\n> increase in I / O. Transactions will have to read the +1 version. On the\n> plus side, the title of the tuple remains the same length.\n>  \n\nI think that assumes we can easily identify the next version of a tuple,\nand I don't think we can do that. We may be able to do that for for HOT\nchains, but that only works when the next version fits onto the same\npage (and does not update indexed columns). But when we store the new\nversion on a separate page, we don't have any link between those tuples.\nAnd adding it may easily mean more overhead than the 8B we'd save by\nonly storing a single XID.\n\nIMO the most promising solution to this is the \"page epoch\" approach\ndiscussed some time ago (1-2 years?).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 1 Nov 2019 18:14:50 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": " The proposed option does not need to change the length of either the page header or tuple header. Therefore, you will not need to physically change the data. 
regards\n", "msg_date": "Sat, 02 Nov 2019 19:07:17 +0300", "msg_from": "=?utf-8?B?0J/QsNCy0LXQuyDQldGA0ZHQvNC40L0=?= <shnoor111gmail@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Sat, Nov 02, 2019 at 07:07:17PM +0300, Павел Ерёмин wrote:\n>  \n> The proposed option does not need to change the length of either the page\n> header or tuple header. Therefore, you will not need to physically change\n> the data.\n>  \n\nSo how do you link the tuple versions together? Clearly, that has to be\nstored somewhere ...\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 2 Nov 2019 19:15:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "The proposed option is not much different from what it is now.\nWe are not trying to save some space - we will reuse the existing one. We\njust work in 64 bit transaction counters. Correct me if I'm wrong - the\naddress of the next version of the line is stored in the 6 byte field\nt_cid in the tuple header - which is not attached to the current page in\nany way - and can be stored anywhere in the table. Nothing changes.\n\nI often explain things very poorly, but I will try.\nFor example:\nEach transaction is assigned a unique 64-bit xid.\nIn the tuple header, we replace 32-bit xmin and xmax with one 64-bit field - let's call it xid.\nSuppose\nTransaction 1 does INSERT.\nThe first version is created (Tuple1).\nTuple1.Tuple_header.xid = Transaction1.xid and Tuple1.Tuple_header.t_cid = 0;\nTransaction 3 (started after transaction 1) does UPDATE.\nThe second version is created (Tuple2).\nTuple1.Tuple_header.t_cid = (address) Tuple2;\nTuple2.Tuple_header.xid = Transaction3.xid and Tuple2.Tuple_header.t_cid = 0;\nTransaction 2 (started between transaction 1 and transaction 3) makes SELECT.\nIt reads Tuple1. Transaction 2 sees that Tuple1.Tuple_header.xid < Transaction2.xid, sees that Tuple1.Tuple_header.t_cid is filled, follows it and reads the version Tuple2.\n", "msg_date": "Sat, 02 Nov 2019 23:35:09 +0300", "msg_from": "=?utf-8?B?0J/QsNCy0LXQuyDQldGA0ZHQvNC40L0=?= <shnoor111gmail@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Sat, Nov 02, 2019 at 11:35:09PM +0300, Павел Ерёмин wrote:\n> The proposed option is not much different from what it is now.\n> We are not trying to save some space - we will reuse the existing one. We\n> just work in 64 bit transaction counters. 
Correct me if I'm wrong - the\n> address of the next version of the line is stored in the 6 byte field\n> t_cid in the tuple header - which is not attached to the current page in\n> any way - and can be stored anywhere in the table. Nothing changes.\n\nI think you mean t_ctid, not t_cid (which is a 4-byte CommandId, not any\nsort of item pointer).\n\nI think this comment from htup_details.h explains the issue:\n\n * ... Beware however that VACUUM might\n * erase the pointed-to (newer) tuple before erasing the pointing (older)\n * tuple. Hence, when following a t_ctid link, it is necessary to check\n * to see if the referenced slot is empty or contains an unrelated tuple.\n * Check that the referenced tuple has XMIN equal to the referencing tuple's\n * XMAX to verify that it is actually the descendant version and not an\n * unrelated tuple stored into a slot recently freed by VACUUM. If either\n * check fails, one may assume that there is no live descendant version.\n\nNow, imagine you have a tuple that gets updated repeatedly (say, 3x) and\neach version gets to a different page. Say, pages #1, #2, #3. And then\nVACUUM happens on some of the \"middle\" page (this may happen when trying\nto fit new row onto a page to allow HOT, but it might happen even during\nregular VACUUM).\n\nSo we started with 3 tuples on pages #1, #2, #3, but now we have this\n\n #1 - tuple exists, points to tuple on page #2\n #2 - tuple no longer exists, cleaned up by vacuum\n #3 - tuple exists\n\nThe scheme you proposed requires existence of all the tuples in the\nchain to determine visibility. When tuple #2 no longer exists, it's\nimpossible to decide whether tuple on page #1 is visible or not.\n\nThis also significantly increases the amount of random I/O, pretty much\nby factor of 2, because whenever you look at a row, you also have to\nlook at the \"next version\" which may be on another page. That's pretty\nbad, both for I/O and cache hit ratio. 
I don't think that's a reasonable\ntrade-off (at least compared to simply making the XIDs 64bit).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 3 Nov 2019 00:20:43 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "I completely agree with all of the above. Therefore, the proposed mechanism may entail larger improvements (and not only VACUUM).\nI can offer the following solution.\nFor VACUUM, create a hash table.\nVACUUM scanning the table sees that the version (tuple1) has t_ctid filled and refers to the address tuple2, it creates a structure into which it writes the address tuple1, tuple1.xid, length tuple1 (well, and other information that is needed), puts this structure in the hash table by key tuple2 addresses.\nVACUUM reaches tuple2, checks the address of tuple2 in the hash table - if it finds it, it evaluates the connection between them and makes a decision on cleaning.\n\nregards\n", "msg_date": "Sun, 03 Nov 2019 14:17:15 +0300", "msg_from": "=?utf-8?B?0J/QsNCy0LXQuyDQldGA0ZHQvNC40L0=?= <shnoor111gmail@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Sun, Nov 03, 2019 at 02:17:15PM +0300, Павел Ерёмин wrote:\n> I completely agree with all of the above. 
Therefore, the proposed\n> mechanism may entail larger improvements (and not only VACUUM).\n\nI think the best thing you can do is try implementing this ...\n\nI'm afraid the \"improvements\" essentially mean making various important\nparts of the system much more complicated and expensive. There's a\ntrade-off between saving 8B per row and additional overhead (during\nvacuum etc.), and it does not seem like a winning strategy. What started\nas \"we can simply look at the next row version\" is clearly way more\ncomplicated and expensive.\n\nThe trouble here is that it adds dependency between pages in the data\nfile. That for example means that during cleanup of a page it may be \nnecessary to modify the other page, when originally that would be \nread-only in that checkpoint interval. That's essentially write \namplification, and may significantly increase the amount of WAL due to \ngenerating FPW for the other page.\n\n> I can offer the following solution.\n> For VACUUM, create a hash table.\n> VACUUM scanning the table sees that the version (tuple1) has t_ctid filled\n> and refers to the address tuple2, it creates a structure into which it\n> writes the address tuple1, tuple1.xid, length tuple1 (well, and other\n> information that is needed), puts this structure in the hash table by key\n> tuple2 addresses.\n> VACUUM reaches tuple2, checks the address of tuple2 in the hash table - if\n> it finds it, it evaluates the connection between them and makes a decision\n> on cleaning.\n> \n\nWe know VACUUM is already pretty expensive, so making it even more\nexpensive seems pretty awful. And the proposed solution seems damn\nexpensive. We already do something similar for indexes - we track\npointers for removed rows, so that we can remove them from indexes. 
And\nit's damn expensive because we don't know where in the index the tuples\nare - so we have to scan the whole indexes.\n\nThis would mean we have to do the same thing for table, because we don't\nknow where in the table are the older versions of those rows, because we\ndon't know where the other rows are. That seems mighty expensive.\n\nNot to mention that this does nothing for page-level vacuum, which we\ndo when trying to fit another row on a page (e.g. for HOT). This has to\nbe absolutely cheap, we certainly are not going to do lookups of other\npages or looking for older versions of the row, and so on.\n\nBeing able to do visibility decisions based on the tuple alone (or\npossibly page-level + tuple information) has a lot of value, and I don't\nthink we want to make this more complicated.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 3 Nov 2019 20:15:22 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Sat, Nov 2, 2019 at 6:15 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Fri, Nov 01, 2019 at 12:05:12PM +0300, Павел Ерёмин wrote:\n> > Hi.\n> > sorry for my English.\n> > I want to once again open the topic of 64 bit transaction id. I did not\n> > manage to find in the archive of the option that I want to discuss, so I\n> > write. If I searched poorly, then please forgive me.\n> > The idea is not very original and probably has already been considered,\n> > again I repeat - I did not find it. 
Therefore, please do not scold me\n> > severely.\n> > In discussions of 64-bit transaction id, I did not find mention of an\n> > algorithm for storing them, as it was done, for example, in MS SQL Server.\n> > What if instead of 2 fields (xmin and xmax) with a total length of 64 bits\n> > - use 1 field (let's call it xid) with a length of 64 bits in tuple\n> > header? In this field store the xid of the transaction that created the\n> > version. In this case, the new transaction in order to understand whether\n> > the read version is suitable for it or not, will have to read the next\n> > version as well. Those. The downside of such decision is of course an\n> > increase in I / O. Transactions will have to read the +1 version. On the\n> > plus side, the title of the tuple remains the same length.\n> >\n>\n> I think that assumes we can easily identify the next version of a tuple,\n> and I don't think we can do that. We may be able to do that for for HOT\n> chains, but that only works when the next version fits onto the same\n> page (and does not update indexed columns). But when we store the new\n> version on a separate page, we don't have any link between those tuples.\n> And adding it may easily mean more overhead than the 8B we'd save by\n> only storing a single XID.\n>\n> IMO the most promising solution to this is the \"page epoch\" approach\n> discussed some time ago (1-2 years?).\n\nThere have been so many discussions of this topic that it's hard to search for.\n\nSince we have in fact begun to take some baby steps towards using 64\nbit transaction IDs in a few places, I decided to create a new wiki\npage to try to keep track of the various discussions. 
If you know\nwhere to find the 'best' discussions (for example the one where, if I\nrecall correctly, it was Heikki who proposed a 'reference'\nFullTransactionId on the page header) and any proposals that came with\npatches, then I'd be grateful if you could add links to it to this\nwiki page!\n\nhttps://wiki.postgresql.org/wiki/FullTransactionId\n\n\n", "msg_date": "Mon, 4 Nov 2019 10:41:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "And yet, if I try to implement a similar mechanism, if successful, will my revision be considered?\n\nregards\n", "msg_date": "Mon, 04 Nov 2019 16:39:44 +0300", "msg_from": "=?utf-8?B?0J/QsNCy0LXQuyDQldGA0ZHQvNC40L0=?= <shnoor111gmail@yandex.ru>", "msg_from_op": true, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Mon, Nov 04, 2019 at 04:39:44PM +0300, Павел Ерёмин wrote:\n> And yet, if I try to implement a similar mechanism, if successful, will my\n> revision be considered?\n>  \n\nWhy wouldn't it be considered? 
If you submit a patch that demonstrably\n> improves the behavior (in this case reduces per-tuple overhead without\n> causing significant issues elsewhere), we'd be crazy not to consider it.\n\nAnd \"without causing significant issues elsewhere\" unfortunately\nincludes continuing to allow pg_upgrade to work.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Nov 2019 10:04:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:\n>Hi,\n>\n>(I've not read the rest of this thread yet)\n>\n>On 2019-11-04 16:07:23 +0100, Tomas Vondra wrote:\n>> On Mon, Nov 04, 2019 at 04:39:44PM +0300, Павел Ерёмин wrote:\n>> > And yet, if I try to implement a similar mechanism, if successful, will my\n>> > revision be considered?\n>> >  \n>>\n>> Why wouldn't it be considered? If you submit a patch that demonstrably\n>> improves the behavior (in this case reduces per-tuple overhead without\n>> causing significant issues elsewhere), we'd be crazy not to consider it.\n>\n>And \"without causing significant issues elsewhere\" unfortunately\n>includes continuing to allow pg_upgrade to work.\n>\n\nYeah. I suppose we could have a different AM implementing this, but\nmaybe that's not possible ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 4 Nov 2019 19:39:18 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "Hi,\n\nOn 2019-11-04 19:39:18 +0100, Tomas Vondra wrote:\n> On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:\n> > And \"without causing significant issues elsewhere\" unfortunately\n> > includes continuing to allow pg_upgrade to work.\n\n> Yeah. 
I suppose we could have a different AM implementing this, but\n> maybe that's not possible ...\n\nEntirely possible. But the amount of code duplication / unnecessary\nbranching and the user confusion from two very similar AMs, would have\nto be weighed against the benefits.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Nov 2019 10:44:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Mon, Nov 04, 2019 at 10:44:53AM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2019-11-04 19:39:18 +0100, Tomas Vondra wrote:\n>> On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:\n>> > And \"without causing significant issues elsewhere\" unfortunately\n>> > includes continuing to allow pg_upgrade to work.\n>\n>> Yeah. I suppose we could have a different AM implementing this, but\n>> maybe that's not possible ...\n>\n>Entirely possible. But the amount of code duplication / unnecessary\n>branching and the user confusion from two very similar AMs, would have\n>to be weighed against the benefits.\n>\n\nAgreed. I think code complexity is part of the trade-off. 
IMO it's fine\nto hack existing heap AM initially, and only explore the separate AM if\nthat turns out to be promising.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Mon, 4 Nov 2019 20:44:55 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Tue, Nov 5, 2019 at 8:45 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Mon, Nov 04, 2019 at 10:44:53AM -0800, Andres Freund wrote:\n> >On 2019-11-04 19:39:18 +0100, Tomas Vondra wrote:\n> >> On Mon, Nov 04, 2019 at 10:04:09AM -0800, Andres Freund wrote:\n> >> > And \"without causing significant issues elsewhere\" unfortunately\n> >> > includes continuing to allow pg_upgrade to work.\n> >\n> >> Yeah. I suppose we could have a different AM implementing this, but\n> >> maybe that's not possible ...\n> >\n> >Entirely possible. But the amount of code duplication / unnecessary\n> >branching and the user confusion from two very similar AMs, would have\n> >to be weighed against the benefits.\n> >\n>\n> Agreed. I think code complexity is part of the trade-off. IMO it's fine\n> to hack existing heap AM initially, and only explore the separate AM if\n> that turns out to be promising.\n\nI thought a bit about how to make a minimally-different-from-heap\nnon-freezing table AM using 64 bit xids, as a thought experiment when\ntrying to understand or explain to others what zheap is about.\nCommitted transactions are easy (you don't have to freeze fxid\nreferences from the ancient past because they don't wrap around so\nthey always look old), but how do you deal with *aborted* transactions\nwhen truncating the CLOG (given that our current rule is \"if it's\nbefore the CLOG begins, it must be committed\")? 
I see three\npossibilities: (1) don't truncate the CLOG anymore (use 64 bit\naddressing and let it leak disk forever, like we did before commit\n2589735d and later work), (2) freeze aborted transactions only, using\na wraparound vacuum (and now you have failed, if the goal was to avoid\nhaving to scan all tuples periodically to freeze stuff, though\nadmittedly it will require less IO to freeze only the aborted\ntransactions), (3) go and remove aborted fxid references eagerly, when\nyou roll back (this could be done using the undo technology that we\nhave been developing to support zheap). Another way to explain (3) is\nthat this hypothetical table AM, let's call it \"yheap\", takes the\nminimum parts of the zheap technology stack required to get rid of\nvacuum-for-wraparound, without doing in-place updates or any of that\nhard stuff. To make this really work you'd also have to deal with\nmultixacts, which also require freezing. If that all sounds too\ncomplicated, you're back to (2) which seems a bit weak to me. Or\nperhaps I'm missing something?\n\n\n", "msg_date": "Tue, 5 Nov 2019 09:34:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Tue, Nov 5, 2019 at 09:34:52AM +1300, Thomas Munro wrote:\n> On Tue, Nov 5, 2019 at 8:45 AM Tomas Vondra\n> > Agreed. I think code complexity is part of the trade-off. 
IMO it's fine\n> > to hack existing heap AM initially, and only explore the separate AM if\n> > that turns out to be promising.\n> \n> I thought a bit about how to make a minimally-diffferent-from-heap\n> non-freezing table AM using 64 bit xids, as a thought experiment when\n> trying to understand or explain to others what zheap is about.\n> Committed transactions are easy (you don't have to freeze fxid\n> references from the ancient past because they don't wrap around so\n> they always look old), but how do you deal with *aborted* transactions\n> when truncating the CLOG (given that our current rule is \"if it's\n> before the CLOG begins, it must be committed\")? I see three\n> possibilities: (1) don't truncate the CLOG anymore (use 64 bit\n> addressing and let it leak disk forever, like we did before commit\n> 2589735d and later work), (2) freeze aborted transactions only, using\n> a wraparound vacuum (and now you have failed, if the goal was to avoid\n> having to scan all tuples periodically to freeze stuff, though\n> admittedly it will require less IO to freeze only the aborted\n> transactions), (3) go and remove aborted fxid references eagerly, when\n> you roll back (this could be done using the undo technology that we\n> have been developing to support zheap). Another way to explain (3) is\n> that this hypothetical table AM, let's call it \"yheap\", takes the\n> minimum parts of the zheap technology stack required to get rid of\n> vacuum-for-wraparound, without doing in-place updates or any of that\n> hard stuff. To make this really work you'd also have to deal with\n> multixacts, which also require freezing. If that all sounds too\n> complicated, you're back to (2) which seems a bit weak to me. Or\n> perhaps I'm missing something?\n\nThe above is a very good summary of the constraints that have led to our\ncurrent handling of XID wraparound. 
If we are concerned about excessive\nvacuum freeze overhead, why is the default autovacuum_freeze_max_age =\n200000000 so low? That causes freezing when the pg_xact directory holds\n200 million xids or 50 megabytes of xid status?\n\nAs far as I understand it, we cause the database to stop writes when the\nxid counter gets within 2 billion xids of the current transaction\ncounter, so 200 million is only 1/10th of the way to that limit, and even then, I\nam not sure why we couldn't make it stop writes at 3 billion or\nsomething. 
My point is that increasing the default\nautovacuum_freeze_max_age value seems like an easy way to reduce vacuum\nfreeze. (While the visibility map helps keep vacuum freeze from\nreading all heap pages, we still need to read all index pages.)\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 7 Nov 2019 10:28:31 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" }, { "msg_contents": "On Thu, Nov 7, 2019 at 10:28 AM Bruce Momjian <bruce@momjian.us> wrote:\n> The above is a very good summary of the constraints that have led to our\n> current handling of XID wraparound. If we are concerned about excessive\n> vacuum freeze overhead, why is the default autovacuum_freeze_max_age =\n> 200000000 so low? That causes freezing when the pg_xact directory holds\n> 200 million xids or 50 megabytes of xid status?\n>\n> As far as I understand it, we cause the database to stop writes when the\n> xid counter gets within 2 billion xids of the current transaction\n> counter, so 200 million is only 1/10th to that limit, and even then, I\n> am not sure why we couldn't make it stop writes at 3 billion or\n> something. 
It's\ntrue that a lot of people only hit the limit because something has\ngone wrong, like they've forgotten about a prepared transaction or an\nunused replication slot, but still, on high-velocity systems you can't\nafford to cut it too close because you're still going to be burning\nthrough XIDs while vacuum is running.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 6 Dec 2019 08:30:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64 bit transaction id" } ]
[ { "msg_contents": "Hi, hackers\n\nAs the $Subject, does anyone have one? I'd like to refer to it, and\nwrite an example for people who are also looking for the document.\n\nThanks.\n\n-- \nAdam Lee\n\n\n", "msg_date": "Fri, 1 Nov 2019 20:18:02 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": true, "msg_subject": "Looking for a demo of extensible nodes" }, { "msg_contents": "Hi,\n\nI've a basic experimental extension where I use extensible nodes. This is\nthe commit which adds the extensible node to the project:\nhttps://github.com/onderkalaci/pgcolor/commit/10cba5d02a828dbee4bc140f5e88d6c44b40e5c2\n\nHope that gives you some pointers.\n\n\nOn Fri, Nov 1, 2019 at 1:18 PM Adam Lee <ali@pivotal.io> wrote:\n\n> Hi, hackers\n>\n> As the $Subject, does anyone have one? I'd like to refer to it, and\n> write an example for people who is also looking for the document.\n>\n> Thanks.\n>\n> --\n> Adam Lee\n>\n>\n>\n", "msg_date": "Fri, 1 Nov 2019 17:43:44 +0100", "msg_from": "Onder Kalaci <onder@citusdata.com>", "msg_from_op": false, "msg_subject": "Re: Looking for a demo of extensible nodes" } ]
[ { "msg_contents": "Hi,\n\n\nAs per the following code, t1 is a remote table through postgres_fdw:\n\n\n test=# BEGIN;\n BEGIN\n test=# SELECT * FROM t1;\n ...\n\n test=# PREPARE TRANSACTION 'gxid1';\n ERROR:  cannot prepare a transaction that modified remote tables\n\n\nI have attached a patch to the documentation that adds remote tables to\nthe list of objects where any operation prevents using a prepared\ntransaction; currently it just mentions \"operations involving\ntemporary tables or the session's temporary namespace\".\n\n\nThe second patch modifies the message returned by postgres_fdw; as per the\nSELECT statement above, the message should be more comprehensible as:\n\n    ERROR:  cannot PREPARE a transaction that has operated on remote tables\n\nlike for temporary objects:\n\n    ERROR:  cannot PREPARE a transaction that has operated on temporary\nobjects\n\n\nBest regards,\n\n--\n\nGilles\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Fri, 1 Nov 2019 17:29:23 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "[PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Fri, Nov 01, 2019 at 05:29:23PM +0100, Gilles Darold wrote:\n> I have attached a patch to the documentation that adds remote tables to\n> the list of objects where any operation prevent using a prepared\n> transaction, currently it is just notified \"operations involving\n> temporary tables or the session's temporary namespace\".\n\nPerhaps we had better use foreign tables for the error message and the\ndocs?\n--\nMichael", "msg_date": "Sat, 2 Nov 2019 16:31:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." 
}, { "msg_contents": "Le 02/11/2019 à 08:31, Michael Paquier a écrit :\n> On Fri, Nov 01, 2019 at 05:29:23PM +0100, Gilles Darold wrote:\n>> I have attached a patch to the documentation that adds remote tables to\n>> the list of objects where any operation prevent using a prepared\n>> transaction, currently it is just notified \"operations involving\n>> temporary tables or the session's temporary namespace\".\n> Perhaps we had better use foreign tables for the error message and the\n> docs?\n> --\n> Michael\n\n\nAgree, attached is a new version of the patches that replaces the word remote\nwith foreign.\n\n--\n\nGilles", "msg_date": "Sun, 3 Nov 2019 09:12:38 +0100", "msg_from": "Gilles Darold <gillesdarold@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Gilles,\n\nOn Sat, Nov 2, 2019 at 1:29 AM Gilles Darold <gilles@darold.net> wrote:\n> As per the following code, t1 is a remote table through postgres_fdw:\n\n> test=# BEGIN;\n> BEGIN\n> test=# SELECT * FROM t1;\n> ...\n>\n> test=# PREPARE TRANSACTION 'gxid1';\n> ERROR: cannot prepare a transaction that modified remote tables\n\n> I have attached a patch to the documentation that adds remote tables to the list of objects where any operation prevent using a prepared transaction, currently it is just notified \"operations involving temporary tables or the session's temporary namespace\".\n\nI'm not sure that's a good idea because file_fdw works well for\nPREPARE TRANSACTION! 
How about adding a note about that to the\nsection of Transaction Management in the postgres_fdw documentation\nlike the attached?\n\n> The second patch modify the message returned by postgres_fdw as per the SELECT statement above the message should be more comprehensible with:\n>\n> ERROR: cannot PREPARE a transaction that has operated on remote tables\n>\n> like for temporary objects:\n>\n> ERROR: cannot PREPARE a transaction that has operated on temporary objects\n\n+1 (I too think it would be better to use \"foreign tables\" rather\nthan \"remote tables\" as pointed out by Michael-san, but I think it might\nbe much better to use \"postgres_fdw foreign tables\", not just \"foreign\ntables\".)\n\nBest regards,\nEtsuro Fujita", "msg_date": "Tue, 5 Nov 2019 18:35:54 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Etsuro,\n\nLe 05/11/2019 à 10:35, Etsuro Fujita a écrit :\n> Hi Gilles,\n>\n> On Sat, Nov 2, 2019 at 1:29 AM Gilles Darold <gilles@darold.net> wrote:\n>> As per the following code, t1 is a remote table through postgres_fdw:\n>> test=# BEGIN;\n>> BEGIN\n>> test=# SELECT * FROM t1;\n>> ...\n>>\n>> test=# PREPARE TRANSACTION 'gxid1';\n>> ERROR: cannot prepare a transaction that modified remote tables\n>> I have attached a patch to the documentation that adds remote tables to the list of objects where any operation prevent using a prepared transaction, currently it is just notified \"operations involving temporary tables or the session's temporary namespace\".\n> I'm not sure that's a good idea because file_fdw works well for\n> PREPARE TRANSACTION! How about adding a note about that to the\n> section of Transaction Management in the postgres_fdw documentation\n> like the attached?\n\n\nYou are right, read-only FDWs can be used. 
A second point in favor of\nyour remark is that it is the responsibility of the FDW implementation\nto throw an error when used with a prepared transaction, and I see that\nthis is not the case for all FDWs.\n\n\nI have attached a single patch that includes Etsuro Fujita's patch on\npostgres_fdw documentation and mine on the postgres_fdw error message,\nnoting that it comes from postgres_fdw. The modification about\nprepared transactions in the PostgreSQL documentation has been removed.\n\n\n-- \nGilles Darold", "msg_date": "Tue, 5 Nov 2019 12:41:41 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Gilles,\n\nOn Tue, Nov 5, 2019 at 8:41 PM Gilles Darold <gilles@darold.net> wrote:\n> I have attached a single patch that include Etsuro Fujita's patch on\n> postgres_fdw documentation and mine on postgres_fdw error message with\n> the precision that it comes from postgres_fdw. The modification about\n> prepared transaction in PostgreSQL documentation has been removed.\n\nThanks for the patch! I added the commit message. Does that make\nsense? If there are no objections, I'll apply the patch to all\nsupported branches.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 6 Nov 2019 12:57:10 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Wed, Nov 06, 2019 at 12:57:10PM +0900, Etsuro Fujita wrote:\n> Thanks for the patch! I added the commit message. Does that make\n> sense? If there are no objections, I'll apply the patch to all\n> supported branches.\n\n\"postgres_fdw foreign tables\" sounds weird to me. Could \"foreign\ntables using postgres_fdw\" be a better wording? 
I am wondering as\nwell if we should not split this information into two parts: one for\nthe actual error message which only mentions foreign tables, and a\nsecond one with a hint to mention that postgres_fdw has been used.\n\nWe could have more test cases with 2PC in this module, not necessarily\nthe responsibility of this patch, but while we're on it..\n--\nMichael", "msg_date": "Wed, 6 Nov 2019 13:13:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Michael-san,\n\nOn Wed, Nov 6, 2019 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> \"postgres_fdw foreign tables\" sounds weird to me. Could \"foreign\n> tables using postgres_fdw\" be a better wording? I am wondering as\n> well if we should not split this information into two parts: one for\n> the actual error message which only mentions foreign tables, and a\n> second one with a hint to mention that postgres_fdw has been used.\n\nWe use \"postgres_fdw foreign tables\" or \"postgres_fdw tables\" in\nrelease notes, so I thought it was OK to use that in error messages as\nwell. But actually, these wordings are not suitable for error\nmessages?\n\n> We could have more test cases with 2PC in this module, not necessarily\n> the responsibility of this patch, but while we're on it..\n\nAgreed. Will do.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 6 Nov 2019 15:12:04 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Wed, Nov 06, 2019 at 03:12:04PM +0900, Etsuro Fujita wrote:\n> On Wed, Nov 6, 2019 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> \"postgres_fdw foreign tables\" sounds weird to me. Could \"foreign\n>> tables using postgres_fdw\" be a better wording? 
I am wondering as\n>> well if we should not split this information into two parts: one for\n>> the actual error message which only mentions foreign tables, and a\n>> second one with a hint to mention that postgres_fdw has been used.\n> \n> We use \"postgres_fdw foreign tables\" or \"postgres_fdw tables\" in\n> release notes, so I thought it was OK to use that in error messages as\n> well. But actually, these wordings are not suitable for error\n> messages?\n\nIt is true that the docs of postgres_fdw use that and that it is used\nin some comments. Still, I found this wording a bit weird... If you\nthink that what you have is better, I am also fine to let you have the \nfinal word, so please feel free to ignore me :)\n--\nMichael", "msg_date": "Wed, 6 Nov 2019 16:35:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Michael-san,\n\nOn Wed, Nov 6, 2019 at 4:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Nov 06, 2019 at 03:12:04PM +0900, Etsuro Fujita wrote:\n> > On Wed, Nov 6, 2019 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> \"postgres_fdw foreign tables\" sounds weird to me. Could \"foreign\n> >> tables using postgres_fdw\" be a better wording? I am wondering as\n> >> well if we should not split this information into two parts: one for\n> >> the actual error message which only mentions foreign tables, and a\n> >> second one with a hint to mention that postgres_fdw has been used.\n> >\n> > We use \"postgres_fdw foreign tables\" or \"postgres_fdw tables\" in\n> > release notes, so I thought it was OK to use that in error messages as\n> > well. But actually, these wordings are not suitable for error\n> > messages?\n>\n> It is true that the docs of postgres_fdw use that and that it is used\n> in some comments. Still, I found this wording a bit weird.. 
If you\n> think that what you have is better, I am also fine to let you have the\n> final word, so please feel to ignore me :)\n\nI'd like to hear the opinions of others.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 6 Nov 2019 20:13:10 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hello.\n\nAt Wed, 6 Nov 2019 20:13:10 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> Hi Michael-san,\n> \n> On Wed, Nov 6, 2019 at 4:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Wed, Nov 06, 2019 at 03:12:04PM +0900, Etsuro Fujita wrote:\n> > > On Wed, Nov 6, 2019 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >> \"postgres_fdw foreign tables\" sounds weird to me. Could \"foreign\n> > >> tables using postgres_fdw\" be a better wording? I am wondering as\n> > >> well if we should not split this information into two parts: one for\n> > >> the actual error message which only mentions foreign tables, and a\n> > >> second one with a hint to mention that postgres_fdw has been used.\n> > >\n> > > We use \"postgres_fdw foreign tables\" or \"postgres_fdw tables\" in\n> > > release notes, so I thought it was OK to use that in error messages as\n> > > well. But actually, these wordings are not suitable for error\n> > > messages?\n> >\n> > It is true that the docs of postgres_fdw use that and that it is used\n> > in some comments. Still, I found this wording a bit weird.. If you\n> > think that what you have is better, I am also fine to let you have the\n> > final word, so please feel to ignore me :)\n> \n> I'd like to hear the opinions of others.\n\nFWIW, I see it a bit weird, too. And perhaps \"prepare\" should be in\nupper case letters. Plus, any operation including a SELECT on a\ntemporary table inhibits PREPARE TRANSACTION, but the same on\npostgres_fdw foreign tables is not. 
So the error message is rather\nwrong.\n\nA verbose alternative can be:\n\n\"cannot PREPARE a transaction that has modified data on foreign tables using postgres_fdw\"\n\nOr I think different style is OK here since the message is not of core\nbut of an extension.\n\n\"postgres_fdw doesn't support PREPARE of a transaction that has modified data on foreign tables\"\n\nThat could be shorter or simpler, of course.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 Nov 2019 16:10:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Kyotaro,\n\nLe 07/11/2019 à 08:10, Kyotaro Horiguchi a écrit :\n> Hello.\n>\n> At Wed, 6 Nov 2019 20:13:10 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n>> Hi Michael-san,\n>>\n>> On Wed, Nov 6, 2019 at 4:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>> On Wed, Nov 06, 2019 at 03:12:04PM +0900, Etsuro Fujita wrote:\n>>>> On Wed, Nov 6, 2019 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>>>> \"postgres_fdw foreign tables\" sounds weird to me. Could \"foreign\n>>>>> tables using postgres_fdw\" be a better wording? I am wondering as\n>>>>> well if we should not split this information into two parts: one for\n>>>>> the actual error message which only mentions foreign tables, and a\n>>>>> second one with a hint to mention that postgres_fdw has been used.\n>>>> We use \"postgres_fdw foreign tables\" or \"postgres_fdw tables\" in\n>>>> release notes, so I thought it was OK to use that in error messages as\n>>>> well. But actually, these wordings are not suitable for error\n>>>> messages?\n>>> It is true that the docs of postgres_fdw use that and that it is used\n>>> in some comments. Still, I found this wording a bit weird.. 
If you\n>>> think that what you have is better, I am also fine to let you have the\n>>> final word, so please feel to ignore me :)\n>> I'd like to hear the opinions of others.\n> FWIW, I see it a bit weird, too. And perhaps \"prepare\" should be in\n> upper case letters. Plus, any operation including a SELECT on a\n> temporary table inhibits PREAPRE TRANSACTION, but the same on a\n> postgres_fdw foreign tables is not. So the error message is rather\n> wrong.\n\n\nThis is not what I've experienced, see the first message of the thread.\nA SELECT on foreign table prevent to use PREPARE TRANSACTION like with\ntemporary table. Perhaps postgres_fdw should not throw an error with\nreadonly queries on foreign tables but I guess that it is pretty hard to\nknow especially on a later PREPARE event. But maybe I'm wrong, it is not\neasy every day :-) Can you share the SQL code you have executed to allow\nPREPARE transaction after a SELECT on a postgres_fdw foreign table?\n\n\n-- \nGilles Darold\n\n\n\n\n", "msg_date": "Thu, 7 Nov 2019 09:05:55 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Horiguchi-san,\n\nOn Thu, Nov 7, 2019 at 4:11 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 6 Nov 2019 20:13:10 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > On Wed, Nov 6, 2019 at 4:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > On Wed, Nov 06, 2019 at 03:12:04PM +0900, Etsuro Fujita wrote:\n> > > > On Wed, Nov 6, 2019 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >> \"postgres_fdw foreign tables\" sounds weird to me. Could \"foreign\n> > > >> tables using postgres_fdw\" be a better wording? 
I am wondering as\n> > > >> well if we should not split this information into two parts: one for\n> > > >> the actual error message which only mentions foreign tables, and a\n> > > >> second one with a hint to mention that postgres_fdw has been used.\n> > > >\n> > > > We use \"postgres_fdw foreign tables\" or \"postgres_fdw tables\" in\n> > > > release notes, so I thought it was OK to use that in error messages as\n> > > > well. But actually, these wordings are not suitable for error\n> > > > messages?\n> > >\n> > > It is true that the docs of postgres_fdw use that and that it is used\n> > > in some comments. Still, I found this wording a bit weird.. If you\n> > > think that what you have is better, I am also fine to let you have the\n> > > final word, so please feel to ignore me :)\n> >\n> > I'd like to hear the opinions of others.\n>\n> FWIW, I see it a bit weird, too.\n\nOnly two people complaining about the wording? Considering as well\nthat we use that wording in the docs and there were no complains about\nthat IIRC, I don't feel a need to change that, TBH.\n\n> And perhaps \"prepare\" should be in\n> upper case letters.\n\nSeems like a good idea.\n\n> Plus, any operation including a SELECT on a\n> temporary table inhibits PREAPRE TRANSACTION, but the same on a\n> postgres_fdw foreign tables is not. So the error message is rather\n> wrong.\n>\n> A verbose alternative can be:\n>\n> \"cannot PREPARE a transaction that has modified data on foreign tables using postgres_fdw\"\n\nI don't think that's better, because that doesn't address the original\nissue reported in this thread, as Gilles pointed out just before in\nhis email. See the commit message in the patch I posted.\n\nThanks for the comments!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 7 Nov 2019 17:20:07 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." 
}, { "msg_contents": "Hello Gilles. I made a silly mistake.\n\nAt Thu, 7 Nov 2019 09:05:55 +0100, Gilles Darold <gilles@darold.net> wrote in \n> > FWIW, I see it a bit weird, too. And perhaps \"prepare\" should be in\n> > upper case letters. Plus, any operation including a SELECT on a\n> > temporary table inhibits PREAPRE TRANSACTION, but the same on a\n> > postgres_fdw foreign tables is not. So the error message is rather\n> > wrong.\n> \n> \n> This is not what I've experienced, see the first message of the thread.\n> A SELECT on foreign table prevent to use PREPARE TRANSACTION like with\n> temporary table. Perhaps postgres_fdw should not throw an error with\n> readonly queries on foreign tables but I guess that it is pretty hard to\n> know especially on a later PREPARE event. But maybe I'm wrong, it is not\n> easy every day :-) Can you share the SQL code you have executed to allow\n> PREPARE transaction after a SELECT on a postgres_fdw foreign table?\n\nOooops!\n\nAfter reading this, I came to be afraid that I did something wrong,\nthen I rechecked actual behavior. Finally I found that SELECT * FROM\nforegn_tbl prohibits PREPARE TRANSACTION. I might have used a local\ntable instead of a foreign table at the previous trial.\n\nSorry for the mistake and thank you for pointing it out.\n\nSo my fixed proposals are:\n\n\"cannot PREPARE a transaction that has operated on foreign tables using postgres_fdw\"\n\nOr\n\n\"postgres_fdw doesn't support PREPARE of a transaction that has accessed foreign tables\"\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 Nov 2019 17:22:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hello, Fujita-san.\n\nAt Thu, 7 Nov 2019 17:20:07 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> Only two people complaining about the wording? 
Considering as well\n> that we use that wording in the docs and there were no complains about\n> that IIRC, I don't feel a need to change that, TBH.\n> \n> > And perhaps \"prepare\" should be in\n> > upper case letters.\n> \n> Seems like a good idea.\n> \n> > Plus, any operation including a SELECT on a\n> > temporary table inhibits PREAPRE TRANSACTION, but the same on a\n> > postgres_fdw foreign tables is not. So the error message is rather\n> > wrong.\n> >\n> > A verbose alternative can be:\n> >\n> > \"cannot PREPARE a transaction that has modified data on foreign tables using postgres_fdw\"\n> \n> I don't think that's better, because that doesn't address the original\n> issue reported in this thread, as Gilles pointed out just before in\n> his email. See the commit message in the patch I posted.\n\n\"modified\" is my mistake as in the just posted mail. But the most\nsignificant point in the previous mail is using \"foreign tables using\npostgres_fdw\" instead of \"postgres_fdw foreign tables\". And the other\npoint is using different message from temporary tables.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 Nov 2019 17:27:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "At Thu, 07 Nov 2019 17:27:47 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> \"modified\" is my mistake as in the just posted mail. But the most\n> significant point in the previous mail is using \"foreign tables using\n> postgres_fdw\" instead of \"postgres_fdw foreign tables\". 
And the other\n> point is using different message from temporary tables.\n\nI forgot to mention that the comment in XACT_EVENT_PRE_PREPARE\ncontains the same mistake and needs more or less the same fix.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 07 Nov 2019 17:31:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Horiguchi-san,\n\nOn Thu, Nov 7, 2019 at 5:28 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 7 Nov 2019 17:20:07 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > Only two people complaining about the wording? Considering as well\n> > that we use that wording in the docs and there were no complains about\n> > that IIRC, I don't feel a need to change that, TBH.\n\n> But the most\n> significant point in the previous mail is using \"foreign tables using\n> postgres_fdw\" instead of \"postgres_fdw foreign tables\".\n\nOK, but as I said above, I don't feel the need to change that. How\nabout leaving it for another patch to improve the wording in that\nmessage and/or the documentation if we really need to do it.\n\n> And the other\n> point is using different message from temporary tables.\n\nYou mean we should do s/prepare/PREPARE/?\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 7 Nov 2019 18:40:36 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Horiguchi-san,\n\nOn Thu, Nov 7, 2019 at 5:31 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I forgot to mention that the comment in XACT_EVENT_PRE_PREPARE\n> contains the same mistake and needs more or less the same fix.\n\nGood catch! 
How about rewriting \"We disallow remote transactions that\nmodified anything\" in the comment simply to \"We disallow any remote\ntransactions\" or something like that? Attached is an updated patch.\nIn the patch, I did s/prepare/PREPARE/ to the error message as well,\nas you proposed.\n\nThanks again for reviewing!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 7 Nov 2019 19:52:42 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Thu, Nov 07, 2019 at 06:40:36PM +0900, Etsuro Fujita wrote:\n> On Thu, Nov 7, 2019 at 5:28 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> At Thu, 7 Nov 2019 17:20:07 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n>>> Only two people complaining about the wording? Considering as well\n\nThat's like.. Half the folks participating to this thread ;)\n\n>>> that we use that wording in the docs and there were no complains about\n>>> that IIRC, I don't feel a need to change that, TBH.\n>> But the most\n>> significant point in the previous mail is using \"foreign tables using\n>> postgres_fdw\" instead of \"postgres_fdw foreign tables\".\n> \n> OK, but as I said above, I don't feel the need to change that. How\n> about leaving it for another patch to improve the wording in that\n> message and/or the documentation if we really need to do it.\n\nFine by me. If I were to put a number on that, I would be around +-0,\nso I don't have an actual objection with your point of view either.\n--\nMichael", "msg_date": "Fri, 8 Nov 2019 09:10:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." 
}, { "msg_contents": "Le 07/11/2019 à 11:52, Etsuro Fujita a écrit :\n> Horiguchi-san,\n>\n> On Thu, Nov 7, 2019 at 5:31 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> I forgot to mention that the comment in XACT_EVENT_PRE_PREPARE\n>> contains the same mistake and needs more or less the same fix.\n> Good catch! How about rewriting \"We disallow remote transactions that\n> modified anything\" in the comment simply to \"We disallow any remote\n> transactions\" or something like that? Attached is an updated patch.\n> In the patch, I did s/prepare/PREPARE/ to the error message as well,\n> as you proposed.\n>\n> Thanks again for reviewing!\n>\n> Best regards,\n> Etsuro Fujita\n\n\nLooks good for me,\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Fri, 8 Nov 2019 08:55:08 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Michael-san,\n\nOn Fri, Nov 8, 2019 at 9:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Nov 07, 2019 at 06:40:36PM +0900, Etsuro Fujita wrote:\n> > On Thu, Nov 7, 2019 at 5:28 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> At Thu, 7 Nov 2019 17:20:07 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> >>> Only two people complaining about the wording? Considering as well\n>\n> That's like.. Half the folks participating to this thread ;)\n\nRight...\n\n> >>> that we use that wording in the docs and there were no complains about\n> >>> that IIRC, I don't feel a need to change that, TBH.\n> >> But the most\n> >> significant point in the previous mail is using \"foreign tables using\n> >> postgres_fdw\" instead of \"postgres_fdw foreign tables\".\n> >\n> > OK, but as I said above, I don't feel the need to change that. 
How\n> > about leaving it for another patch to improve the wording in that\n> > message and/or the documentation if we really need to do it.\n>\n> Fine by me. If I were to put a number on that, I would be around +-0,\n> so I don't have an actual objection with your point of view either.\n\nOK, pushed as-is. Thanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 8 Nov 2019 17:19:34 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Gilles,\n\nOn Fri, Nov 8, 2019 at 4:55 PM Gilles Darold <gilles@darold.net> wrote:\n> Le 07/11/2019 à 11:52, Etsuro Fujita a écrit :\n> > On Thu, Nov 7, 2019 at 5:31 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> I forgot to mention that the comment in XACT_EVENT_PRE_PREPARE\n> >> contains the same mistake and needs more or less the same fix.\n> > Good catch! How about rewriting \"We disallow remote transactions that\n> > modified anything\" in the comment simply to \"We disallow any remote\n> > transactions\" or something like that? Attached is an updated patch.\n> > In the patch, I did s/prepare/PREPARE/ to the error message as well,\n> > as you proposed.\n\n> Looks good for me,\n>\n> --\n> Gilles Darold\n>\n\n\n", "msg_date": "Fri, 8 Nov 2019 17:20:33 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." 
}, { "msg_contents": "Hi Gilles,\n\nSorry, I have sent an unfinished email.\n\nOn Fri, Nov 8, 2019 at 5:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Nov 8, 2019 at 4:55 PM Gilles Darold <gilles@darold.net> wrote:\n> > Le 07/11/2019 à 11:52, Etsuro Fujita a écrit :\n> > > On Thu, Nov 7, 2019 at 5:31 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > >> I forgot to mention that the comment in XACT_EVENT_PRE_PREPARE\n> > >> contains the same mistake and needs more or less the same fix.\n> > > Good catch! How about rewriting \"We disallow remote transactions that\n> > > modified anything\" in the comment simply to \"We disallow any remote\n> > > transactions\" or something like that? Attached is an updated patch.\n> > > In the patch, I did s/prepare/PREPARE/ to the error message as well,\n> > > as you proposed.\n>\n> > Looks good for me,\n\nPushed after modifying the commit message a bit. Thanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 8 Nov 2019 17:25:52 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Fri, Nov 08, 2019 at 05:25:52PM +0900, Etsuro Fujita wrote:\n> Pushed after modifying the commit message a bit. Thanks!\n\nShould we have more tests for 2PC then?\n--\nMichael", "msg_date": "Fri, 8 Nov 2019 18:05:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Hi Michael,\n\n\nLe 08/11/2019 à 10:05, Michael Paquier a écrit :\n> On Fri, Nov 08, 2019 at 05:25:52PM +0900, Etsuro Fujita wrote:\n>> Pushed after modifying the commit message a bit. Thanks!\n> Should we have more tests for 2PC then?\n> --\n> Michael\n\n\nI don't think so. The support or not of 2PC is on foreign data wrapper\nside. 
In postgres_fdw contrib the error for use with 2PC is not part of\nthe test but it will be thrown anyway. I guess that a test will be\nvaluable only if there is support for readonly query.\n\n\n-- \nGilles Darold\n\n\n\n\n", "msg_date": "Fri, 8 Nov 2019 10:19:01 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Fri, Nov 08, 2019 at 10:19:01AM +0100, Gilles Darold wrote:\n> I don't think so. The support or not of 2PC is on foreign data wrapper\n> side. In postgres_fdw contrib the error for use with 2PC is not part of\n> the test but it will be thrown anyway. I guess that a test will be\n> valuable only if there is support for readonly query.\n\nThat's what I call a case for negative testing. We don't allow 2PC to\nbe used so there is a point in having a test to make sure of that.\nThis way, if the code in this area is refactored or changed, we still\nmake sure that 2PC is correctly prevented. My suggestion is to close\nthis gap. One point here is that this requires an alternate output\nfile because of max_prepared_transactions and there is no point in\ncreating one with all the tests of postgres_fdw in a single file as we\nhave now as it would create 8k lines of expected file bloat, so it\nwould be better to split the tests first. My 2c.\n--\nMichael", "msg_date": "Sat, 9 Nov 2019 10:22:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "Le 09/11/2019 à 02:22, Michael Paquier a écrit :\n> On Fri, Nov 08, 2019 at 10:19:01AM +0100, Gilles Darold wrote:\n>> I don't think so. The support or not of 2PC is on foreign data wrapper\n>> side. In postgres_fdw contrib the error for use with 2PC is not part of\n>> the test but it will be thrown anyway. 
I guess that a test will be\n>> valuable only if there is support for readonly query.\n> That's what I call a case for negative testing. We don't allow 2PC to\n> be used so there is a point in having a test to make sure of that.\n> This way, if the code in this area is refactored or changed, we still\n> make sure that 2PC is correctly prevented. My suggestion is to close\n> this gap. One point here is that this requires an alternate output\n> file because of max_prepared_transactions and there is no point in\n> creating one with all the tests of postgres_fdw in a single file as we\n> have now as it would create 8k lines of expected file bloat, so it\n> would be better to split the tests first. My 2c.\n> --\n> Michael\n\n\nHi Michael, it looks that a separate test is not required at least for\nthis test. Here is a patch that enable the test in\ncontrib/postgres_fdw/, expected output:\n\n\n -- Make sure that 2PC is correctly prevented\n BEGIN;\n SELECT count(*) FROM ft1;\n  count\n -------\n    822\n (1 row)\n\n -- Must throw an error\n PREPARE TRANSACTION 'fdw_tpc';\n ERROR:  cannot PREPARE a transaction that has operated on\n postgres_fdw foreign tables\n ROLLBACK;\n WARNING:  there is no transaction in progress\n\n\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Mon, 11 Nov 2019 16:43:18 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": true, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Mon, Nov 11, 2019 at 04:43:18PM +0100, Gilles Darold wrote:\n> Hi Michael, it looks that a separate test is not required at least for\n> this test. Here is a patch that enable the test in\n> contrib/postgres_fdw/, expected output:\n\nIndeed, thanks for looking. I thought that the callback was called\nafter checking for max_prepared_transaction, but that's not the case.\nSo let's add at least a test case. 
Any objections?\n--\nMichael", "msg_date": "Tue, 12 Nov 2019 09:35:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." }, { "msg_contents": "On Tue, Nov 12, 2019 at 09:35:03AM +0900, Michael Paquier wrote:\n> Indeed, thanks for looking. I thought that the callback was called\n> after checking for max_prepared_transaction, but that's not the case.\n> So let's add at least a test case. Any objections?\n\nOkay, done.\n--\nMichael", "msg_date": "Wed, 13 Nov 2019 13:53:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH][DOC] Fix for PREPARE TRANSACTION doc and postgres_fdw\n message." } ]
[ { "msg_contents": "It would be useful to have CREATE INDEX CONCURRENTLY be ignored by\nvacuuming's OldestXmin. Frequently in OLTP scenarios, CIC transactions\nare severely disruptive because they are the only long-running\ntransactions in the system, and VACUUM has to keep rows only for their\nsake, pointlessly. The motivation for this change seems well justified\nto me (but feel free to argue if you think otherwise).\n\nSo the question is how to implement this. Here's a very small patch for\nit:\n\ndiff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c\nindex 374e2d0efe..9081dfe384 100644\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -532,6 +532,12 @@ DefineIndex(Oid relationId,\n \t\t\t\t errmsg(\"cannot use more than %d columns in an index\",\n \t\t\t\t\t\tINDEX_MAX_KEYS)));\n \n+\tif (stmt->concurrent && !IsTransactionBlock())\n+\t{\n+\t\tAssert(GetCurrentTransactionIdIfAny() == InvalidTransactionId);\n+\t\tMyPgXact->vacuumFlags |= PROC_IN_VACUUM;\n+\t}\n+\n \t/*\n \t * Only SELECT ... FOR UPDATE/SHARE are allowed while doing a standard\n \t * index build; but for concurrent builds we allow INSERT/UPDATE/DELETE\n\nThere's an obvious flaw, which is that this doesn't consider expressions\nin partial indexes and column definitions. That's moderately easy to\nfix. But there are less obvious flaws, such as: is it possible that\nCIC's xmin is required for other reasons? (such as access to catalogs,\nwhich get cleaned by concurrent vacuum) If it is, can we fix that\nproblem by keeping track of a separate Xmin for catalog vacuuming\npurposes? (We already have catalog_xmin for replication slots, so this\nis not completely ridiculous I think ...)\n\n-- \nÁlvaro Herrera\n\n\n", "msg_date": "Fri, 1 Nov 2019 17:33:10 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "excluding CREATE INDEX CONCURRENTLY from OldestXmin" } ]
[ { "msg_contents": "Hello,\n\nI came across a surprising behavior when upgrading our PostgreSQL 10 DBs that\nalso serve as a destination for the logical replication of some reference tables.\n\npg_upgrade turns off all subscriptions on the cluster and doesn't turn them on.\nSpecifically, it creates them with connect = false, as discussed at the thread\nstarting at\nhttps://www.postgresql.org/message-id/e4fbfad5-c6ac-fd50-6777-18c84b34eb2f@2ndquadrant.com\n\nUnfortunately, I can't find any mention of this in the docs of pg_upgrade, so\nI am at least willing to add those if we can't resolve this in a more automated\nway (read below).\n\nSince we can determine those subscriptions that were active on the old cluster\nimmediately before the upgrade, we could collect those and emit a script at\nthe end of the pg_upgrade to turn them on, similar to the one pg_upgrade\nproduces for the analyze.\n\nThere are two options when re-enabling the subscriptions: either continue from\nthe current position (possibly discarding all changes that happened while the\ndatabase was offline), or truncate the destination tables and copy the data again.\nThe first one corresponds to setting copy_data=false when doing refresh publication.\nThe second one is a combination of a prior truncate + refresh publication with\ncopy_data=true and doesn't seem like an action that is performed in a\nsingle transaction. Would it make sense to add a copy_truncate flag, false\nby default, that would instruct the logical replication worker to truncate the\ndestination table immediately before resyncing it from the origin?\n\nRegards,\nOleksii\n\n\n\n", "msg_date": "Sat, 2 Nov 2019 17:55:25 +0100", "msg_from": "Oleksii Kliukin <alexk@hintbits.com>", "msg_from_op": true, "msg_subject": "pg_upgrade and subscriptions" }, { "msg_contents": "Em sáb., 2 de nov. 
de 2019 às 13:55, Oleksii Kliukin\n<alexk@hintbits.com> escreveu:\n>\n> I came across a surprising behavior when upgrading our PostgreSQL 10 DBs that\n> also serve as a destination for the logical replication of some reference tables.\n>\n> pg_upgrade turns off all subscriptions on the cluster and doesn't turn them on.\n> Specifically, it creates them with connect = false, as discussed at the thread\n> starting at\n> https://www.postgresql.org/message-id/e4fbfad5-c6ac-fd50-6777-18c84b34eb2f@2ndquadrant.com\n>\n> Unfortunately, I can't find any mention of this in the docs of pg_upgrade, so\n> I am at leas willing to add those if we can't resolve this in a more automated\n> way (read below).\n>\nIt is documented in step 13 \"Post-upgrade processing\". In this case,\nwe need to provide a new script for subscriptions.\n\n> Since we can determine those subscriptions that were active on the old cluster\n> immediately before the upgrade, we could collect those and emit a script at\n> the end of the pg_upgrade to turn them on, similar to the one pg_upgrade\n> produces for the analyze.\n>\n+1. It seems to be an oversight (missing feature) that nobody bothers to fix it.\n\n> There are two options when re-enabling the subscriptions: either continue from\n> the current position (possibly discarding all changes that happened while the\n> database was offline), or truncate the destination tables and copy the data again.\n> The first one corresponds to setting copy_data=false when doing refresh publication.\n> The second one is a combination of a prior truncate + refresh publication with\n> copy_data=true and doesn't seem like an action that is performed in a\n> single transaction. Would it make sense to add a copy_truncate flag, false\n> by default, that would instruct the logical replication worker to truncate the\n> destination table immediately before resyncing it from the origin?\n>\nIt seems the common case is the former. 
However, I don't think people\nwant to \"discard all the changes that happen while database was\noffline\" because the slot will remain in the publisher and we are\nupgrading the subscriber. Since we are providing a script for the\ncommon case, you are free to ignore it and create a new script that\nfulfill your requirements.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n", "msg_date": "Fri, 8 Nov 2019 11:38:53 -0300", "msg_from": "Euler Taveira <euler@timbira.com.br>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade and subscriptions" } ]
[ { "msg_contents": "Hi hackers,\n\nI just noticed that when contrib/seg was converted to V1 calling\nconvention (commit 389bb2818f4), the PG_GETARG_SEG_P() macro got defined\nin terms of PG_GETARG_POINTER(). But it itself calls DatumGetPointer(),\nso shouldn't it be using PG_GETARG_DATUM()?\n\nAttached is a patch that fixes it, and brings it in line with all the\nother PG_GETARG_FOO_P() macros.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen", "msg_date": "Sat, 02 Nov 2019 23:14:47 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "[PATCH] contrib/seg: Fix PG_GETARG_SEG_P definition" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> I just noticed that when contrib/seg was converted to V1 calling\n> convention (commit 389bb2818f4), the PG_GETARG_SEG_P() macro got defined\n> in terms of PG_GETARG_POINTER(). But it itself calls DatumGetPointer(),\n> so shouldn't it be using PG_GETARG_DATUM()?\n\nYup, I agree. Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Nov 2019 10:58:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] contrib/seg: Fix PG_GETARG_SEG_P definition" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n>> I just noticed that when contrib/seg was converted to V1 calling\n>> convention (commit 389bb2818f4), the PG_GETARG_SEG_P() macro got defined\n>> in terms of PG_GETARG_POINTER(). But it itself calls DatumGetPointer(),\n>> so shouldn't it be using PG_GETARG_DATUM()?\n>\n> Yup, I agree. 
Pushed.\n\nThanks!\n\n> \t\t\tregards, tom lane\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n", "msg_date": "Mon, 04 Nov 2019 11:30:23 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] contrib/seg: Fix PG_GETARG_SEG_P definition" }, { "msg_contents": "Hi,\n\nOn 2019-11-04 11:30:23 +0000, Dagfinn Ilmari Manns�ker wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> >> I just noticed that when contrib/seg was converted to V1 calling\n> >> convention (commit 389bb2818f4), the PG_GETARG_SEG_P() macro got defined\n> >> in terms of PG_GETARG_POINTER(). But it itself calls DatumGetPointer(),\n> >> so shouldn't it be using PG_GETARG_DATUM()?\n> >\n> > Yup, I agree. Pushed.\n> \n> Thanks!\n\nThanks both of you.\n\n- Andres\n\n\n", "msg_date": "Mon, 4 Nov 2019 10:01:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] contrib/seg: Fix PG_GETARG_SEG_P definition" } ]
[ { "msg_contents": "While monitoring pg_stat_subscription, I noticed that last_msg_send_time\nwas usually NULL, which doesn't make sense. Why would we have a message,\nbut not know when it was sent?\n\nLooking in src/backend/replication/walsender.c, there is this:\n\n /* output previously gathered data in a CopyData packet */\n pq_putmessage_noblock('d', ctx->out->data, ctx->out->len);\n\n /*\n * Fill the send timestamp last, so that it is taken as late as\npossible.\n * This is somewhat ugly, but the protocol is set as it's already used\nfor\n * several releases by streaming physical replication.\n */\n resetStringInfo(&tmpbuf);\n now = GetCurrentTimestamp();\n pq_sendint64(&tmpbuf, now);\n memcpy(&ctx->out->data[1 + sizeof(int64) + sizeof(int64)],\n tmpbuf.data, sizeof(int64));\n\nFilling out the timestamp after the message has already been sent is taking\n\"as late as possible\" a little too far. It results in every message having\na zero timestamp, other than keep-alives which go through a different path.\n\nRe-ordering the code blocks as in the attached seems to fix it. But I have\nto wonder, if this has been broken from the start and no one ever noticed,\nis this even valuable information to relay in the first place? We could\njust take the column out of the view, and not bother calling\nGetCurrentTimestamp() for each message.\n\nCheers,\n\nJeff", "msg_date": "Sat, 2 Nov 2019 21:54:54 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "Logical replication wal sender timestamp bug" }, { "msg_contents": "On Sat, Nov 02, 2019 at 09:54:54PM -0400, Jeff Janes wrote:\n> While monitoring pg_stat_subscription, I noticed that last_msg_send_time\n> was usually NULL, which doesn't make sense. Why would we have a message,\n> but not know when it was sent?\n\nSo... The timestamp is received and stored in LogicalRepApplyLoop()\nwith send_time when a 'w' message is received in the subscription\ncluster. 
And it gets computed with a two-phase process:\n- WalSndPrepareWrite() reserves the space in the message for the\ntimestamp.\n- WalSndWriteData() computes the timestamp in the reserved space once\nthe write message is computed and ready to go.\n\n> Filling out the timestamp after the message has already been sent is taking\n> \"as late as possible\" a little too far. It results in every message having\n> a zero timestamp, other than keep-alives which go through a different path.\n\nIt seems to me that you are right: the timestamp is computed too\nlate.\n\n> Re-ordering the code blocks as in the attached seems to fix it. But I have\n> to wonder, if this has been broken from the start and no one ever noticed,\n> is this even valuable information to relay in the first place? We could\n> just take the column out of the view, and not bother calling\n> GetCurrentTimestamp() for each message.\n\nI think that there are use cases for such monitoring capabilities,\nsee for example 7fee252. So I would rather keep it.\n--\nMichael", "msg_date": "Tue, 5 Nov 2019 13:19:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Logical replication wal sender timestamp bug" }, { "msg_contents": "On Tue, Nov 05, 2019 at 01:19:37PM +0900, Michael Paquier wrote:\n> On Sat, Nov 02, 2019 at 09:54:54PM -0400, Jeff Janes wrote:\n>> Filling out the timestamp after the message has already been sent is taking\n>> \"as late as possible\" a little too far. It results in every message having\n>> a zero timestamp, other than keep-alives which go through a different path.\n> \n> It seems to me that you are right: the timestamp is computed too\n> late.\n\nIt is easy enough to reproduce the problem by setting for example\nlogical replication between two nodes and pgbench to produce some\nload and then monitor pg_stat_subscription periodically. 
However, it\nis a problem since logical decoding has been introduced (5a991ef) so\ncommitted your fix down to 9.4.\n--\nMichael", "msg_date": "Wed, 6 Nov 2019 16:15:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Logical replication wal sender timestamp bug" }, { "msg_contents": "On Wed, Nov 6, 2019 at 2:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Nov 05, 2019 at 01:19:37PM +0900, Michael Paquier wrote:\n> > On Sat, Nov 02, 2019 at 09:54:54PM -0400, Jeff Janes wrote:\n> >> Filling out the timestamp after the message has already been sent is\n> taking\n> >> \"as late as possible\" a little too far. It results in every message\n> having\n> >> a zero timestamp, other than keep-alives which go through a different\n> path.\n> >\n> > It seems to me that you are right: the timestamp is computed too\n> > late.\n>\n> It is easy enough to reproduce the problem by setting for example\n> logical replication between two nodes and pgbench to produce some\n> load and then monitor pg_stat_subscription periodically. However, it\n> is a problem since logical decoding has been introduced (5a991ef) so\n> committed your fix down to 9.4.\n>\n\nThanks. This column looks much more reasonable now.\n\nCheers,\n\nJeff\n\nOn Wed, Nov 6, 2019 at 2:15 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Nov 05, 2019 at 01:19:37PM +0900, Michael Paquier wrote:\n> On Sat, Nov 02, 2019 at 09:54:54PM -0400, Jeff Janes wrote:\n>> Filling out the timestamp after the message has already been sent is taking\n>> \"as late as possible\" a little too far.  
It results in every message having\n>> a zero timestamp, other than keep-alives which go through a different path.\n> \n> It seems to me that you are right: the timestamp is computed too\n> late.\n\nIt is easy enough to reproduce the problem by setting for example\nlogical replication between two nodes and pgbench to produce some\nload and then monitor pg_stat_subscription periodically.  However, it\nis a problem since logical decoding has been introduced (5a991ef) so\ncommitted your fix down to 9.4.Thanks.  This column looks much more reasonable now.Cheers,Jeff", "msg_date": "Fri, 8 Nov 2019 15:01:35 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical replication wal sender timestamp bug" } ]
[ { "msg_contents": "Hello Folks\n\nI would like to update unaccent.rules file to support Arabic letters. so could someone help me or tell me how could I add such contribution. I attached the file including the modifications, only the last 4 lines.\n\nthank you", "msg_date": "Sun, 3 Nov 2019 06:02:19 +0000", "msg_from": "kerbrose khaled <kerbrose@hotmail.com>", "msg_from_op": true, "msg_subject": "updating unaccent.rules for Arabic letters" }, { "msg_contents": "Hello Folks\n\nI would like to update unaccent.rules file to support Arabic letters. so could someone help me or tell me how could I add such contribution. I attached the file including the modifications, only the last 4 lines.\n\nthank you", "msg_date": "Sun, 3 Nov 2019 06:05:25 +0000", "msg_from": "kerbrose khaled <kerbrose@hotmail.com>", "msg_from_op": true, "msg_subject": "updating unaccent.rules for Arabic letters" }, { "msg_contents": "kerbrose khaled <kerbrose@hotmail.com> writes:\n> I would like to update unaccent.rules file to support Arabic letters. so could someone help me or tell me how could I add such contribution. I attached the file including the modifications, only the last 4 lines.\n\nHi! I've got no objection to including Arabic in the set of covered\nlanguages, but handing us a new unaccent.rules file isn't the way to\ndo it, because that's a generated file. The adjacent script\ngenerate_unaccent_rules.py generates it from the official Unicode\nsource data (see comments in that script). 
What we need, ultimately,\nis a patch to that script so it will emit these additional translations.\nPast commits that might be useful sources of inspiration include\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=456e3718e7b72efe4d2639437fcbca2e4ad83099\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=5e8d670c313531c0dca245943fb84c94a477ddc4\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=ec0a69e49bf41a37b5c2d6f6be66d8abae00ee05\n\nIf you're not good with Python, maybe you could just explain to us\nhow to recognize these characters from Unicode character properties.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Nov 2019 11:12:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: updating unaccent.rules for Arabic letters" }, { "msg_contents": "kerbrose khaled wrote:\n\n> I would like to update unaccent.rules file to support Arabic letters. so\n> could someone help me or tell me how could I add such contribution. I\n> attached the file including the modifications, only the last 4 lines.\n\nThe Arabic letters are found in the Unicode block U+0600 to U+06FF \n(https://www.fileformat.info/info/unicode/block/arabic/list.htm)\nThere has been no coverage of this block until now by the unaccent\nmodule. Since Arabic uses several diacritics [1] , it would be best to\nfigure out all the transliterations that should go in and let them in\none go (plus coding that in the Python script).\n\nThe canonical way to unaccent is normally to apply a Unicode\ntransformation: NFC -> NFD and remove the non-spacing marks.\n\nI've tentatively did that with each codepoint in the 0600-06FF block\nin SQL with icu_transform in icu_ext [2], and it produces the\nattached result, with 60 (!) 
entries, along with Unicode names for\nreadability.\n\nDoes that make sense to people who know Arabic?\n\nFor the record, here's the query:\n\nWITH block(cp) AS (select * FROM generate_series(x'600'::int,x'6ff'::int) AS\ncp),\n dest AS (select cp, icu_transform(chr(cp), 'any-NFD;[:nonspacing mark:]\nany-remove; any-NFC') AS unaccented FROM block)\nSELECT\n chr(cp) as \"src\",\n icu_transform(chr(cp), 'Name') as \"srcName\",\n dest.unaccented as \"dest\",\n icu_transform(dest.unaccented, 'Name') as \"destName\"\nFROM dest\nWHERE chr(cp) <> dest.unaccented;\n\n\n[1] https://en.wikipedia.org/wiki/Arabic_diacritics\n[2] https://github.com/dverite/icu_ext#icu_transform\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite", "msg_date": "Mon, 04 Nov 2019 18:41:59 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: updating unaccent.rules for Arabic letters" } ]
[ { "msg_contents": "Hello!\n\nRecently I got few times into situation where I was trying to find out what\nis blocking DELETE queries. Running EXPLAIN (even VERBOSE one) wasn't\nuseful, since the reason was slow trigger (missing index on foreign key\ncolumn). I had to create testing entry and run EXPLAIN ANALYZE DELETE to\nget this information.\n\nIt will be really valuable for me to show triggers in EXPLAIN query since\nit will make clear for me there will be any trigger \"activated\" during\nexecution of DELETE query and that can be the reason for slow DELETE.\n\nI have seen initial discussion at\nhttps://www.postgresql.org/message-id/flat/20693.1111732761%40sss.pgh.pa.us\nto show time spent in triggers in EXPLAIN ANALYZE including quick\ndiscussion to possibly show triggers during EXPLAIN. Anyway since it\ndoesn't show any additional cost and just inform about the possibilities, I\nstill consider this feature useful. This is probably implementation of idea\nmentioned at\nhttps://www.postgresql.org/message-id/21221.1111736869%40sss.pgh.pa.us by\nTom Lane.\n\nAfter initial discussion with Pavel Stěhule and Tomáš Vondra at czech\npostgresql maillist (\nhttps://groups.google.com/forum/#!topic/postgresql-cz/Dq1sT7huVho) I was\nable to prepare initial version of this patch. I have added EXPLAIN option\ncalled TRIGGERS enabled by default.There's already autoexplain property for\nthis. I understand it is not possible to show only triggers which will be\nreally activated unless query is really executed. 
EXPLAIN ANALYZE remains\nuntouched with this patch.\n\n- patch with examples can be found at\nhttps://github.com/simi/postgres/pull/2\n- DIFF format https://github.com/simi/postgres/pull/2.diff\n- PATCH format (also attached) https://github.com/simi/postgres/pull/2.patch\n\nAll regression tests passed with this change locally on latest git master.\nI would like to cover this patch with more regression tests, but I wasn't\nsure where to place them, since there's no \"EXPLAIN\" related test \"group\".\nIs \"src/test/regress/sql/triggers.sql\" the best place to add tests related\nto this change?\n\nPS: This is my first try to contribute to postgresql codebase. The quality\nof patch is probably not the best, but I will be more than happy to do any\nrequested change if needed.\n\nRegards,\nJosef Šimánek", "msg_date": "Sun, 3 Nov 2019 18:25:45 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Include triggers in EXPLAIN" }, { "msg_contents": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> Recently I got few times into situation where I was trying to find out what\n> is blocking DELETE queries. Running EXPLAIN (even VERBOSE one) wasn't\n> useful, since the reason was slow trigger (missing index on foreign key\n> column). I had to create testing entry and run EXPLAIN ANALYZE DELETE to\n> get this information.\n\n> It will be really valuable for me to show triggers in EXPLAIN query since\n> it will make clear for me there will be any trigger \"activated\" during\n> execution of DELETE query and that can be the reason for slow DELETE.\n\nI don't really see the point of this patch? 
You do get the trigger\ntimes during EXPLAIN ANALYZE, and I don't believe that a plain EXPLAIN\nis going to have full information about what triggers might fire or\nnot fire.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Nov 2019 16:49:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Include triggers in EXPLAIN" }, { "msg_contents": "Hello Tom.\n\nThanks for quick response. As I was testing this feature it shows all\n\"possible\" triggers to be executed running given query. The benefit of\nhaving this information in EXPLAIN as well is you do not need to execute\nthe query (as EXPLAIN ANALYZE does). My usecase is to take a look at query\nbefore it is executed to get some idea about the plan with EXPLAIN.\n\nDo you have idea about some case where actual trigger will be missing in\nEXPLAIN with current implementation, but will be present in EXPLAIN\nANALYZE? I can take a look if there's any way how to handle those cases as\nwell.\n\nne 3. 11. 2019 v 22:49 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> > Recently I got few times into situation where I was trying to find out\n> what\n> > is blocking DELETE queries. Running EXPLAIN (even VERBOSE one) wasn't\n> > useful, since the reason was slow trigger (missing index on foreign key\n> > column). I had to create testing entry and run EXPLAIN ANALYZE DELETE to\n> > get this information.\n>\n> > It will be really valuable for me to show triggers in EXPLAIN query since\n> > it will make clear for me there will be any trigger \"activated\" during\n> > execution of DELETE query and that can be the reason for slow DELETE.\n>\n> I don't really see the point of this patch? 
You do get the trigger\n> times during EXPLAIN ANALYZE, and I don't believe that a plain EXPLAIN\n> is going to have full information about what triggers might fire or\n> not fire.\n>\n> regards, tom lane\n>\n", "msg_date": "Mon, 4 Nov 2019 10:35:25 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Include triggers in EXPLAIN" }, { "msg_contents": "Hi,\n\n(minor note - on PG lists the style is to quote in-line and trip)\n\nOn 2019-11-04 10:35:25 +0100, Josef Šimánek wrote:\n> Thanks for quick response. As I was testing this feature it shows all\n> \"possible\" triggers to be executed running given query. The benefit of\n> having this information in EXPLAIN as well is you do not need to execute\n> the query (as EXPLAIN ANALYZE does). My usecase is to take a look at query\n> before it is executed to get some idea about the plan with EXPLAIN.\n\nI can actually see some value in additional information here, but I'd\nprobably want to change the format a bit. When explicitly desired (or\nperhaps just in verbose mode?), I see value in counting the number of\ntriggers we know about that need to be checked, how many were excluded\non the basis of the trigger's WHEN clause etc.\n\n\n> Do you have idea about some case where actual trigger will be missing in\n> EXPLAIN with current implementation, but will be present in EXPLAIN\n> ANALYZE? I can take a look if there's any way how to handle those cases as\n> well.\n\nAny triggers that are fired because of other, listed, triggers causing\nother changes. E.g. a logging trigger that inserts into a log table -\nEXPLAIN, without ANALYZE, doesn't have a way of knowing about that.\n\nAnd before you say that sounds like a niche issue - it's not in my\nexperience. 
Forgetting the necessary indexes two or three foreign keys\ndown a CASCADE chain seems to be more common than doing so for tables\ndirectly \"linked\" with busy ones.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 4 Nov 2019 10:00:28 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Include triggers in EXPLAIN" } ]